Updates from: 06/28/2021 03:04:35
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
You can add and modify redirect URIs in your registered applications at any time
The web app sample uses in-memory token cache serialization. This implementation is great in samples. It's also good in production applications, provided you don't mind if the token cache is lost when the web app is restarted.
-For production environment, we recommend you use a distributed memory cache. For example, Redis cache, NCache, or a SQL Server cache. For details about the distributed memory cache implementations, see [Token cache for a web app](../active-directory/develop/msal-net-token-cache-serialization.md#token-cache-for-a-web-app-confidential-client-application).
+For production environments, we recommend that you use a distributed memory cache, such as a Redis cache, NCache, or a SQL Server cache. For details about the distributed memory cache implementations, see [Token cache serialization](../active-directory/develop/msal-net-token-cache-serialization.md).
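As an illustrative sketch only (not part of the change above), a distributed token cache could be wired up in `Startup.ConfigureServices` with Microsoft.Identity.Web. The `AzureAdB2C` section name, the scope, and the Redis settings are assumptions, and the Redis option requires the Microsoft.Extensions.Caching.StackExchangeRedis package:

```csharp
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public void ConfigureServices(IServiceCollection services)
{
    // Sign in users through Azure AD B2C and acquire tokens for the downstream web API.
    services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
        .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAdB2C"))
        .EnableTokenAcquisitionToCallDownstreamApi(new[] { "https://contoso.onmicrosoft.com/api/tasks.read" })
        .AddDistributedTokenCaches();

    // Back the distributed token cache with Redis (any IDistributedCache implementation works).
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "localhost";
        options.InstanceName = "B2CTokenCache";
    });
}
```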
## Next steps
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui-with-html.md
Previously updated : 04/19/2021 Last updated : 06/27/2021
When using your own HTML and CSS files to customize the UI, host your UI content
You localize your HTML content by enabling [language customization](language-customization.md) in your Azure AD B2C tenant. Enabling this feature allows Azure AD B2C to forward the OpenID Connect parameter `ui_locales` to your endpoint. Your content server can use this parameter to provide language-specific HTML pages.
+> [!NOTE]
+> Azure AD B2C doesn't pass OpenID Connect parameters, such as `ui_locales`, to the [Exception pages](page-layout.md#exception-page-globalexception).
++ Content can be pulled from different places based on the locale that's used. In your CORS-enabled endpoint, you set up a folder structure to host content for specific languages. You'll call the right one if you use the wildcard value `{Culture:RFC5646}`. For example, your custom page URI might look like:
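As a hedged illustration (the storage account, container, and file names are hypothetical), a wildcard-based custom page URI could look like `https://contoso.blob.core.windows.net/{Culture:RFC5646}/myHTML/unified.html`. At runtime, Azure AD B2C replaces `{Culture:RFC5646}` with the current locale, such as `fr`, so the French content is served from the `fr` folder.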
active-directory-b2c Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/customize-ui.md
Previously updated : 05/26/2021 Last updated : 06/27/2021
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/json-transformations.md
Previously updated : 03/04/2021 Last updated : 06/27/2021
In the following example, the claims transformation extracted the `emailAddress`
- Output claims: - **extractedClaim**: someone@example.com
+The GetClaimFromJson claims transformation gets a single element from JSON data. In the preceding example, it extracted the emailAddress. To get the displayName, create another claims transformation. For example:
+
+```xml
+<ClaimsTransformation Id="GetDispalyNameClaimFromJson" TransformationMethod="GetClaimFromJson">
+ <InputClaims>
+ <InputClaim ClaimTypeReferenceId="customUserData" TransformationClaimType="inputJson" />
+ </InputClaims>
+ <InputParameters>
+ <InputParameter Id="claimToExtract" DataType="string" Value="displayName" />
+ </InputParameters>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="displayName" TransformationClaimType="extractedClaim" />
+ </OutputClaims>
+</ClaimsTransformation>
+```
+
+- Input claims:
+ - **inputJson**: {"emailAddress": "someone@example.com", "displayName": "Someone"}
+- Input parameter:
+ - **claimToExtract**: displayName
+- Output claims:
+ - **extractedClaim**: Someone
## GetClaimsFromJsonArray
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/relyingparty.md
Previously updated : 05/26/2021 Last updated : 06/27/2021
The **SingleSignOn** element contains the following attributes:
| Attribute | Required | Description |
| --------- | -------- | ----------- |
| Scope | Yes | The scope of the single sign-on behavior. Possible values: `Suppressed`, `Tenant`, `Application`, or `Policy`. The `Suppressed` value indicates that the behavior is suppressed, and the user is always prompted for an identity provider selection. The `Tenant` value indicates that the behavior is applied to all policies in the tenant. For example, a user navigating through two policy journeys for a tenant is not prompted for an identity provider selection. The `Application` value indicates that the behavior is applied to all policies for the application making the request. For example, a user navigating through two policy journeys for an application is not prompted for an identity provider selection. The `Policy` value indicates that the behavior only applies to a policy. For example, a user navigating through two policy journeys for a trust framework is prompted for an identity provider selection when switching between policies. |
-| KeepAliveInDays | No | Controls how long the user remains signed in. Setting the value to 0 turns off KMSI functionality. For more information, see [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi). |
+| KeepAliveInDays | No | Controls how long the user remains signed in. Setting the value to 0 turns off KMSI functionality. The default is `0` (disabled). The minimum is `1` day. The maximum is `90` days. For more information, see [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi). |
|EnforceIdTokenHintOnLogout| No| Force to pass a previously issued ID token to the logout endpoint as a hint about the end user's current authenticated session with the client. Possible values: `false` (default), or `true`. For more information, see [Web sign-in with OpenID Connect](openid-connect.md). |
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/session-behavior.md
Previously updated : 06/07/2021 Last updated : 06/27/2021
To add the KMSI checkbox to the sign-up and sign-in page, set the `setting.enabl
### Configure a relying party file
-Update the relying party (RP) file that initiates the user journey that you created. The keepAliveInDays parameter allows you to configure how the long the keep me signed in (KMSI) session cookie should persist. For example, if you set the value to 30, then KMSI session cookie will persist for 30 days. The range for the value is from 1 to 90 days.
+Update the relying party (RP) file that initiates the user journey that you created. The keepAliveInDays parameter allows you to configure how long the keep me signed in (KMSI) session cookie should persist. For example, if you set the value to 30, the KMSI session cookie will persist for 30 days. The range for the value is from 1 to 90 days. Setting the value to 0 turns off KMSI functionality.
1. Open your custom policy file. For example, *SignUpOrSignin.xml*.
1. If it doesn't already exist, add a `<UserJourneyBehaviors>` child node to the `<RelyingParty>` node. It must be located immediately after `<DefaultUserJourney ReferenceId="User journey Id" />`, for example: `<DefaultUserJourney ReferenceId="SignUpOrSignIn" />`.
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/userjourneys.md
Previously updated : 06/16/2021 Last updated : 06/27/2021
The **OrchestrationStep** element can contain the following elements:
### Preconditions
+Orchestration steps can be conditionally executed based on preconditions defined in the orchestration step. The `Preconditions` element contains a list of preconditions to evaluate. When the precondition evaluation is satisfied, the associated orchestration step skips to the next orchestration step.
+
+Each precondition evaluates a single claim. There are two types of preconditions:
+ 
+- **Claims exist** - Specifies that the actions should be performed if the specified claims exist in the user's current claim bag.
+- **Claim equals** - Specifies that the actions should be performed if the specified claim exists, and its value is equal to the specified value. The check performs a case-sensitive ordinal comparison. When checking a Boolean claim type, use `True` or `False`.
+
+Azure AD B2C evaluates the preconditions in list order. The order-based preconditions allow you to set the order in which the preconditions are applied. The first precondition that is satisfied overrides all the subsequent preconditions. The orchestration step is executed only if none of the preconditions are satisfied.
+ The **Preconditions** element contains the following element:

| Element | Occurrences | Description |
| - | -- | -- |
-| Precondition | 1:n | Depending on the technical profile being used, either redirects the client according to the claims provider selection or makes a server call to exchange claims. |
-
+| Precondition | 1:n | A precondition to evaluate. |
#### Precondition
-Orchestration steps can be conditionally executed based on preconditions defined in the orchestration step. There are two types of preconditions:
- 
-- **Claims exist** - Specifies that the actions should be performed if the specified claims exist in the user's current claim bag.
-- **Claim equals** - Specifies that the actions should be performed if the specified claim exists, and its value is equal to the specified value. The check performs a case-sensitive ordinal comparison. When checking Boolean claim type, use `True`, or `False`.
-
The **Precondition** element contains the following attributes:

| Attribute | Required | Description |
| --------- | -------- | ----------- |
| `Type` | Yes | The type of check or query to perform for this precondition. The value can be **ClaimsExist**, which specifies that the actions should be performed if the specified claims exist in the user's current claim set, or **ClaimEquals**, which specifies that the actions should be performed if the specified claim exists and its value is equal to the specified value. |
-| `ExecuteActionsIf` | Yes | Use a `true` or `false` test to decide if the actions in the precondition should be performed. |
+| `ExecuteActionsIf` | Yes | Decides how the precondition is considered satisfied. Possible values: `true` (default), or `false`. If the value is set to `true`, it's considered satisfied when the claim matches the precondition. If the value is set to `false`, it's considered satisfied when the claim doesn't match the precondition. |
The **Precondition** element contains the following elements:

| Element | Occurrences | Description |
| - | -- | -- |
| Value | 1:2 | The identifier of a claim type. The claim is already defined in the claims schema section in the policy file or parent policy file. When the precondition is of type `ClaimEquals`, a second `Value` element contains the value to be checked. |
-| Action | 1:1 | The action that should be performed if the precondition check within an orchestration step is true. If the value of the `Action` is set to `SkipThisOrchestrationStep`, the associated `OrchestrationStep` should not be executed. |
+| Action | 1:1 | The action that should be performed if the precondition evaluation is satisfied. Possible value: `SkipThisOrchestrationStep`. The associated orchestration step skips to the next one. |
#### Preconditions examples
active-directory Identity Platform Integration Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/identity-platform-integration-checklist.md
Use the following checklist to ensure that your application is effectively integ
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) For mobile apps, configure each platform using the application registration experience. In order for your application to take advantage of the Microsoft Authenticator or Microsoft Company Portal for single sign-in, your app needs a “broker redirect URI” configured. This allows Microsoft to return control to your application after authentication. When configuring each platform, the app registration experience will guide you through the process. Use the quickstart to download a working example. On iOS, use brokers and system webview whenever possible.
-![checkbox](./medi#token-cache-for-a-web-app-confidential-client-application).
+![checkbox](./medi).
![checkbox](./media/active-directory-integration-checklist/checkbox-two.svg) If the data your app requires is available through [Microsoft Graph](https://developer.microsoft.com/graph), request permissions for this data using the Microsoft Graph endpoint rather than the individual API.
active-directory Msal Net Migration Confidential Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-migration-confidential-client.md
This troubleshooting guide makes two assumptions:
### AADSTS700027 exception
-If you get an exception with the following message: `AADSTS700027: Client assertion contains an invalid signature. [Reason - The key was not found.]`:
+If you get an exception with the following message:
+
+> `AADSTS700027: Client assertion contains an invalid signature. [Reason - The key was not found.]`
+
+You can troubleshoot the exception using the steps below:
- Confirm that you're using the latest version of MSAL.NET.
- Confirm that the authority host set when building the confidential client application and the authority host you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?

### AADSTS700030 exception
-If you get an exception with the following message: `AADSTS90002: Tenant 'cf61953b-e41a-46b3-b500-663d279ea744' not found. This may happen if there are no active subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription administrator.`:
+If you get an exception with the following message:
+
+> `AADSTS90002: Tenant 'cf61953b-e41a-46b3-b500-663d279ea744' not found. This may happen if there are no active`
+> `subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription`
+> `administrator.`
+
+You can troubleshoot the exception using the steps below:
- Confirm that you're using the latest version of MSAL.NET.
- Confirm that the authority host set when building the confidential client application and the authority host you used with ADAL are similar. In particular, is it the same [cloud](msal-national-cloud.md) (Azure Government, Azure China 21Vianet, or Azure Germany)?
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
Title: Token cache serialization (MSAL.NET) | Azure
-description: Learn about serialization and customer serialization of the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
+description: Learn about serialization and custom serialization of the token cache using the Microsoft Authentication Library for .NET (MSAL.NET).
Previously updated : 09/16/2019 Last updated : 06/25/2021
#Customer intent: As an application developer, I want to learn about token cache serialization so I can have fine-grained control of the proxy.

# Token cache serialization in MSAL.NET
-After a [token is acquired](msal-acquire-cache-tokens.md), it is cached by the Microsoft Authentication Library (MSAL). Application code should try to get a token from the cache before acquiring a token by another method. This article discusses default and custom serialization of the token cache in MSAL.NET.
-This article is for MSAL.NET 3.x. If you're interested in MSAL.NET 2.x, see [Token cache serialization in MSAL.NET 2.x](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Token-cache-serialization-2x).
+After the Microsoft Authentication Library (MSAL) [acquires a token](msal-acquire-cache-tokens.md), it caches that token. Public client applications (desktop and mobile apps) should try to get a token from the cache before acquiring a token by another method. Acquisition methods on confidential client applications manage the cache themselves. This article discusses default and custom serialization of the token cache in MSAL.NET.
-## Default serialization for mobile platforms
+## Quick summary
-In MSAL.NET, an in-memory token cache is provided by default. Serialization is provided by default for platforms where secure storage is available for a user as part of the platform. This is the case for Universal Windows Platform (UWP), Xamarin.iOS, and Xamarin.Android.
+The recommendation is:
+- In web apps and web APIs, use the [token cache serializers from Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web/wiki/token-cache-serialization). They even provide support for distributed databases or cache systems to store tokens.
+ - In ASP.NET Core [web apps](scenario-web-app-call-api-overview.md) and [web APIs](scenario-web-api-call-api-overview.md), use Microsoft.Identity.Web as a higher-level API in ASP.NET Core.
+ - In ASP.NET classic, .NET Core, and .NET Framework, use MSAL.NET directly with the [token cache serialization adapters for MSAL](https://aka.ms/ms-id-web/token-cache-serialization-msal) provided in Microsoft.Identity.Web.
+- In desktop applications (which can use the file system to store tokens), use [Microsoft.Identity.Client.Extensions.Msal](https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/wiki/Cross-platform-Token-Cache) with MSAL.NET.
+- In mobile applications (Xamarin.iOS, Xamarin.Android, Universal Windows Platform), don't do anything, because MSAL.NET handles the cache for you: these platforms have secure storage.
-> [!Note]
-> When you migrate a Xamarin.Android project from MSAL.NET 1.x to MSAL.NET 3.x, you might want to add `android:allowBackup="false"` to your project to avoid old cached tokens from coming back when Visual Studio deployments trigger a restore of local storage. See [Issue #659](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/659#issuecomment-436181938).
+## [ASP.NET Core web apps and web APIs](#tab/aspnetcore)
+
+The [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) library provides a NuGet package [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) containing token cache serialization:
+
+| Extension Method | Description |
+| - | |
+| `AddInMemoryTokenCaches` | In-memory token cache serialization. This implementation is great in samples. It's also good in production applications provided you don't mind if the token cache is lost when the web app is restarted. `AddInMemoryTokenCaches` takes an optional parameter of type `MsalMemoryTokenCacheOptions` that enables you to specify the duration after which the cache entry will expire unless it's used.
+| `AddSessionTokenCaches` | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims as the cookie would become too large.
+| `AddDistributedTokenCaches` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation, therefore enabling you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see [Distributed memory cache](/aspnet/core/performance/caching/distributed).
++
+Here's an example of code using the in-memory cache in the [ConfigureServices](/dotnet/api/microsoft.aspnetcore.hosting.startupbase.configureservices) method of the [Startup](/aspnet/core/fundamentals/startup) class in an ASP.NET Core application:
+
+```CSharp
+using Microsoft.Identity.Web;
+```
+
+```CSharp
+using Microsoft.Identity.Web;
+
+public class Startup
+{
+ const string scopesToRequest = "user.read";
+
+ public void ConfigureServices(IServiceCollection services)
+ {
+ // code before
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(Configuration)
+ .EnableTokenAcquisitionToCallDownstreamApi(new string[] { scopesToRequest })
+ .AddInMemoryTokenCaches();
+ // code after
+ }
+ // code after
+}
+```
+
+From the point of view of the cache, the code would be similar in ASP.NET Core web APIs.
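For instance, here's a minimal sketch for a protected ASP.NET Core web API (assumed here: the JWT bearer scheme, the default configuration section, and an in-memory cache):

```CSharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;

public void ConfigureServices(IServiceCollection services)
{
    // Protect the web API and let it acquire tokens to call downstream APIs,
    // caching those tokens in memory.
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddMicrosoftIdentityWebApi(Configuration)
        .EnableTokenAcquisitionToCallDownstreamApi()
        .AddInMemoryTokenCaches();
}
```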
++
+Here are examples of possible distributed caches:
+
+```C#
+// or use a distributed Token Cache by adding
+ services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+ .AddMicrosoftIdentityWebApp(Configuration)
+ .EnableTokenAcquisitionToCallDownstreamApi(new string[] { scopesToRequest })
+ .AddDistributedTokenCaches();
+
+// and then choose your implementation
+
+// For instance the distributed in memory cache (not cleared when you stop the app)
+services.AddDistributedMemoryCache();
+
+// Or a Redis cache
+services.AddStackExchangeRedisCache(options =>
+{
+ options.Configuration = "localhost";
+ options.InstanceName = "SampleInstance";
+});
+
+// Or even a SQL Server token cache
+services.AddDistributedSqlServerCache(options =>
+{
+ options.ConnectionString = _config["DistCache_ConnectionString"];
+ options.SchemaName = "dbo";
+ options.TableName = "TestCache";
+});
+```
+
+Their usage is featured in the [ASP.NET Core web app tutorial](/aspnet/core/tutorials/first-mvc-app/) in the phase [2-2 Token Cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache).
+
+## [Non ASP.NET Core web apps and web APIs](#tab/aspnet)
+
+Even when you use MSAL.NET, you can benefit from the token cache serializers provided in Microsoft.Identity.Web.
+
+### Referencing the NuGet package
+
+Add the [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) NuGet package to your project in addition to MSAL.NET.
+
+### Configuring the token cache
+
+The following code shows how to add an in-memory, well-partitioned token cache to your app.
+
+```CSharp
+using Microsoft.Identity.Web;
+using Microsoft.Identity.Client;
+```
+
+```CSharp
+
+ private static IConfidentialClientApplication app;
+
+ public static async Task<IConfidentialClientApplication> BuildConfidentialClientApplication()
+ {
+ if (app == null)
+ {
+ // Create the confidential client application
+ app = ConfidentialClientApplicationBuilder.Create(clientId)
+ // Alternatively to the certificate you can use .WithClientSecret(clientSecret)
+ .WithCertificate(certDescription.Certificate)
+ .WithLegacyCacheCompatibility(false)
+ .WithTenantId(tenant)
+ .Build();
+
+ // Add an in-memory token cache. Other options available: see below
+ app.UseInMemoryTokenCaches();
+ }
+ return app;
+ }
+```
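As a follow-on usage sketch (not from the article; the Graph `.default` scope is just a placeholder), the application built above can then acquire tokens, which flow through the configured cache:

```CSharp
// Assumes the usings shown above (Microsoft.Identity.Web, Microsoft.Identity.Client) plus System.
IConfidentialClientApplication cca = await BuildConfidentialClientApplication();

// App-only token request; results are served from the token cache while still valid.
AuthenticationResult result = await cca
    .AcquireTokenForClient(new[] { "https://graph.microsoft.com/.default" })
    .ExecuteAsync();

Console.WriteLine($"Token expires on {result.ExpiresOn}");
```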
+
+### Available serialization technologies
+
+#### In memory token cache
+
+```CSharp
+ // Add an in-memory token cache
+ app.UseInMemoryTokenCaches();
+```
+
+#### Distributed in memory token cache
+
+```CSharp
+ // In memory distributed token cache
+ app.UseDistributedTokenCaches(services =>
+ {
+ // In net462/net472, requires a reference to Microsoft.Extensions.Caching.Memory
+ services.AddDistributedMemoryCache();
+ });
+```
+
+#### SQL server
+
+```CSharp
+ // SQL Server token cache
+ app.UseDistributedTokenCaches(services =>
+ {
+ services.AddDistributedSqlServerCache(options =>
+ {
+ // In net462/net472, requires a reference to Microsoft.Extensions.Caching.Memory
+
+ // Requires a reference to Microsoft.Extensions.Caching.SqlServer
+ options.ConnectionString = @"Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=TestCache;Integrated Security=True;Connect Timeout=30;Encrypt=False;TrustServerCertificate=False;ApplicationIntent=ReadWrite;MultiSubnetFailover=False";
+ options.SchemaName = "dbo";
+ options.TableName = "TestCache";
+
+ // You don't want the SQL token cache to be purged before the access token has expired. Usually
+ // access tokens expire after 1 hour (but this can be changed by token lifetime policies), whereas
+ // the default sliding expiration for the distributed SQL database is 20 mins.
+ // Use a value which is above 60 mins (or the lifetime of a token in case of longer lived tokens)
+ options.DefaultSlidingExpiration = TimeSpan.FromMinutes(90);
+ });
+ });
+```
-## Custom serialization for Windows desktop apps and web apps/web APIs
+#### Redis cache
+
+```CSharp
+ // Redis token cache
+ app.UseDistributedTokenCaches(services =>
+ {
+ // Requires a reference to Microsoft.Extensions.Caching.StackExchangeRedis
+ services.AddStackExchangeRedisCache(options =>
+ {
+ options.Configuration = "localhost";
+ options.InstanceName = "Redis";
+ });
+ });
+```
+
+#### Cosmos DB
+
+```CSharp
+ // Cosmos DB token cache
+ app.UseDistributedTokenCaches(services =>
+ {
+ // Requires a reference to Microsoft.Extensions.Caching.Cosmos (preview)
+ services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
+ {
+ cacheOptions.ContainerName = Configuration["CosmosCacheContainer"];
+ cacheOptions.DatabaseName = Configuration["CosmosCacheDatabase"];
+ cacheOptions.ClientBuilder = new CosmosClientBuilder(Configuration["CosmosConnectionString"]);
+ cacheOptions.CreateIfNotExists = true;
+ });
+ });
+```
+
+### Disabling legacy token cache
+MSAL has some internal code specifically to enable the ability to interact with the legacy ADAL cache. When MSAL and ADAL aren't used side by side (and therefore the legacy cache isn't used), the related legacy cache code is unnecessary. MSAL [4.25.0](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/releases/tag/4.25.0) adds the ability to disable the legacy ADAL cache code and improve cache usage performance. See pull request [#2309](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/pull/2309) for a performance comparison before and after disabling the legacy cache. Call `.WithLegacyCacheCompatibility(false)` on an application builder, as shown below.
+
+```csharp
+var app = ConfidentialClientApplicationBuilder
+ .Create(clientId)
+ .WithClientSecret(clientSecret)
+ .WithLegacyCacheCompatibility(false)
+ .Build();
+```
+
+### Samples
+
+- Using the token cache serializers in .NET Framework and .NET Core applications is showcased in the [ConfidentialClientTokenCache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache) sample.
+- The following sample is an ASP.NET web app that uses the same techniques: https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect (see [WebApp/Utils/MsalAppBuilder.cs](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs)).
+
+## [Desktop apps](#tab/desktop)
+
+In desktop applications, the recommendation is to use the cross-platform token cache.
+
+#### Cross platform token cache (MSAL only)
+
+MSAL.NET provides a cross-platform token cache in a separate library named Microsoft.Identity.Client.Extensions.Msal, whose source code is available at https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet.
+
+##### Referencing the NuGet package
+
+Add the [Microsoft.Identity.Client.Extensions.Msal](https://www.nuget.org/packages/Microsoft.Identity.Client.Extensions.Msal/) NuGet package to your project.
+
+##### Configuring the token cache
+
+See https://github.com/AzureAD/microsoft-authentication-extensions-for-dotnet/wiki/Cross-platform-Token-Cache for details. Here's an example of using the cross-platform token cache:
+
+```csharp
+ var storageProperties =
+ new StorageCreationPropertiesBuilder(Config.CacheFileName, Config.CacheDir)
+ .WithLinuxKeyring(
+ Config.LinuxKeyRingSchema,
+ Config.LinuxKeyRingCollection,
+ Config.LinuxKeyRingLabel,
+ Config.LinuxKeyRingAttr1,
+ Config.LinuxKeyRingAttr2)
+ .WithMacKeyChain(
+ Config.KeyChainServiceName,
+ Config.KeyChainAccountName)
+ .Build();
+
+ IPublicClientApplication pca = PublicClientApplicationBuilder.Create(clientId)
+ .WithAuthority(Config.Authority)
+ .WithRedirectUri("http://localhost") // make sure to register this redirect URI for the interactive login
+ .Build();
+
+
+// This hooks up the cross-platform cache into MSAL
+var cacheHelper = await MsalCacheHelper.CreateAsync(storageProperties);
+cacheHelper.RegisterCache(pca.UserTokenCache);
+
+```
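A possible follow-on usage sketch (not from the article): once the cache is registered, acquisitions persist across runs. `scopes` is an assumed variable, for example `new[] { "user.read" }`:

```csharp
// Assumes: using System.Linq; using Microsoft.Identity.Client;
var accounts = await pca.GetAccountsAsync();
AuthenticationResult result;
try
{
    // Served silently from the persisted cache when a usable token or refresh token exists.
    result = await pca.AcquireTokenSilent(scopes, accounts.FirstOrDefault())
                      .ExecuteAsync();
}
catch (MsalUiRequiredException)
{
    // Fall back to interactive sign-in when no cached token can be used.
    result = await pca.AcquireTokenInteractive(scopes).ExecuteAsync();
}
```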
+
+## [Mobile apps](#tab/mobile)
+
+In MSAL.NET, an in-memory token cache is provided by default. Serialization is provided by default for platforms where secure storage is available for a user as part of the platform: Universal Windows Platform (UWP), Xamarin.iOS, and Xamarin.Android.
+
+## [Write your own cache](#tab/custom)
+
+If you really want to write your own token cache serializer, MSAL.NET provides custom token cache serialization in .NET Framework and .NET Core subplatforms. Events are fired when the cache is accessed, and apps can choose whether to serialize or deserialize the cache. On confidential client applications that handle users (web apps that sign in users and call web APIs, and web APIs calling downstream web APIs), there can be many users and the users are processed in parallel. For security and performance reasons, our recommendation is to serialize one cache per user. Serialization events compute a cache key based on the identity of the processed user and serialize/deserialize a token cache for that user.
Remember, custom serialization isn't available on mobile platforms (UWP, Xamarin.iOS, and Xamarin.Android). MSAL already defines a secure and performant serialization mechanism for these platforms. .NET desktop and .NET Core applications, however, have varied architectures and MSAL can't implement a general-purpose serialization mechanism. For example, web sites may choose to store tokens in a Redis cache, or desktop apps store tokens in an encrypted file. So serialization isn't provided out-of-the-box. To have a persistent token cache application in .NET desktop or .NET Core, customize the serialization. The following classes and interfaces are used in token cache serialization:

-- `ITokenCache`, which defines events to subscribe to token cache serialization requests as well as methods to serialize or de-serialize the cache at various formats (ADAL v3.0, MSAL 2.x, and MSAL 3.x = ADAL v5.0).
+- `ITokenCache`, which defines events to subscribe to token cache serialization requests and methods to serialize or de-serialize the cache at various formats (ADAL v3.0, MSAL 2.x, and MSAL 3.x = ADAL v5.0).
- `TokenCacheCallback` is a callback passed to the events so that you can handle the serialization. They'll be called with arguments of type `TokenCacheNotificationArgs`.
- `TokenCacheNotificationArgs` only provides the `ClientId` of the application and a reference to the user for which the token is available.
The following classes and interfaces are used in token cache serialization:
The strategies are different depending on whether you're writing a token cache serialization for a [public client application](msal-client-applications.md) (desktop) or a [confidential client application](msal-client-applications.md) (web app / web API, daemon app).
-### Token cache for a public client
+### Custom Token cache for a web app or web API (confidential client application)
+
+In web apps or web APIs, the cache could use the session, a Redis cache, a SQL database, or a Cosmos DB database. Keep one token cache per account in web apps or web APIs:
+- For web apps, the token cache should be keyed by the account ID.
+- For web APIs, the token cache should be keyed by the hash of the token used to call the API.
+
+Examples of token cache serializers are provided in [Microsoft.Identity.Web/TokenCacheProviders](https://github.com/AzureAD/microsoft-identity-web/tree/master/src/Microsoft.Identity.Web/TokenCacheProviders).
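As an illustrative sketch only (not the Microsoft.Identity.Web implementation), per-user serialization against any `IDistributedCache` could hook the MSAL cache events and key entries on `SuggestedCacheKey`; `app` and `distributedCache` are assumed to exist:

```CSharp
// Requires Microsoft.Identity.Client and Microsoft.Extensions.Caching.Distributed.
// app is an IConfidentialClientApplication; distributedCache is an IDistributedCache you supply.
app.UserTokenCache.SetBeforeAccessAsync(async args =>
{
    // Load this user's cache blob before MSAL reads the cache.
    byte[] blob = await distributedCache.GetAsync(args.SuggestedCacheKey);
    if (blob != null)
    {
        args.TokenCache.DeserializeMsalV3(blob);
    }
});

app.UserTokenCache.SetAfterAccessAsync(async args =>
{
    // Persist only when MSAL changed the cache for this user.
    if (args.HasStateChanged)
    {
        await distributedCache.SetAsync(args.SuggestedCacheKey, args.TokenCache.SerializeMsalV3());
    }
});
```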
+
+### Custom token cache for a desktop or mobile app (public client application)
Since MSAL.NET v2.x you have several options for serializing the token cache of a public client. You can serialize the cache only to the MSAL.NET format (the unified format cache is common across MSAL and the platforms). You can also support the [legacy](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/Token-cache-serialization) token cache serialization of ADAL V3.
namespace CommonCacheMsalV3
} ```
-### Token cache for a web app (confidential client application)
-
-In web apps or web APIs, the cache could leverage the session, a Redis cache, or a database. You should keep one token cache per account in web apps or web APIs.
-
-For web apps, the token cache should be keyed by the account ID.
-
-For web APIs, the account should be keyed by the hash of the token used to call the API.
-
-MSAL.NET provides custom token cache serialization in .NET Framework and .NET Core subplatforms. Events are fired when the cache is accessed, apps can choose whether to serialize or deserialize the cache. On confidential client applications that handle users (web apps that sign in users and call web APIs, and web APIs calling downstream web APIs), there can be many users and the users are processed in parallel. For security and performance reasons, our recommendation is to serialize one cache per user. Serialization events compute a cache key based on the identity of the processed user and serialize/deserialize a token cache for that user.
-
-The [Microsoft.Identity.Web](https://github.com/AzureAD/microsoft-identity-web) library provides a preview NuGet package [Microsoft.Identity.Web](https://www.nuget.org/packages/Microsoft.Identity.Web) containing token cache serialization:
-
-| Extension Method | Microsoft.Identity.Web sub namespace | Description |
-| - | | |
-| `AddInMemoryTokenCaches` | `TokenCacheProviders.InMemory` | In memory token cache serialization. This implementation is great in samples. It's also good in production applications provided you don't mind if the token cache is lost when the web app is restarted. `AddInMemoryTokenCaches` takes an optional parameter of type `MsalMemoryTokenCacheOptions` that enables you to specify the duration after which the cache entry will expire unless it's used.
-| `AddSessionTokenCaches` | `TokenCacheProviders.Session` | The token cache is bound to the user session. This option isn't ideal if the ID token contains many claims as the cookie would become too large.
-| `AddDistributedTokenCaches` | `TokenCacheProviders.Distributed` | The token cache is an adapter against the ASP.NET Core `IDistributedCache` implementation, therefore enabling you to choose between a distributed memory cache, a Redis cache, a distributed NCache, or a SQL Server cache. For details about the `IDistributedCache` implementations, see https://docs.microsoft.com/aspnet/core/performance/caching/distributed#distributed-memory-cache.
-
-Here's an example of using the in-memory cache in the [ConfigureServices](/dotnet/api/microsoft.aspnetcore.hosting.startupbase.configureservices) method of the [Startup](/aspnet/core/fundamentals/startup) class in an ASP.NET Core application:
-
-```C#
-// or use a distributed Token Cache by adding
- services.AddSignIn(Configuration);
- services.AddWebAppCallsProtectedWebApi(Configuration, new string[] { scopesToRequest })
- .AddInMemoryTokenCaches();
-```
-
-Examples of possible distributed caches:
-
-```C#
-// or use a distributed Token Cache by adding
- services.AddSignIn(Configuration);
- services.AddWebAppCallsProtectedWebApi(Configuration, new string[] { scopesToRequest })
- .AddDistributedTokenCaches();
-
-// and then choose your implementation
-
-// For instance the distributed in memory cache (not cleared when you stop the app)
-services.AddDistributedMemoryCache()
-
-// Or a Redis cache
-services.AddStackExchangeRedisCache(options =>
-{
- options.Configuration = "localhost";
- options.InstanceName = "SampleInstance";
-});
-
-// Or even a SQL Server token cache
-services.AddDistributedSqlServerCache(options =>
-{
- options.ConnectionString = _config["DistCache_ConnectionString"];
- options.SchemaName = "dbo";
- options.TableName = "TestCache";
-});
-```
-
-Their usage is featured in the [ASP.NET Core web app tutorial](/aspnet/core/tutorials/first-mvc-app/) in the phase [2-2 Token Cache](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/2-WebApp-graph-user/2-2-TokenCache).
+ ## Next steps
The following samples illustrate token cache serialization.
| Sample | Platform | Description |
| -- | -- | -- |
-|[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application calling the Microsoft Graph API. ![Diagram shows a topology with Desktop App W P F TodoListClient flowing to Azure A D by acquiring a token interactively and to Microsoft Graph.](media/msal-net-token-cache-serialization/topology.png)|
+|[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application calling the Microsoft Graph API. ![Diagram shows a topology with Desktop App WPF TodoListClient flowing to Azure AD by acquiring a token interactively and to Microsoft Graph.](media/msal-net-token-cache-serialization/topology.png)|
|[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (Console) | Set of Visual Studio solutions illustrating the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token Cache Migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md)|
active-directory Scenario Web App Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-web-app-call-api-app-configuration.md
The use of client assertions is an advanced scenario, detailed in [Client assert
# [ASP.NET Core](#tab/aspnetcore)
-The ASP.NET core tutorial uses dependency injection to let you decide the token cache implementation in the Startup.cs file for your application. Microsoft.Identity.Web comes with pre-built token-cache serializers described in [Token cache serialization](msal-net-token-cache-serialization.md#token-cache-for-a-web-app-confidential-client-application). An interesting possibility is to choose ASP.NET Core [distributed memory caches](/aspnet/core/performance/caching/distributed#distributed-memory-cache):
+The ASP.NET core tutorial uses dependency injection to let you decide the token cache implementation in the Startup.cs file for your application. Microsoft.Identity.Web comes with pre-built token-cache serializers described in [Token cache serialization](msal-net-token-cache-serialization.md). An interesting possibility is to choose ASP.NET Core [distributed memory caches](/aspnet/core/performance/caching/distributed#distributed-memory-cache):
```csharp // Use a distributed token cache by adding:
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/api-server-authorized-ip-ranges.md
To find IP ranges that have been authorized, use [az aks show][az-aks-show] and
az aks show \
  --resource-group myResourceGroup \
  --name myAKSCluster \
- --query apiServerAccessProfile.authorizedIpRanges'
+ --query apiServerAccessProfile.authorizedIpRanges
``` ## Update, disable, and find authorized IP ranges using Azure portal
For more information, see [Security concepts for applications and clusters in AK
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-cluster-security]: operator-best-practices-cluster-security.md [route-tables]: ../virtual-network/manage-route-table.md
-[standard-sku-lb]: load-balancer-standard.md
+[standard-sku-lb]: load-balancer-standard.md
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-api-management.md
The traffic flow goes through the API Management instance, which abstracts the b
API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.

## Internal deployment
In an internal deployment, APIs get exposed to the same API Management instance.
* External traffic enters Azure through Application Gateway, which uses the external protection layer for API Management.
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-identity.md
You can view the privileges granted to the Azure VMware Solution CloudAdmin role
1. From the list of roles, select **CloudAdmin** and then select **Privileges**.
- :::image type="content" source="media/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
+ :::image type="content" source="media/concepts/role-based-access-control-cloudadmin-privileges.png" alt-text="How to view the CloudAdmin role privileges in vSphere Client":::
The CloudAdmin role in Azure VMware Solution has the following privileges on vCenter. For more information, see the [VMware product documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-ED56F3C4-77D0-49E3-88B6-B99B8B437B62.html).
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
As with other resources, private clouds are installed and managed from within an
The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters. ## Hosts
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-storage.md
Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datas
That default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or you apply a new policy, the cluster continues to grow with this configuration. In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an architecture perspective. |Provisioning type |Description |
That default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provi
>[!TIP]
>If you're unsure if the cluster will grow to four or more hosts, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2.
>
->:::image type="content" source="media/vsphere-vm-storage-policies-2.png" alt-text="Screenshot ":::
+>:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot ":::
## Data-at-rest encryption
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/override-sku.md
Title: Override SKU information over CSCFG/CSDEF for Azure Cloud Services (extended support)
-description: Override SKU information over CSCFG/CSDEF for Azure Cloud Services (extended support)
+description: This article describes how to override SKU information in .cscfg and .csdef files for Azure Cloud Services (extended support).
Last updated 04/05/2021
-# Override SKU information over CSCFG/CSDEF in Cloud Services (extended support)
+# Override SKU settings in .cscfg and .csdef files for Cloud Services (extended support)
-This feature will allow the user to update the role size and instance count in their Cloud Service using the **allowModelOverride** property without having to update the service configuration and service definition files, thereby allowing the cloud service to scale up/down/in/out without doing a repackage and redeploy.
+This article describes how to update the role size and instance count in Azure Cloud Services by using the **allowModelOverride** property. When you use this property, you don't need to update the service configuration (.cscfg) and service definition (.csdef) files. So you can scale the cloud service up, down, in, or out without repackaging and redeploying it.
-## Set allowModelOverride property
-The allowModelOverride property can be set in the following ways:
-* When allowModelOverride = true , the API call will update the role size and instance count for the cloud service without validating the values with the csdef and cscfg files.
-> [!Note]
-> The cscfg will be updated to reflect the role instance count but the csdef (within the cspkg) will retain the old values
-* When allowModelOverride = false , the API call would throw an error when the role size and instance count values do not match with the csdef and cscfg files respectively
+## Set the allowModelOverride property
+You can set the **allowModelOverride** property to `true` or `false`.
+* When **allowModelOverride** is set to `true`, an API call will update the role size and instance count for the cloud service without validating the values with the .csdef and .cscfg files.
+ > [!Note]
+ > The .cscfg file will be updated to reflect the role instance count. The .csdef file (embedded within the .cspkg) will retain the old values.
-Default value is set to be false. If the property is reset to false back from true, the csdef and cscfg files would again be checked for validation.
+* When **allowModelOverride** is set to `false`, an API call throws an error if the role size and instance count values don't match the values in the .csdef and .cscfg files, respectively.
-Please go through the below samples to apply the property in PowerShell, template and SDK
+The default value is `false`. If the property is reset to `false` after being set to `true`, the .csdef and .cscfg files will again be validated.
-### Azure Resource Manager template
-Setting the property “allowModelOverride” = true here will update the cloud service with the role properties defined in the roleProfile section
+The following samples show how to set the **allowModelOverride** property by using an Azure Resource Manager (ARM) template, PowerShell, or the SDK.
+
+### ARM template
+Setting the **allowModelOverride** property to `true` here will update the cloud service with the role properties defined in the `roleProfile` section:
```json "properties": { "packageUrl": "[parameters('packageSasUri')]",
Setting the property ΓÇ£allowModelOverrideΓÇ¥ = true here will update the cloud
``` ### PowerShell
-Setting the switch “AllowModelOverride” on the new New-AzCloudService cmdlet, will update the cloud service with the SKU properties defined in the RoleProfile
+Setting the `AllowModelOverride` switch on the new `New-AzCloudService` cmdlet will update the cloud service with the SKU properties defined in the role profile:
```powershell
New-AzCloudService `
--Name “ContosoCS” `
--ResourceGroupName “ContosOrg” `
--Location “East US” `
+-Name "ContosoCS" `
+-ResourceGroupName "ContosOrg" `
+-Location "East US" `
-AllowModelOverride ` -PackageUrl $cspkgUrl ` -ConfigurationUrl $cscfgUrl `
New-AzCloudService `
-Tag $tag
```

### SDK
-Setting the variable AllowModelOverride= true will update the cloud service with the SKU properties defined in the RoleProfile
+Setting the `AllowModelOverride` variable to `true` will update the cloud service with the SKU properties defined in the role profile:
```csharp CloudService cloudService = new CloudService
CloudService cloudService = new CloudService
}, Location = m_location };
-CloudService createOrUpdateResponse = m_CrpClient.CloudServices.CreateOrUpdate(“ContosOrg”, “ContosoCS”, cloudService);
+CloudService createOrUpdateResponse = m_CrpClient.CloudServices.CreateOrUpdate("ContosOrg", "ContosoCS", cloudService);
``` ### Azure portal
-The portal does not allow the above property to override the role size and instance count in the csdef and cscfg.
+The Azure portal doesn't allow you to use the **allowModelOverride** property to override the role size and instance count in the .csdef and .cscfg files.
## Next steps

-- Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- View the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).
+- View [frequently asked questions](faq.md) for Cloud Services (extended support).
cloud-services-extended-support Swap Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/swap-cloud-service.md
You must make a cloud service swappable with another cloud service when you depl
You can swap the deployments by using an Azure Resource Manager template (ARM template), the Azure portal, or the REST API.
+> [!Note]
+> Upon deployment of the second cloud service, both cloud services have their SwappableCloudService property set to point to each other. Any subsequent update to these cloud services must specify this property; otherwise, an error will be returned indicating that the SwappableCloudService property cannot be deleted or updated.
+>
+> Once set, the SwappableCloudService property is treated as read-only. It cannot be deleted or changed to another value. Deleting one of the cloud services (of the swappable pair) will result in the SwappableCloudService property of the remaining cloud service being cleared.
++ ## ARM template If you use an ARM template deployment method, to make the cloud services swappable, set the `SwappableCloudService` property in `networkProfile` in the `cloudServices` object to the ID of the paired cloud service:
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/role-based-access-control.md
Azure RBAC can be assigned to a Custom Vision resource. To grant access to an Az
> [!NOTE]
> You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item (for example, selecting **Resource groups** and then clicking through to your wanted resource group).

1. Select **Access control (IAM)** on the left navigation pane.
-1. Select the **Role assignments** tab to view the role assignments for this scope.
1. Select **Add** -> **Add role assignment**.
-1. In the **Role** drop-down list, select a role you want to add.
-1. In the **Select** list, select a user, group, service principal, or managed identity. If you don't see the security principal in the list, you can type the Select box to search the directory for display names, email addresses, and object identifiers.
-1. Select **Save** to assign the role.
+1. On the **Role** tab on the next screen, select a role you want to add.
+1. On the **Members** tab, select a user, group, service principal, or managed identity.
+1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-Within a few minutes, the target will be assigned the selected role at the selected scope.
+Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
## Custom Vision role types
cognitive-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Custom-Vision-Service/storage-integration.md
Previously updated : 09/11/2020 Last updated : 06/25/2021
This guide shows you how to use these REST APIs with cURL. You can also use an H
Go to your Custom Vision training resource on the Azure portal, select the **Identity** page, and enable system assigned managed identity.
-Next, go to your storage resource in the Azure portal. Go to the **Access control (IAM)** page and add a role assignment for each integration feature:
-* Select your Custom Vision training resource and assign the **Storage Blob Data Contributor** role if you plan to use the model backup feature.
-* Then select your Custom Vision training resource and assign the **Storage Queue Data Contributor** if you plan to use the notification queue feature.
+Next, go to your storage resource in the Azure portal. Go to the **Access control (IAM)** page and select **Add role assignment (Preview)**. Then add a role assignment for either integration feature, or both:
+* If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
+* If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
-> [!div class="mx-imgBorder"]
-> ![Storage account add role assignment page](./media/storage-integration/storage-access.png)
+For help with role assignments, see [Assign Azure roles using the Azure portal](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-portal).
### Get integration URLs
container-registry Buffer Gate Public Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/buffer-gate-public-content.md
description: Practices and workflows in Azure Container Registry to manage depen
Previously updated : 11/20/2020 Last updated : 06/17/2021 # Manage public content with Azure Container Registry
az acr import \
Depending on your organization's needs, you can import to a dedicated registry or a repository in a shared registry.
-## Automate application image updates
+## Update image references
+
+Developers of application images should ensure that their code references local content under their control.
-Developers of application images should ensure that their code references local content under their control. For example, a `Docker FROM` statement in a Dockerfile should reference an image in a private base image registry instead of a public registry.
+* Update image references to use the private registry. For example, update a `FROM baseimage:v1` statement in a Dockerfile to `FROM myregistry.azurecr.io/mybaseimage:v1`
+* Configure credentials or an authentication mechanism to use the private registry. The exact mechanism depends on the tools you use to access the registry and how you manage user access.
+ * If you use a Kubernetes cluster or Azure Kubernetes Service to access the registry, see the [authentication scenarios](authenticate-kubernetes-options.md).
+ * Learn more about [options to authenticate](container-registry-authentication.md) with an Azure container registry.
+
+## Automate application image updates
Expanding on image import, set up an [Azure Container Registry task](container-registry-tasks-overview.md) to automate application image builds when base images are updated. An automated build task can track both [base image updates](container-registry-tasks-base-images.md) and [source code updates](container-registry-tasks-overview.md#trigger-task-on-source-code-update).
For a detailed example, see [How to consume and maintain public content with Azu
> A single preconfigured task can automatically rebuild every application image that references a dependent base image. ## Next steps
-
* Learn more about [ACR Tasks](container-registry-tasks-overview.md) to build, run, push, and patch container images in Azure.
* See [How to consume and maintain public content with Azure Container Registry Tasks](tasks-consume-public-content.md) for an automated gating workflow to update base images to your environment.
* See the [ACR Tasks tutorials](container-registry-tutorial-quick-task.md) for more examples to automate image builds and updates.
container-registry Container Registry Content Trust https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-content-trust.md
Title: Manage signed images description: Learn how to enable content trust for your Azure container registry, and push and pull signed images. Content trust implements Docker content trust and is a feature of the Premium service tier.
Previously updated : 09/18/2020 Last updated : 06/25/2021

# Content trust in Azure Container Registry
Details for granting the `AcrImageSigner` role in the Azure portal and the Azure
### Azure portal
-Navigate to your registry in the Azure portal, then select **Access control (IAM)** > **Add role assignment**. Under **Add role assignment**, select `AcrImageSigner` under **Role**, then **Select** one or more users or service principals, then **Save**.
+1. Select **Access control (IAM)**.
-In this example, two entities have been assigned the `AcrImageSigner` role: a service principal named "service-principal", and a user named "Azure User."
+1. Select **Add** > **Add role assignment** to open the Add role assignment page.
-![Grant ACR image signing permissions in the Azure portal][content-trust-02-portal]
+1. Assign the following role. In this example, the role is assigned to an individual user. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+
+ | Setting | Value |
+ | --- | --- |
+ | Role | AcrImageSigner |
+ | Assign access to | User |
+ | Members | Alain |
+
+ ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
### Azure CLI
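The CLI steps aren't shown in this excerpt, but a comparable role assignment from the Azure CLI might look like the following sketch; the registry name and user are placeholders:

```azurecli
# Assign the AcrImageSigner role at the scope of the registry
az role assignment create \
  --role AcrImageSigner \
  --assignee user@contoso.com \
  --scope $(az acr show --name myregistry --query id --output tsv)
```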
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
Title: Encrypt registry with a customer-managed key description: Learn about encryption-at-rest of your Azure container registry, and how to encrypt your Premium registry with a customer-managed key stored in Azure Key Vault- Previously updated : 05/27/2021-+ Last updated : 06/25/2021+ # Encrypt registry using a customer-managed key
keyvaultID=$(az keyvault show --resource-group <resource-group-name> --name <key
### Enable key vault access
-Configure a policy for the key vault so that the identity can access it. In the following [az keyvault set-policy][az-keyvault-set-policy] command, you pass the principal ID of the managed identity that you created, stored previously in an environment variable. Set key permissions to **get**, **unwrapKey**, and **wrapKey**.
+#### Enable key vault access policy
+
+One option is to configure a policy for the key vault so that the identity can access it. In the following [az keyvault set-policy][az-keyvault-set-policy] command, you pass the principal ID of the managed identity that you created, stored previously in an environment variable. Set key permissions to **get**, **unwrapKey**, and **wrapKey**.
```azurecli az keyvault set-policy \
az keyvault set-policy \
--name <key-vault-name> \ --object-id $identityPrincipalID \ --key-permissions get unwrapKey wrapKey+ ```
+#### Assign RBAC role
Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity using the [az role assignment create](/cli/azure/role/assignment#az_role_assignment_create) command:
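As a sketch, reusing the `$identityPrincipalID` and `$keyvaultID` variables from the earlier steps, that assignment might look like the following (the built-in role is named **Key Vault Crypto Service Encryption User**):

```azurecli
az role assignment create \
  --assignee $identityPrincipalID \
  --role "Key Vault Crypto Service Encryption User" \
  --scope $keyvaultID
```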
When creating a key vault for a customer-managed key, in the **Basics** tab, ena
### Enable key vault access
-Configure a policy for the key vault so that the identity can access it.
+#### Enable key vault access policy
+
+One option is to configure a policy for the key vault so that the identity can access it.
1. Navigate to your key vault.
1. Select **Settings** > **Access policies > +Add Access Policy**.
Configure a policy for the key vault so that the identity can access it.
:::image type="content" source="media/container-registry-customer-managed-keys/add-key-vault-access-policy.png" alt-text="Create key vault access policy":::
-Alternatively, use [Azure RBAC for Key Vault](../key-vault/general/rbac-guide.md) to assign permissions to the identity to access the key vault. For example, assign the Key Vault Crypto Service Encryption role to the identity.
+#### Assign RBAC role
-1. Navigate to your key vault.
-1. Select **Access control (IAM)** > **+Add** > **Add role assignment**.
-1. In the **Add role assignment** window:
- 1. Select **Key Vault Crypto Service Encryption User** role.
- 1. Assign access to **User assigned managed identity**.
- 1. Select the resource name of your user-assigned managed identity, and select **Save**.
+Alternatively, assign the Key Vault Crypto Service Encryption User role to the user-assigned managed identity at the key vault scope.
+
+For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
### Create key (optional)
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-event-grid-quickstart.md
Now that the sample app is up and running and you've subscribed to your registry
Execute the following Azure CLI command to build a container image from the contents of a GitHub repository. By default, ACR Tasks automatically pushes a successfully built image to your registry, which generates the `ImagePushed` event. + ```azurecli-interactive az acr build --registry $ACR_NAME --image myimage:v1 -f Dockerfile https://github.com/Azure-Samples/acr-build-helloworld-node.git#main ```
container-registry Container Registry Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-faq.md
We currently do not support GitLab for Source triggers.
## Next steps
-* [Learn more](container-registry-intro.md) about Azure Container Registry.
+* [Learn more](container-registry-intro.md) about Azure Container Registry.
container-registry Container Registry Task Run Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-task-run-template.md
For this example, provide values for the following template parameters:
Deploy the template with the [az deployment group create][az-deployment-group-create] command. This example builds and pushes the *helloworld-node:testrun* image to a registry named *mycontainerregistry*. + ```azurecli az deployment group create \ --resource-group myResourceGroup \
container-registry Container Registry Tutorial Base Image Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tutorial-base-image-update.md
docker stop updatedapp
## Next steps
-In this tutorial, you learned how to use a task to automatically trigger container image builds when the image's base image has been updated. Now, move on to the next tutorial to learn how to trigger tasks on a defined schedule.
+In this tutorial, you learned how to use a task to automatically trigger container image builds when the image's base image has been updated.
+
+For a complete workflow to manage base images originating from a public source, see [How to consume and maintain public content with Azure Container Registry Tasks](tasks-consume-public-content.md).
+
+Now, move on to the next tutorial to learn how to trigger tasks on a defined schedule.
> [!div class="nextstepaction"] > [Run a task on a schedule](container-registry-tasks-scheduled.md)
container-registry Container Registry Tutorial Build Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tutorial-build-task.md
GIT_USER=<github-username> # Your GitHub user account name
GIT_PAT=<personal-access-token> # The PAT you generated in the previous section ```
-Now, create the task by executing the following [az acr task create][az-acr-task-create] command:
+Now, create the task by executing the following [az acr task create][az-acr-task-create] command.
+ ```azurecli az acr task create \
container-registry Container Registry Tutorial Quick Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-tutorial-quick-task.md
To make executing the sample commands easier, the tutorials in this series use s
ACR_NAME=<registry-name> ```
-With the container registry environment variable populated, you should now be able to copy and paste the remainder of the commands in the tutorial without editing any values. Execute the following commands to create a resource group and container registry:
+With the container registry environment variable populated, you should now be able to copy and paste the remainder of the commands in the tutorial without editing any values. Execute the following commands to create a resource group and container registry.
```azurecli RES_GROUP=$ACR_NAME # Resource Group name
az group create --resource-group $RES_GROUP --location eastus
az acr create --resource-group $RES_GROUP --name $ACR_NAME --sku Standard --location eastus ```
-Now that you have a registry, use ACR Tasks to build a container image from the sample code. Execute the [az acr build][az-acr-build] command to perform a *quick task*:
+Now that you have a registry, use ACR Tasks to build a container image from the sample code. Execute the [az acr build][az-acr-build] command to perform a *quick task*.
+ ```azurecli az acr build --registry $ACR_NAME --image helloacrtasks:v1 .
cosmos-db Table Storage Design Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/table-storage-design-guide.md
- Title: Design Azure Cosmos DB tables for scaling and performance
-description: "Azure Table storage design guide: Scalable and performant tables in Azure Cosmos DB and Azure Table storage"
-Previously updated : 06/19/2020
-# Azure Table storage table design guide: Scalable and performant tables
--
-To design scalable and performant tables, you must consider a variety of factors, including cost. If you've previously designed schemas for relational databases, these considerations will be familiar to you. But while there are some similarities between Azure Table storage and relational models, there are also many important differences. These differences typically lead to different designs that might look counter-intuitive or wrong to someone familiar with relational databases, but that do make sense if you're designing for a NoSQL key/value store, such as Table storage.
-
-Table storage is designed to support cloud-scale applications that can contain billions of entities ("rows" in relational database terminology) of data, or for datasets that must support high transaction volumes. You therefore need to think differently about how you store your data, and understand how Table storage works. A well-designed NoSQL data store can enable your solution to scale much further (and at a lower cost) than a solution that uses a relational database. This guide helps you with these topics.
-
-## About Azure Table storage
-This section highlights some of the key features of Table storage that are especially relevant to designing for performance and scalability. If you're new to Azure Storage and Table storage, see [Introduction to Microsoft Azure Storage](../storage/common/storage-introduction.md) and [Get started with Azure Table storage by using .NET](./tutorial-develop-table-dotnet.md) before reading the remainder of this article. Although the focus of this guide is on Table storage, it does include some discussion of Azure Queue storage and Azure Blob storage, and how you might use them along with Table storage in a solution.
-
-Table storage uses a tabular format to store data. In the standard terminology, each row of the table represents an entity, and the columns store the various properties of that entity. Every entity has a pair of keys to uniquely identify it, and a timestamp column that Table storage uses to track when the entity was last updated. The timestamp field is added automatically, and you can't manually overwrite the timestamp with an arbitrary value. Table storage uses this last-modified timestamp (LMT) to manage optimistic concurrency.
-
-> [!NOTE]
-> Table storage REST API operations also return an `ETag` value that it derives from the LMT. In this document, the terms ETag and LMT are used interchangeably, because they refer to the same underlying data.
->
->
-
-The following example shows a simple table design to store employee and department entities. Many of the examples shown later in this guide are based on this simple design.
-
-<table>
-<tr>
-<th>PartitionKey</th>
-<th>RowKey</th>
-<th>Timestamp</th>
-<th></th>
-</tr>
-<tr>
-<td>Marketing</td>
-<td>00001</td>
-<td>2014-08-22T00:50:32Z</td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Don</td>
-<td>Hall</td>
-<td>34</td>
-<td>donh@contoso.com</td>
-</tr>
-</table>
-</tr>
-<tr>
-<td>Marketing</td>
-<td>00002</td>
-<td>2014-08-22T00:50:34Z</td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Jun</td>
-<td>Cao</td>
-<td>47</td>
-<td>junc@contoso.com</td>
-</tr>
-</table>
-</tr>
-<tr>
-<td>Marketing</td>
-<td>Department</td>
-<td>2014-08-22T00:50:30Z</td>
-<td>
-<table>
-<tr>
-<th>DepartmentName</th>
-<th>EmployeeCount</th>
-</tr>
-<tr>
-<td>Marketing</td>
-<td>153</td>
-</tr>
-</table>
-</td>
-</tr>
-<tr>
-<td>Sales</td>
-<td>00010</td>
-<td>2014-08-22T00:50:44Z</td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Ken</td>
-<td>Kwok</td>
-<td>23</td>
-<td>kenk@contoso.com</td>
-</tr>
-</table>
-</td>
-</tr>
-</table>
--
-So far, this design looks similar to a table in a relational database. The key differences are the mandatory columns and the ability to store multiple entity types in the same table. In addition, each of the user-defined properties, such as **FirstName** or **Age**, has a data type, such as integer or string, just like a column in a relational database. Unlike in a relational database, however, the schema-less nature of Table storage means that a property need not have the same data type on each entity. To store complex data types in a single property, you must use a serialized format such as JSON or XML. For more information, see [Understanding Table storage data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model).
-
-Your choice of `PartitionKey` and `RowKey` is fundamental to good table design. Every entity stored in a table must have a unique combination of `PartitionKey` and `RowKey`. As with keys in a relational database table, the `PartitionKey` and `RowKey` values are indexed to create a clustered index that enables fast look-ups. Table storage, however, doesn't create any secondary indexes, so these are the only two indexed properties (some of the patterns described later show how you can work around this apparent limitation).
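For illustration, an employee entity from the earlier example might be written with the Azure CLI as follows; the storage account and table names are hypothetical, and property values passed this way are stored as strings unless a type is specified:

```azurecli
az storage entity insert \
  --account-name mystorageaccount \
  --table-name Employees \
  --entity PartitionKey=Marketing RowKey=00001 FirstName=Don LastName=Hall Age=34 Email=donh@contoso.com
```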
-
-A table is made up of one or more partitions, and many of the design decisions you make will be around choosing a suitable `PartitionKey` and `RowKey` to optimize your solution. A solution can consist of just a single table that contains all your entities organized into partitions, but typically a solution has multiple tables. Tables help you to logically organize your entities, and help you manage access to the data by using access control lists. You can drop an entire table by using a single storage operation.
-
-### Table partitions
-The account name, table name, and `PartitionKey` together identify the partition within the storage service where Table storage stores the entity. As well as being part of the addressing scheme for entities, partitions define a scope for transactions (see the section later in this article, [Entity group transactions](#entity-group-transactions)), and form the basis of how Table storage scales. For more information on table partitions, see [Performance and scalability checklist for Table storage](../storage/tables/storage-performance-checklist.md).
-
-In Table storage, an individual node services one or more complete partitions, and the service scales by dynamically load-balancing partitions across nodes. If a node is under load, Table storage can split the range of partitions serviced by that node onto different nodes. When traffic subsides, Table storage can merge the partition ranges from quiet nodes back onto a single node.
-
-For more information about the internal details of Table storage, and in particular how it manages partitions, see [Microsoft Azure Storage: A highly available
-cloud storage service with strong consistency](/archive/blogs/windowsazurestorage/sosp-paper-windows-azure-storage-a-highly-available-cloud-storage-service-with-strong-consistency).
-
-### Entity group transactions
-In Table storage, entity group transactions (EGTs) are the only built-in mechanism for performing atomic updates across multiple entities. EGTs are also referred to as *batch transactions*. EGTs can only operate on entities stored in the same partition (sharing the same partition key in a particular table), so anytime you need atomic transactional behavior across multiple entities, ensure that those entities are in the same partition. This is often a reason for keeping multiple entity types in the same table (and partition), and not using multiple tables for different entity types. A single EGT can operate on at most 100 entities. If you submit multiple concurrent EGTs for processing, it's important to ensure that those EGTs don't operate on entities that are common across EGTs. Otherwise, you risk delaying processing.
-
-EGTs also introduce a potential trade-off for you to evaluate in your design. Using more partitions increases the scalability of your application, because Azure has more opportunities for load-balancing requests across nodes. But this might limit the ability of your application to perform atomic transactions and maintain strong consistency for your data. Furthermore, there are specific scalability targets at the level of a partition that might limit the throughput of transactions you can expect for a single node.
-
-For more information about scalability targets for Azure storage accounts, see [Scalability targets for standard storage accounts](../storage/common/scalability-targets-standard-account.md). For more information about scalability targets for Table storage, see [Scalability and performance targets for Table storage](../storage/tables/scalability-targets.md). Later sections of this guide discuss various design strategies that help you manage trade-offs such as this one, and discuss how best to choose your partition key based on the specific requirements of your client application.
-
-### Capacity considerations
-The following table includes some of the key values to be aware of when you're designing a Table storage solution:
-
-| Total capacity of an Azure storage account | 500 TB |
-| --- | --- |
-| Number of tables in an Azure storage account |Limited only by the capacity of the storage account. |
-| Number of partitions in a table |Limited only by the capacity of the storage account. |
-| Number of entities in a partition |Limited only by the capacity of the storage account. |
-| Size of an individual entity |Up to 1 MB, with a maximum of 255 properties (including the `PartitionKey`, `RowKey`, and `Timestamp`). |
-| Size of the `PartitionKey` |A string up to 1 KB in size. |
-| Size of the `RowKey` |A string up to 1 KB in size. |
-| Size of an entity group transaction |A transaction can include at most 100 entities, and the payload must be less than 4 MB in size. An EGT can only update an entity once. |
-
-For more information, see [Understanding the Table service data model](/rest/api/storageservices/Understanding-the-Table-Service-Data-Model).
-
-### Cost considerations
-Table storage is relatively inexpensive, but you should include cost estimates for both capacity usage and the quantity of transactions as part of your evaluation of any solution that uses Table storage. In many scenarios, however, storing denormalized or duplicate data in order to improve the performance or scalability of your solution is a valid approach to take. For more information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
-
-## Guidelines for table design
-These lists summarize some of the key guidelines you should keep in mind when you're designing your tables. This guide addresses them all in more detail later on. These guidelines are different from the guidelines you'd typically follow for relational database design.
-
-Designing your Table storage to be *read* efficient:
-
-* **Design for querying in read-heavy applications.** When you're designing your tables, think about the queries (especially the latency-sensitive ones) you'll run before you think about how you'll update your entities. This typically results in an efficient and performant solution.
-* **Specify both `PartitionKey` and `RowKey` in your queries.** *Point queries* such as these are the most efficient Table storage queries.
-* **Consider storing duplicate copies of entities.** Table storage is cheap, so consider storing the same entity multiple times (with different keys), to enable more efficient queries.
-* **Consider denormalizing your data.** Table storage is cheap, so consider denormalizing your data. For example, store summary entities so that queries for aggregate data only need to access a single entity.
-* **Use compound key values.** The only keys you have are `PartitionKey` and `RowKey`. For example, use compound key values to enable alternate keyed access paths to entities.
-* **Use query projection.** You can reduce the amount of data that you transfer over the network by using queries that select just the fields you need.
-
-Designing your Table storage to be *write* efficient:
-
-* **Don't create hot partitions.** Choose keys that enable you to spread your requests across multiple partitions at any point of time.
-* **Avoid spikes in traffic.** Distribute the traffic over a reasonable period of time, and avoid spikes in traffic.
-* **Don't necessarily create a separate table for each type of entity.** When you require atomic transactions across entity types, you can store these multiple entity types in the same partition in the same table.
-* **Consider the maximum throughput you must achieve.** You must be aware of the scalability targets for Table storage, and ensure that your design won't cause you to exceed them.
-
-Later in this guide, you'll see examples that put all of these principles into practice.
-
-## Design for querying
-Table storage can be read intensive, write intensive, or a mix of the two. This section considers designing to support read operations efficiently. Typically, a design that supports read operations efficiently is also efficient for write operations. However, there are additional considerations when designing to support write operations. These are discussed in the next section, [Design for data modification](#design-for-data-modification).
-
-A good starting point to enable you to read data efficiently is to ask "What queries will my application need to run to retrieve the data it needs?"
-
-> [!NOTE]
-> With Table storage, it's important to get the design correct up front, because it's difficult and expensive to change it later. For example, in a relational database, it's often possible to address performance issues simply by adding indexes to an existing database. This isn't an option with Table storage.
-
-### How your choice of `PartitionKey` and `RowKey` affects query performance
-The following examples assume Table storage is storing employee entities with the following structure (most of the examples omit the `Timestamp` property for clarity):
-
-| Column name | Data type |
-| | |
-| `PartitionKey` (Department name) |String |
-| `RowKey` (Employee ID) |String |
-| `FirstName` |String |
-| `LastName` |String |
-| `Age` |Integer |
-| `EmailAddress` |String |
-
-Here are some general guidelines for designing Table storage queries. The filter syntax used in the following examples is from the Table storage REST API. For more information, see [Query entities](/rest/api/storageservices/Query-Entities).
-
-* A *point query* is the most efficient lookup to use, and is recommended for high-volume lookups or lookups requiring the lowest latency. Such a query can use the indexes to locate an individual entity efficiently by specifying both the `PartitionKey` and `RowKey` values. For example:
- `$filter=(PartitionKey eq 'Sales') and (RowKey eq '2')`.
-* Second best is a *range query*. It uses the `PartitionKey`, and filters on a range of `RowKey` values to return more than one entity. The `PartitionKey` value identifies a specific partition, and the `RowKey` values identify a subset of the entities in that partition. For example:
- `$filter=PartitionKey eq 'Sales' and RowKey ge 'S' and RowKey lt 'T'`.
-* Third best is a *partition scan*. It uses the `PartitionKey`, and filters on another non-key property and might return more than one entity. The `PartitionKey` value identifies a specific partition, and the property values select for a subset of the entities in that partition. For example:
- `$filter=PartitionKey eq 'Sales' and LastName eq 'Smith'`.
-* A *table scan* doesn't include the `PartitionKey`, and is inefficient because it searches all of the partitions that make up your table for any matching entities. It performs a table scan regardless of whether or not your filter uses the `RowKey`. For example:
- `$filter=LastName eq 'Jones'`.
-* Azure Table storage queries that return multiple entities sort them in `PartitionKey` and `RowKey` order. To avoid resorting the entities in the client, choose a `RowKey` that defines the most common sort order. Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/azure/cosmos-db/table-storage-how-to-use-java).
-
-Using an "**or**" to specify a filter based on `RowKey` values results in a partition scan, and isn't treated as a range query. Therefore, avoid queries that use filters such as:
-`$filter=PartitionKey eq 'Sales' and (RowKey eq '121' or RowKey eq '322')`.
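To make these query types concrete, here's how the point query and range query above might be issued from the Azure CLI; the storage account and table names are placeholders:

```azurecli
# Point query: both PartitionKey and RowKey are specified
az storage entity show \
  --account-name mystorageaccount \
  --table-name Employees \
  --partition-key Sales \
  --row-key 2

# Range query: PartitionKey plus a range of RowKey values
az storage entity query \
  --account-name mystorageaccount \
  --table-name Employees \
  --filter "PartitionKey eq 'Sales' and RowKey ge 'S' and RowKey lt 'T'"
```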
-
-For examples of client-side code that use the Storage Client Library to run efficient queries, see:
-
-* [Run a point query by using the Storage Client Library](#run-a-point-query-by-using-the-storage-client-library)
-* [Retrieve multiple entities by using LINQ](#retrieve-multiple-entities-by-using-linq)
-* [Server-side projection](#server-side-projection)
-
-For examples of client-side code that can handle multiple entity types stored in the same table, see:
-
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-
-### Choose an appropriate `PartitionKey`
-Your choice of `PartitionKey` should balance the need to enable the use of EGTs (to ensure consistency) against the requirement to distribute your entities across multiple partitions (to ensure a scalable solution).
-
-At one extreme, you can store all your entities in a single partition. But this might limit the scalability of your solution, and would prevent Table storage from being able to load-balance requests. At the other extreme, you can store one entity per partition. This is highly scalable and enables Table storage to load-balance requests, but prevents you from using entity group transactions.
-
-An ideal `PartitionKey` enables you to use efficient queries, and has sufficient partitions to ensure your solution is scalable. Typically, you'll find that your entities will have a suitable property that distributes your entities across sufficient partitions.
-
-> [!NOTE]
-> For example, in a system that stores information about users or employees, `UserID` can be a good `PartitionKey`. You might have several entities that use a particular `UserID` as the partition key. Each entity that stores data about a user is grouped into a single partition. These entities are accessible via EGTs, while still being highly scalable.
->
->
-
-There are additional considerations in your choice of `PartitionKey` that relate to how you insert, update, and delete entities. For more information, see [Design for data modification](#design-for-data-modification) later in this article.
-
-### Optimize queries for Table storage
-Table storage automatically indexes your entities by using the `PartitionKey` and `RowKey` values in a single clustered index. This is the reason that point queries are the most efficient to use. However, there are no indexes other than that on the clustered index on the `PartitionKey` and `RowKey`.
-
-Many designs must meet requirements to enable lookup of entities based on multiple criteria. For example, locating employee entities based on email, employee ID, or last name. The following patterns in the section [Table design patterns](#table-design-patterns) address these types of requirements. The patterns also describe ways of working around the fact that Table storage doesn't provide secondary indexes.
-
-* [Intra-partition secondary index pattern](#intra-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values (in the same partition). This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Inter-partition secondary index pattern](#inter-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values in separate partitions or in separate tables. This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Index entities pattern](#index-entities-pattern): Maintain index entities to enable efficient searches that return lists of entities.
-
-### Sort data in Table storage
-
-Table storage returns query results sorted in ascending order, based on `PartitionKey` and then by `RowKey`.
-
-> [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table storage](/azure/cosmos-db/table-storage-how-to-use-java).
-
-Keys in Table storage are string values. To ensure that numeric values sort correctly, you should convert them to a fixed length, and pad them with zeroes. For example, if the employee ID value you use as the `RowKey` is an integer value, you should convert employee ID **123** to **00000123**.
-
-Many applications have requirements to use data sorted in different orders: for example, sorting employees by name, or by joining date. The following patterns in the section [Table design patterns](#table-design-patterns) address how to alternate sort orders for your entities:
-
-* [Intra-partition secondary index pattern](#intra-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values (in the same partition). This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Inter-partition secondary index pattern](#inter-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values in separate partitions in separate tables. This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Log tail pattern](#log-tail-pattern): Retrieve the *n* entities most recently added to a partition, by using a `RowKey` value that sorts in reverse date and time order.
-
-## Design for data modification
-This section focuses on the design considerations for optimizing inserts, updates, and deletes. In some cases, you'll need to evaluate the trade-off between designs that optimize for querying against designs that optimize for data modification. This evaluation is similar to what you do in designs for relational databases (although the techniques for managing the design trade-offs are different in a relational database). The section [Table design patterns](#table-design-patterns) describes some detailed design patterns for Table storage, and highlights some of these trade-offs. In practice, you'll find that many designs optimized for querying entities also work well for modifying entities.
-
-### Optimize the performance of insert, update, and delete operations
-To update or delete an entity, you must be able to identify it by using the `PartitionKey` and `RowKey` values. In this respect, your choice of `PartitionKey` and `RowKey` for modifying entities should follow similar criteria to your choice to support point queries. You want to identify entities as efficiently as possible. You don't want to use an inefficient partition or table scan to locate an entity in order to discover the `PartitionKey` and `RowKey` values you need to update or delete it.
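For example, once the key values are known, an update or delete is a single targeted operation; the account, table, and property values below are placeholders:

```azurecli
# Replace requires the full entity, addressed by its PartitionKey and RowKey
az storage entity replace \
  --account-name mystorageaccount \
  --table-name Employees \
  --entity PartitionKey=Sales RowKey=000223 FirstName=John LastName=Jones

# Delete is addressed by the same two key values
az storage entity delete \
  --account-name mystorageaccount \
  --table-name Employees \
  --partition-key Sales \
  --row-key 000223
```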
-
-The following patterns in the section [Table design patterns](#table-design-patterns) address optimizing the performance of your insert, update, and delete operations:
-
-* [High volume delete pattern](#high-volume-delete-pattern): Enable the deletion of a high volume of entities by storing all the entities for simultaneous deletion in their own separate table. You delete the entities by deleting the table.
-* [Data series pattern](#data-series-pattern): Store complete data series in a single entity to minimize the number of requests you make.
-* [Wide entities pattern](#wide-entities-pattern): Use multiple physical entities to store logical entities with more than 252 properties.
-* [Large entities pattern](#large-entities-pattern): Use blob storage to store large property values.
-
-### Ensure consistency in your stored entities
-The other key factor that influences your choice of keys for optimizing data modifications is how to ensure consistency by using atomic transactions. You can only use an EGT to operate on entities stored in the same partition.
-
-The following patterns in the section [Table design patterns](#table-design-patterns) address managing consistency:
-
-* [Intra-partition secondary index pattern](#intra-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values (in the same partition). This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Inter-partition secondary index pattern](#inter-partition-secondary-index-pattern): Store multiple copies of each entity by using different `RowKey` values in separate partitions or in separate tables. This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern): Enable eventually consistent behavior across partition boundaries or storage system boundaries by using Azure queues.
-* [Index entities pattern](#index-entities-pattern): Maintain index entities to enable efficient searches that return lists of entities.
-* [Denormalization pattern](#denormalization-pattern): Combine related data together in a single entity, to enable you to retrieve all the data you need with a single point query.
-* [Data series pattern](#data-series-pattern): Store complete data series in a single entity, to minimize the number of requests you make.
-
-For more information, see [Entity group transactions](#entity-group-transactions) later in this article.
-
-### Ensure your design for efficient modifications facilitates efficient queries
-In many cases, a design for efficient querying results in efficient modifications, but you should always evaluate whether this is the case for your specific scenario. Some of the patterns in the section [Table design patterns](#table-design-patterns) explicitly evaluate trade-offs between querying and modifying entities, and you should always take into account the number of each type of operation.
-
-The following patterns in the section [Table design patterns](#table-design-patterns) address trade-offs between designing for efficient queries and designing for efficient data modification:
-
-* [Compound key pattern](#compound-key-pattern): Use compound `RowKey` values to enable a client to look up related data with a single point query.
-* [Log tail pattern](#log-tail-pattern): Retrieve the *n* entities most recently added to a partition, by using a `RowKey` value that sorts in reverse date and time order.
-
-## Encrypt table data
-The .NET Azure Storage client library supports encryption of string entity properties for insert and replace operations. The encrypted strings are stored on the service as binary properties, and they're converted back to strings after decryption.
-
-For tables, in addition to the encryption policy, users must specify the properties to be encrypted. Either specify an `EncryptProperty` attribute (for POCO entities that derive from `TableEntity`), or specify an encryption resolver in request options. An encryption resolver is a delegate that takes a partition key, row key, and property name, and returns a Boolean that indicates whether that property should be encrypted. During encryption, the client library uses this information to decide whether a property should be encrypted while writing to the wire. The delegate also provides for the possibility of logic around how properties are encrypted. (For example, if X, then encrypt property A; otherwise encrypt properties A and B.) It's not necessary to provide this information while reading or querying entities.
-
-Merge isn't currently supported. Because a subset of properties might have been encrypted previously by using a different key, simply merging the new properties and updating the metadata will result in data loss. Merging either requires making extra service calls to read the pre-existing entity from the service, or using a new key per property. Neither of these are suitable for performance reasons.
-
-For information about encrypting table data, see [Client-side encryption and Azure Key Vault for Microsoft Azure Storage](../storage/common/storage-client-side-encryption.md).
-
-## Model relationships
-Building domain models is a key step in the design of complex systems. Typically, you use the modeling process to identify entities and the relationships between them, as a way to understand the business domain and inform the design of your system. This section focuses on how you can translate some of the common relationship types found in domain models to designs for Table storage. The process of mapping from a logical data model to a physical NoSQL-based data model is different from that used when designing a relational database. Relational databases design typically assumes a data normalization process optimized for minimizing redundancy. Such design also assumes a declarative querying capability that abstracts the implementation of how the database works.
-
-### One-to-many relationships
-One-to-many relationships between business domain objects occur frequently: for example, one department has many employees. There are several ways to implement one-to-many relationships in Table storage, each with pros and cons that might be relevant to the particular scenario.
-
-Consider the example of a large multinational corporation with tens of thousands of departments and employee entities. Every department has many employees and each employee is associated with one specific department. One approach is to store separate department and employee entities, such as the following:
--
-This example shows an implicit one-to-many relationship between the types, based on the `PartitionKey` value. Each department can have many employees.
-
-This example also shows a department entity and its related employee entities in the same partition. You can choose to use different partitions, tables, or even storage accounts for the different entity types.
-
-An alternative approach is to denormalize your data, and store only employee entities with denormalized department data, as shown in the following example. In this particular scenario, this denormalized approach might not be the best if you have a requirement to be able to change the details of a department manager. To do this, you would need to update every employee in the department.
--
-For more information, see the [Denormalization pattern](#denormalization-pattern) later in this guide.
-
-The following table summarizes the pros and cons of each of the approaches for storing employee and department entities that have a one-to-many relationship. You should also consider how often you expect to perform various operations. It might be acceptable to have a design that includes an expensive operation if that operation only happens infrequently.
-
-<table>
-<tr>
-<th>Approach</th>
-<th>Pros</th>
-<th>Cons</th>
-</tr>
-<tr>
-<td>Separate entity types, same partition, same table</td>
-<td>
-<ul>
-<li>You can update a department entity with a single operation.</li>
-<li>You can use an EGT to maintain consistency if you have a requirement to modify a department entity whenever you update/insert/delete an employee entity. For example, if you maintain a departmental employee count for each department.</li>
-</ul>
-</td>
-<td>
-<ul>
-<li>You might need to retrieve both an employee and a department entity for some client activities.</li>
-<li>Storage operations happen in the same partition. At high transaction volumes, this can result in a hotspot.</li>
-<li>You can't move an employee to a new department by using an EGT.</li>
-</ul>
-</td>
-</tr>
-<tr>
-<td>Separate entity types, different partitions, or tables or storage accounts</td>
-<td>
-<ul>
-<li>You can update a department entity or employee entity with a single operation.</li>
-<li>At high transaction volumes, this can help spread the load across more partitions.</li>
-</ul>
-</td>
-<td>
-<ul>
-<li>You might need to retrieve both an employee and a department entity for some client activities.</li>
-<li>You can't use EGTs to maintain consistency when you update/insert/delete an employee and update a department. For example, updating an employee count in a department entity.</li>
-<li>You can't move an employee to a new department by using an EGT.</li>
-</ul>
-</td>
-</tr>
-<tr>
-<td>Denormalize into single entity type</td>
-<td>
-<ul>
-<li>You can retrieve all the information you need with a single request.</li>
-</ul>
-</td>
-<td>
-<ul>
-<li>It can be expensive to maintain consistency if you need to update department information (this would require you to update all the employees in a department).</li>
-</ul>
-</td>
-</tr>
-</table>
-
-How you choose among these options, and which of the pros and cons are most significant, depends on your specific application scenarios. For example, how often do you modify department entities? Do all your employee queries need the additional departmental information? How close are you to the scalability limits on your partitions or your storage account?
-
-### One-to-one relationships
-Domain models can include one-to-one relationships between entities. If you need to implement a one-to-one relationship in Table storage, you must also choose how to link the two related entities when you need to retrieve them both. This link can be either implicit, based on a convention in the key values, or explicit, by storing a link in the form of `PartitionKey` and `RowKey` values in each entity to its related entity. For a discussion of whether you should store the related entities in the same partition, see the section [One-to-many relationships](#one-to-many-relationships).
-
-There are also implementation considerations that might lead you to implement one-to-one relationships in Table storage:
-
-* Handling large entities (for more information, see [Large entities pattern](#large-entities-pattern)).
-* Implementing access controls (for more information, see [Control access with shared access signatures](#control-access-with-shared-access-signatures)).
-
-### Join in the client
-Although there are ways to model relationships in Table storage, don't forget that the two prime reasons for using Table storage are scalability and performance. If you find you are modeling many relationships that compromise the performance and scalability of your solution, you should ask yourself if it's necessary to build all the data relationships into your table design. You might be able to simplify the design, and improve the scalability and performance of your solution, if you let your client application perform any necessary joins.
-
-For example, if you have small tables that contain data that doesn't change often, you can retrieve this data once, and cache it on the client. This can avoid repeated roundtrips to retrieve the same data. In the examples we've looked at in this guide, the set of departments in a small organization is likely to be small and change infrequently. This makes it a good candidate for data that a client application can download once and cache as lookup data.
-
-### Inheritance relationships
-If your client application uses a set of classes that form part of an inheritance relationship to represent business entities, you can easily persist those entities in Table storage. For example, you might have the following set of classes defined in your client application, where `Person` is an abstract class.
--
-You can persist instances of the two concrete classes in Table storage by using a single `Person` table. Use entities that look like the following:
--
-For more information about working with multiple entity types in the same table in client code, see [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types) later in this guide. This provides examples of how to recognize the entity type in client code.
-
-## Table design patterns
-In previous sections, you learned about how to optimize your table design for both retrieving entity data by using queries, and for inserting, updating, and deleting entity data. This section describes some patterns appropriate for use with Table storage. In addition, you'll see how you can practically address some of the issues and trade-offs raised previously in this guide. The following diagram summarizes the relationships among the different patterns:
--
-The pattern map highlights some relationships between patterns (blue) and anti-patterns (orange) that are documented in this guide. There are of course many other patterns that are worth considering. For example, one of the key scenarios for Table storage is to use the [materialized view pattern](/previous-versions/msp-n-p/dn589782(v=pandp.10)) from the [command query responsibility segregation](/previous-versions/msp-n-p/jj554200(v=pandp.10)) pattern.
-
-### Intra-partition secondary index pattern
-Store multiple copies of each entity by using different `RowKey` values (in the same partition). This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values. Updates between copies can be kept consistent by using EGTs.
-
-#### Context and problem
-Table storage automatically indexes entities by using the `PartitionKey` and `RowKey` values. This enables a client application to retrieve an entity efficiently by using these values. For example, using the following table structure, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the `PartitionKey` and `RowKey` values). A client can also retrieve entities sorted by employee ID within each department.
--
-If you also want to find an employee entity based on the value of another property, such as email address, you must use a less efficient partition scan to find a match. This is because Table storage doesn't provide secondary indexes. In addition, there's no option to request a list of employees sorted in a different order than `RowKey` order.
-
-#### Solution
-To work around the lack of secondary indexes, you can store multiple copies of each entity, with each copy using a different `RowKey` value. If you store an entity with the following structures, you can efficiently retrieve employee entities based on email address or employee ID. The `RowKey` prefix values `empid_` and `email_` enable you to query for a single employee, or a range of employees, by using a range of employee IDs or email addresses.
--
-The following two filter criteria (one looking up by employee ID, and one looking up by email address) both specify point queries:
-
-* $filter=(PartitionKey eq 'Sales') and (RowKey eq 'empid_000223')
-* $filter=(PartitionKey eq 'Sales') and (RowKey eq 'email_jonesj@contoso.com')
-
-If you query for a range of employee entities, you can specify a range sorted in employee ID order, or a range sorted in email address order. Query for entities with the appropriate prefix in the `RowKey`.
-
-* To find all the employees in the Sales department with an employee ID in the range 000100 to 000199, use:
- $filter=(PartitionKey eq 'Sales') and (RowKey ge 'empid_000100') and (RowKey le 'empid_000199')
-* To find all the employees in the Sales department with an email address starting with the letter "a", use:
- $filter=(PartitionKey eq 'Sales') and (RowKey ge 'email_a') and (RowKey lt 'email_b')
-
-The filter syntax used in the preceding examples is from the Table storage REST API. For more information, see [Query entities](/rest/api/storageservices/Query-Entities).
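As a sketch, the email-prefixed range lookup above might be run from the Azure CLI like this (account and table names are placeholders):

```azurecli
az storage entity query \
  --account-name mystorageaccount \
  --table-name Employees \
  --filter "PartitionKey eq 'Sales' and RowKey ge 'email_a' and RowKey lt 'email_b'"
```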
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* Table storage is relatively cheap to use, so the cost overhead of storing duplicate data shouldn't be a major concern. However, you should always evaluate the cost of your design based on your anticipated storage requirements, and only add duplicate entities to support the queries your client application will run.
-* Because the secondary index entities are stored in the same partition as the original entities, ensure that you don't exceed the scalability targets for an individual partition.
-* You can keep your duplicate entities consistent with each other by using EGTs to update the two copies of the entity atomically. This implies that you should store all copies of an entity in the same partition. For more information, see [Use entity group transactions](#entity-group-transactions).
-* The value used for the `RowKey` must be unique for each entity. Consider using compound key values.
-* Padding numeric values in the `RowKey` (for example, the employee ID 000223) enables correct sorting and filtering based on upper and lower bounds.
-* You don't necessarily need to duplicate all the properties of your entity. For example, if the queries that look up the entities by using the email address in the `RowKey` never need the employee's age, these entities can have the following structure:
-
- :::image type="content" source="./media/storage-table-design-guide/storage-table-design-IMAGE08.png" alt-text="Graphic of employee entity":::
-
-* Typically, it's better to store duplicate data and ensure that you can retrieve all the data you need with a single query, than to use one query to locate an entity and another to look up the required data.
-
-#### When to use this pattern
-Use this pattern when:
-- Your client application needs to retrieve entities by using a variety of different keys.
-- Your client needs to retrieve entities in different sort orders.
-- You can identify each entity by using a variety of unique values.
-
-However, be sure that you don't exceed the partition scalability limits when you're performing entity lookups by using the different `RowKey` values.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Inter-partition secondary index pattern](#inter-partition-secondary-index-pattern)
-* [Compound key pattern](#compound-key-pattern)
-* [Entity group transactions](#entity-group-transactions)
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-
-### Inter-partition secondary index pattern
-Store multiple copies of each entity by using different `RowKey` values in separate partitions or in separate tables. This enables fast and efficient lookups, and alternate sort orders by using different `RowKey` values.
-
-#### Context and problem
-Table storage automatically indexes entities by using the `PartitionKey` and `RowKey` values. This enables a client application to retrieve an entity efficiently by using these values. For example, using the following table structure, a client application can use a point query to retrieve an individual employee entity by using the department name and the employee ID (the `PartitionKey` and `RowKey` values). A client can also retrieve entities sorted by employee ID within each department.
--
-If you also want to be able to find an employee entity based on the value of another property, such as email address, you must use a less efficient partition scan to find a match. This is because Table storage doesn't provide secondary indexes. In addition, there's no option to request a list of employees sorted in a different order than `RowKey` order.
-
-You're anticipating a high volume of transactions against these entities, and want to minimize the risk of the Table storage rate limiting your client.
-
-#### Solution
-To work around the lack of secondary indexes, you can store multiple copies of each entity, with each copy using different `PartitionKey` and `RowKey` values. If you store an entity with the following structures, you can efficiently retrieve employee entities based on email address or employee ID. The `PartitionKey` prefix values `empid_` and `email_` enable you to identify which index you want to use for a query.
--
-The following two filter criteria (one looking up by employee ID, and one looking up by email address) both specify point queries:
-
-* $filter=(PartitionKey eq 'empid_Sales') and (RowKey eq '000223')
-* $filter=(PartitionKey eq 'email_Sales') and (RowKey eq 'jonesj@contoso.com')
-
-If you query for a range of employee entities, you can specify a range sorted in employee ID order, or a range sorted in email address order. Query for entities with the appropriate prefix in the `RowKey`.
-
-* To find all the employees in the Sales department with an employee ID in the range **000100** to **000199**, sorted in employee ID order, use:
- $filter=(PartitionKey eq 'empid_Sales') and (RowKey ge '000100') and (RowKey le '000199')
-* To find all the employees in the Sales department with an email address that starts with "a", sorted in email address order, use:
- $filter=(PartitionKey eq 'email_Sales') and (RowKey ge 'a') and (RowKey lt 'b')
-
-Note that the filter syntax used in the preceding examples is from the Table storage REST API. For more information, see [Query entities](/rest/api/storageservices/Query-Entities).
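A sketch of writing the same employee into both index partitions follows; the account, table, and property values are placeholders, and keeping the two copies consistent is addressed in the considerations below:

```azurecli
# Copy keyed by employee ID
az storage entity insert \
  --account-name mystorageaccount \
  --table-name Employees \
  --entity PartitionKey=empid_Sales RowKey=000223 FirstName=J LastName=Jones Email=jonesj@contoso.com

# Copy keyed by email address
az storage entity insert \
  --account-name mystorageaccount \
  --table-name Employees \
  --entity PartitionKey=email_Sales RowKey=jonesj@contoso.com FirstName=J LastName=Jones EmployeeId=000223
```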
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* You can keep your duplicate entities eventually consistent with each other by using the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) to maintain the primary and secondary index entities.
-* Table storage is relatively cheap to use, so the cost overhead of storing duplicate data should not be a major concern. However, always evaluate the cost of your design based on your anticipated storage requirements, and only add duplicate entities to support the queries your client application will run.
-* The value used for the `RowKey` must be unique for each entity. Consider using compound key values.
-* Padding numeric values in the `RowKey` (for example, the employee ID 000223) enables correct sorting and filtering based on upper and lower bounds.
-* You don't necessarily need to duplicate all the properties of your entity. For example, if the queries that look up the entities by using the email address in the `RowKey` never need the employee's age, these entities can have the following structure:
-
- :::image type="content" source="./media/storage-table-design-guide/storage-table-design-IMAGE11.png" alt-text="Graphic showing employee entity with secondary index":::
-
-* Typically, it's better to store duplicate data and ensure that you can retrieve all the data you need with a single query, than to use one query to locate an entity by using the secondary index and another to look up the required data in the primary index.
-
-#### When to use this pattern
-Use this pattern when:
-- Your client application needs to retrieve entities by using a variety of different keys.
-- Your client needs to retrieve entities in different sort orders.
-- You can identify each entity by using a variety of unique values.
-
-Use this pattern when you want to avoid exceeding the partition scalability limits when you are performing entity lookups by using the different `RowKey` values.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
-* [Intra-partition secondary index pattern](#intra-partition-secondary-index-pattern)
-* [Compound key pattern](#compound-key-pattern)
-* [Entity group transactions](#entity-group-transactions)
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-
-### Eventually consistent transactions pattern
-Enable eventually consistent behavior across partition boundaries or storage system boundaries by using Azure queues.
-
-#### Context and problem
-EGTs enable atomic transactions across multiple entities that share the same partition key. For performance and scalability reasons, you might decide to store entities that have consistency requirements in separate partitions or in a separate storage system. In such a scenario, you can't use EGTs to maintain consistency. For example, you might have a requirement to maintain eventual consistency between:
-
-* Entities stored in two different partitions in the same table, in different tables, or in different storage accounts.
-* An entity stored in Table storage and a blob stored in Blob storage.
-* An entity stored in Table storage and a file in a file system.
-* An entity stored in Table storage, yet indexed by using Azure Cognitive Search.
-
-#### Solution
-By using Azure queues, you can implement a solution that delivers eventual consistency across two or more partitions or storage systems.
-
-To illustrate this approach, assume you have a requirement to be able to archive former employee entities. Former employee entities are rarely queried, and should be excluded from any activities that deal with current employees. To implement this requirement, you store active employees in the **Current** table and former employees in the **Archive** table. Archiving an employee requires you to delete the entity from the **Current** table, and add the entity to the **Archive** table.
-
-But you can't use an EGT to perform these two operations. To avoid the risk that a failure causes an entity to appear in both or neither tables, the archive operation must be eventually consistent. The following sequence diagram outlines the steps in this operation.
--
-A client initiates the archive operation by placing a message on an Azure queue (in this example, to archive employee #456). A worker role polls the queue for new messages; when it finds one, it reads the message and leaves a hidden copy on the queue. The worker role next fetches a copy of the entity from the **Current** table, inserts a copy in the **Archive** table, and then deletes the original from the **Current** table. Finally, if there were no errors from the previous steps, the worker role deletes the hidden message from the queue.
-
-In this example, step 4 in the diagram inserts the employee into the **Archive** table. Alternatively, it could add the employee to a blob in Blob storage or a file in a file system.
-
-#### Recover from failures
-It's important that the operations in steps 4-5 in the diagram be *idempotent* in case the worker role needs to restart the archive operation. If you're using Table storage, for step 4 you should use an "insert or replace" operation; for step 5, you should use a "delete if exists" operation in the client library you're using. If you're using another storage system, you must use an appropriate idempotent operation.
-
-If the worker role never completes step 6 in the diagram, then, after a timeout, the message reappears on the queue ready for the worker role to try to reprocess it. The worker role can check how many times a message on the queue has been read and, if necessary, flag it as a "poison" message for investigation by sending it to a separate queue. For more information about reading queue messages and checking the dequeue count, see [Get messages](/rest/api/storageservices/Get-Messages).
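-
-The following sketch shows one possible shape for such a worker loop body, using the queue and table types from the Storage Client Library. The `archiveQueue`, `currentTable`, and `archiveTable` references, the message format, and the `EmployeeEntity` class are assumptions for illustration; a production worker also needs the poison-message and retry handling described earlier:
-
-```csharp
-CloudQueueMessage message = archiveQueue.GetMessage(TimeSpan.FromMinutes(5));
-if (message != null)
-{
-    // The message body identifies the employee to archive, for example "Sales|000456".
-    string[] keys = message.AsString.Split('|');
-
-    // Fetch the entity from the Current table.
-    var retrieveResult = currentTable.Execute(
-        TableOperation.Retrieve<EmployeeEntity>(keys[0], keys[1]));
-    var employee = (EmployeeEntity)retrieveResult.Result;
-
-    if (employee != null)
-    {
-        // Step 4: insert or replace is idempotent if the worker has to retry.
-        archiveTable.Execute(TableOperation.InsertOrReplace(employee));
-
-        // Step 5: delete from the Current table, treating "already gone" as success.
-        try
-        {
-            employee.ETag = "*";
-            currentTable.Execute(TableOperation.Delete(employee));
-        }
-        catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 404)
-        {
-            // The entity was deleted on an earlier attempt.
-        }
-    }
-
-    // Step 6: remove the queue message only after the earlier steps succeed.
-    archiveQueue.DeleteMessage(message);
-}
-```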
-
-Some errors from Table storage and Queue storage are transient errors, and your client application should include suitable retry logic to handle them.
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* This solution doesn't provide for transaction isolation. For example, a client might read the **Current** and **Archive** tables when the worker role was between steps 4-5 in the diagram, and see an inconsistent view of the data. The data will be consistent eventually.
-* You must be sure that steps 4-5 are idempotent in order to ensure eventual consistency.
-* You can scale the solution by using multiple queues and worker role instances.
-
-#### When to use this pattern
-Use this pattern when you want to guarantee eventual consistency between entities that exist in different partitions or tables. You can extend this pattern to ensure eventual consistency for operations across Table storage and Blob storage, and other non-Azure Storage data sources, such as a database or the file system.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Entity group transactions](#entity-group-transactions)
-* [Merge or replace](#merge-or-replace)
-
-> [!NOTE]
-> If transaction isolation is important to your solution, consider redesigning your tables to enable you to use EGTs.
->
->
-
-### Index entities pattern
-Maintain index entities to enable efficient searches that return lists of entities.
-
-#### Context and problem
-Table storage automatically indexes entities by using the `PartitionKey` and `RowKey` values. This enables a client application to retrieve an entity efficiently by using a point query. For example, using the following table structure, a client application can efficiently retrieve an individual employee entity by using the department name and the employee ID (the `PartitionKey` and `RowKey`).
--
-If you also want to be able to retrieve a list of employee entities based on the value of another non-unique property, such as last name, you must use a less efficient partition scan. This scan finds matches, rather than using an index to look them up directly. This is because Table storage doesn't provide secondary indexes.
-
-#### Solution
-To enable lookup by last name with the preceding entity structure, you must maintain lists of employee IDs. If you want to retrieve the employee entities with a particular last name, such as Jones, you must first locate the list of employee IDs for employees with Jones as their last name, and then retrieve those employee entities. There are three main options for storing the lists of employee IDs:
-
-* Use Blob storage.
-* Create index entities in the same partition as the employee entities.
-* Create index entities in a separate partition or table.
-
-Option 1: Use Blob storage
-
-Create a blob for every unique last name, and in each blob store a list of the `PartitionKey` (department) and `RowKey` (employee ID) values for employees who have that last name. When you add or delete an employee, ensure that the content of the relevant blob is eventually consistent with the employee entities.
-
-Option 2: Create index entities in the same partition
-
-Use index entities that store the following data:
--
-The `EmployeeIDs` property contains a list of employee IDs for employees with the last name stored in the `RowKey`.
-
-The following steps outline the process you should follow when you're adding a new employee. In this example, we're adding an employee with ID 000152 and last name Jones in the Sales department:
-
-1. Retrieve the index entity with a `PartitionKey` value "Sales", and the `RowKey` value "Jones". Save the ETag of this entity to use in step 2.
-2. Create an entity group transaction (that is, a batch operation) that inserts the new employee entity (`PartitionKey` value "Sales" and `RowKey` value "000152"), and updates the index entity (`PartitionKey` value "Sales" and `RowKey` value "Jones"). The EGT does this by adding the new employee ID to the list in the EmployeeIDs field. For more information about EGTs, see [Entity group transactions](#entity-group-transactions).
-3. If the EGT fails because of an optimistic concurrency error (that is, someone else has modified the index entity), then you need to start over at step 1.
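-
-A minimal sketch of these steps with the Storage Client Library might look like the following. The `IndexEntity` class (with an `EmployeeIDs` string property), the `EmployeeEntity` constructor and properties, and the simple retry comment are assumptions for illustration:
-
-```csharp
-// Step 1: read the index entity; its ETag is carried on the entity and used later.
-var indexResult = employeeTable.Execute(
-    TableOperation.Retrieve<IndexEntity>("Sales", "Jones"));
-var index = (IndexEntity)indexResult.Result;
-index.EmployeeIDs = index.EmployeeIDs + ",000152";
-
-var newEmployee = new EmployeeEntity("Sales", "000152")
-{
-    FirstName = "Amy",
-    LastName = "Jones"
-};
-
-// Step 2: insert the employee and update the index entity in one EGT.
-var batch = new TableBatchOperation();
-batch.Insert(newEmployee);
-batch.Replace(index);    // Sends the ETag from step 1 as a precondition.
-
-try
-{
-    employeeTable.ExecuteBatch(batch);
-}
-catch (StorageException)
-{
-    // Step 3: if the failure is an optimistic concurrency error on the index entity,
-    // re-read it and retry from step 1.
-}
-```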
-
-You can use a similar approach to deleting an employee if you're using the second option. Changing an employee's last name is slightly more complex, because you need to run an EGT that updates three entities: the employee entity, the index entity for the old last name, and the index entity for the new last name. You must retrieve each entity before making any changes, in order to retrieve the ETag values that you can then use to perform the updates by using optimistic concurrency.
-
-The following steps outline the process you should follow when you need to look up all the employees with a particular last name in a department. In this example, we're looking up all the employees with last name Jones in the Sales department:
-
-1. Retrieve the index entity with a `PartitionKey` value "Sales", and the `RowKey` value "Jones".
-2. Parse the list of employee IDs in the `EmployeeIDs` field.
-3. If you need additional information about each of these employees (such as their email addresses), retrieve each of the employee entities by using `PartitionKey` value "Sales", and `RowKey` values from the list of employees you obtained in step 2.
-
-Option 3: Create index entities in a separate partition or table
-
-For this option, use index entities that store the following data:
--
-The `EmployeeDetails` property contains a list of employee IDs and department name pairs for employees with the last name stored in the `RowKey`.
-
-You can't use EGTs to maintain consistency, because the index entities are in a separate partition from the employee entities. Ensure that the index entities are eventually consistent with the employee entities.
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* This solution requires at least two queries to retrieve matching entities: one to query the index entities to obtain the list of `RowKey` values, and then queries to retrieve each entity in the list.
-* Because an individual entity has a maximum size of 1 MB, option 2 and option 3 in the solution assume that the list of employee IDs for any particular last name is never more than 1 MB. If the list of employee IDs is likely to be more than 1 MB in size, use option 1 and store the index data in Blob storage.
-* If you use option 2 (using EGTs to handle adding and deleting employees, and changing an employee's last name), you must evaluate if the volume of transactions will approach the scalability limits in a particular partition. If this is the case, you should consider an eventually consistent solution (option 1 or option 3). These use queues to handle the update requests, and enable you to store your index entities in a separate partition from the employee entities.
-* Option 2 in this solution assumes that you want to look up by last name within a department. For example, you want to retrieve a list of employees with the last name Jones in the Sales department. If you want to be able to look up all the employees with the last name Jones across the whole organization, use either option 1 or option 3.
-* You can implement a queue-based solution that delivers eventual consistency. For more details, see the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern).
-
-#### When to use this pattern
-Use this pattern when you want to look up a set of entities that all share a common property value, such as all employees with the last name Jones.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Compound key pattern](#compound-key-pattern)
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
-* [Entity group transactions](#entity-group-transactions)
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-
-### Denormalization pattern
-Combine related data together in a single entity to enable you to retrieve all the data you need with a single point query.
-
-#### Context and problem
-In a relational database, you typically normalize data to remove duplication that occurs when queries retrieve data from multiple tables. If you normalize your data in Azure tables, you must make multiple round trips from the client to the server to retrieve your related data. For example, with the following table structure, you need two round trips to retrieve the details for a department. One trip fetches the department entity that includes the manager's ID, and the second trip fetches the manager's details in an employee entity.
--
-#### Solution
-Instead of storing the data in two separate entities, denormalize the data and keep a copy of the manager's details in the department entity. For example:
--
-With department entities stored with these properties, you can now retrieve all the details you need about a department by using a point query.
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* There is some cost overhead associated with storing some data twice. The performance benefit resulting from fewer requests to Table storage typically outweighs the marginal increase in storage costs. Further, this cost is partially offset by a reduction in the number of transactions you require to fetch the details of a department.
-* You must maintain the consistency of the two entities that store information about managers. You can handle the consistency issue by using EGTs to update multiple entities in a single atomic transaction. In this case, the department entity and the employee entity for the department manager are stored in the same partition.
-
-#### When to use this pattern
-Use this pattern when you frequently need to look up related information. This pattern reduces the number of queries your client must make to retrieve the data it requires.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Compound key pattern](#compound-key-pattern)
-* [Entity group transactions](#entity-group-transactions)
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-
-### Compound key pattern
-Use compound `RowKey` values to enable a client to look up related data with a single point query.
-
-#### Context and problem
-In a relational database, it's natural to use joins in queries to return related pieces of data to the client in a single query. For example, you might use the employee ID to look up a list of related entities that contain performance and review data for that employee.
-
-Assume you are storing employee entities in Table storage by using the following structure:
--
-You also need to store historical data relating to reviews and performance for each year the employee has worked for your organization, and you need to be able to access this information by year. One option is to create another table that stores entities with the following structure:
--
-With this approach, you might decide to duplicate some information (such as first name and last name) in the new entity, to enable you to retrieve your data with a single request. However, you can't maintain strong consistency because you can't use an EGT to update the two entities atomically.
-
-#### Solution
-Store a new entity type in your original table by using entities with the following structure:
--
-Notice how the `RowKey` is now a compound key, made up of the employee ID and the year of the review data. This enables you to retrieve the employee's performance and review data with a single request for a single entity.
-
-The following example outlines how you can retrieve all the review data for a particular employee (such as employee 000123 in the Sales department):
-
-$filter=(PartitionKey eq 'Sales') and (RowKey ge 'empid_000123') and (RowKey lt 'empid_000124')&$select=RowKey,Manager Rating,Peer Rating,Comments
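-
-If you use the Storage Client Library rather than the REST API, you can build the same range query as shown in the following sketch, which returns the results as `DynamicTableEntity` instances and assumes the `employeeTable` reference used in the examples later in this guide:
-
-```csharp
-TableQuery<DynamicTableEntity> reviewQuery = new TableQuery<DynamicTableEntity>().Where(
-    TableQuery.CombineFilters(
-        TableQuery.GenerateFilterCondition(
-            "PartitionKey", QueryComparisons.Equal, "Sales"),
-        TableOperators.And,
-        TableQuery.CombineFilters(
-            TableQuery.GenerateFilterCondition(
-                "RowKey", QueryComparisons.GreaterThanOrEqual, "empid_000123"),
-            TableOperators.And,
-            TableQuery.GenerateFilterCondition(
-                "RowKey", QueryComparisons.LessThan, "empid_000124"))));
-
-var reviews = employeeTable.ExecuteQuery(reviewQuery);
-```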
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* You should use a suitable separator character that makes it easy to parse the `RowKey` value: for example, **000123_2012**.
-* You're also storing this entity in the same partition as other entities that contain related data for the same employee. This means you can use EGTs to maintain strong consistency.
-* You should consider how frequently you'll query the data to determine whether this pattern is appropriate. For example, if you access the review data infrequently, and the main employee data often, you should keep them as separate entities.
-
-#### When to use this pattern
-Use this pattern when you need to store one or more related entities that you query frequently.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Entity group transactions](#entity-group-transactions)
-* [Work with heterogeneous entity types](#work-with-heterogeneous-entity-types)
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
-
-### Log tail pattern
-Retrieve the *n* entities most recently added to a partition by using a `RowKey` value that sorts in reverse date and time order.
-
-> [!NOTE]
-> Query results returned by the Azure Table API in Azure Cosmos DB aren't sorted by partition key or row key. Thus, while this pattern is suitable for Table storage, it isn't suitable for Azure Cosmos DB. For a detailed list of feature differences, see [differences between Table API in Azure Cosmos DB and Azure Table Storage](/azure/cosmos-db/table-storage-how-to-use-java).
-
-#### Context and problem
-A common requirement is to be able to retrieve the most recently created entities, for example the ten most recent expense claims submitted by an employee. Table queries support a `$top` query operation to return the first *n* entities from a set. There's no equivalent query operation to return the last *n* entities in a set.
-
-#### Solution
-Store the entities by using a `RowKey` that naturally sorts in reverse date/time order, so the most recent entry is always the first one in the table.
-
-For example, to be able to retrieve the ten most recent expense claims submitted by an employee, you can use a reverse tick value derived from the current date/time. The following C# code sample shows one way to create a suitable "inverted ticks" value for a `RowKey` that sorts from the most recent to the oldest:
-
-`string invertedTicks = string.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);`
-
-You can get back to the date/time value by using the following code:
-
-`DateTime dt = new DateTime(DateTime.MaxValue.Ticks - Int64.Parse(invertedTicks));`
-
-The table query looks like this:
-
-`https://myaccount.table.core.windows.net/EmployeeExpense(PartitionKey='empid')?$top=10`
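-
-The following sketch puts these pieces together: it writes an expense claim with an inverted-ticks `RowKey`, and then reads the ten most recent claims for the same employee. The `expenseTable` reference, partition key, and `Amount` property are assumptions for illustration:
-
-```csharp
-// Write a claim whose RowKey sorts from most recent to oldest.
-string invertedTicks = string.Format("{0:D19}",
-    DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);
-var claim = new DynamicTableEntity("empid_000123", invertedTicks);
-claim.Properties["Amount"] = new EntityProperty(120.50);
-expenseTable.Execute(TableOperation.Insert(claim));
-
-// Read the ten most recent claims for the same employee.
-TableQuery<DynamicTableEntity> recentQuery = new TableQuery<DynamicTableEntity>()
-    .Where(TableQuery.GenerateFilterCondition(
-        "PartitionKey", QueryComparisons.Equal, "empid_000123"))
-    .Take(10);
-var recentClaims = expenseTable.ExecuteQuery(recentQuery);
-```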
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* You must pad the reverse tick value with leading zeroes, to ensure the string value sorts as expected.
-* You must be aware of the scalability targets at the level of a partition. Be careful to not create hot spot partitions.
-
-#### When to use this pattern
-Use this pattern when you need to access entities in reverse date/time order, or when you need to access the most recently added entities.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Prepend / append anti-pattern](#prepend-append-anti-pattern)
-* [Retrieve entities](#retrieve-entities)
-
-### High volume delete pattern
-Enable the deletion of a high volume of entities by storing all the entities for simultaneous deletion in their own separate table. You delete the entities by deleting the table.
-
-#### Context and problem
-Many applications delete old data that no longer needs to be available to a client application, or that the application has archived to another storage medium. You typically identify such data by a date. For example, you have a requirement to delete records of all sign-in requests that are more than 60 days old.
-
-One possible design is to use the date and time of the sign-in request in the `RowKey`:
--
-This approach avoids partition hotspots, because the application can insert and delete sign-in entities for each user in a separate partition. However, this approach can be costly and time consuming if you have a large number of entities. First, you need to perform a table scan in order to identify all the entities to delete, and then you must delete each old entity. You can reduce the number of round trips to the server required to delete the old entities by batching multiple delete requests into EGTs.
-
-#### Solution
-Use a separate table for each day of sign-in attempts. You can use the preceding entity design to avoid hotspots when you are inserting entities. Deleting old entities is now simply a matter of deleting one table every day (a single storage operation), instead of finding and deleting hundreds or thousands of individual sign-in entities every day.
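-
-The following sketch outlines this approach. The `tableClient` (`CloudTableClient`) reference and the date-based table names are assumptions for illustration:
-
-```csharp
-// Write today's sign-in entities to a table named for the current UTC date.
-CloudTable todayTable = tableClient.GetTableReference(
-    "SignIns" + DateTime.UtcNow.ToString("yyyyMMdd"));
-todayTable.CreateIfNotExists();
-
-// Deleting all the sign-in data for a day that has aged out is a single operation.
-CloudTable expiredTable = tableClient.GetTableReference(
-    "SignIns" + DateTime.UtcNow.AddDays(-60).ToString("yyyyMMdd"));
-expiredTable.DeleteIfExists();
-```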
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* Does your design support other ways your application will use the data, such as looking up specific entities, linking with other data, or generating aggregate information?
-* Does your design avoid hot spots when you are inserting new entities?
-* Expect a delay if you want to reuse the same table name after deleting it. It's better to always use unique table names.
-* Expect some rate limiting when you first use a new table, while Table storage learns the access patterns and distributes the partitions across nodes. You should consider how frequently you need to create new tables.
-
-#### When to use this pattern
-Use this pattern when you have a high volume of entities that you must delete at the same time.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Entity group transactions](#entity-group-transactions)
-* [Modify entities](#modify-entities)
-
-### Data series pattern
-Store complete data series in a single entity to minimize the number of requests you make.
-
-#### Context and problem
-A common scenario is for an application to store a series of data that it typically needs to retrieve all at once. For example, your application might record how many IM messages each employee sends every hour, and then use this information to plot how many messages each user sent over the preceding 24 hours. One design might be to store 24 entities for each employee:
--
-With this design, you can easily locate and update the entity to update for each employee whenever the application needs to update the message count value. However, to retrieve the information to plot a chart of the activity for the preceding 24 hours, you must retrieve 24 entities.
-
-#### Solution
-Use the following design, with a separate property to store the message count for each hour:
--
-With this design, you can use a merge operation to update the message count for an employee for a specific hour. Now, you can retrieve all the information you need to plot the chart by using a request for a single entity.
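-
-The following sketch shows such an update with the Storage Client Library, using `InsertOrMerge` so that the first update also creates the entity if it doesn't exist yet. The `messageStatsTable` reference and the per-hour property names are assumptions for illustration:
-
-```csharp
-int messageCount = 42;    // The new total for the hour being updated.
-
-// Merge only the property for hour 14; the other hourly counts are left intact.
-var update = new DynamicTableEntity("Sales", "000123");
-update.Properties["Hour14"] = new EntityProperty(messageCount);
-
-messageStatsTable.Execute(TableOperation.InsertOrMerge(update));
-```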
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* If your complete data series doesn't fit into a single entity (an entity can have up to 252 properties), use an alternative data store such as a blob.
-* If you have multiple clients updating an entity simultaneously, use the **ETag** to implement optimistic concurrency. If you have many clients, you might experience high contention.
-
-#### When to use this pattern
-Use this pattern when you need to update and retrieve a data series associated with an individual entity.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Large entities pattern](#large-entities-pattern)
-* [Merge or replace](#merge-or-replace)
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) (if you're storing the data series in a blob)
-
-### Wide entities pattern
-Use multiple physical entities to store logical entities with more than 252 properties.
-
-#### Context and problem
-An individual entity can have no more than 252 properties (excluding the mandatory system properties), and can't store more than 1 MB of data in total. In a relational database, you would typically work around any limits on the size of a row by adding a new table, and enforcing a 1-to-1 relationship between them.
-
-#### Solution
-By using Table storage, you can store multiple entities to represent a single large business object with more than 252 properties. For example, if you want to store a count of the number of IM messages sent by each employee for the last 365 days, you can use the following design that uses two entities with different schemas:
--
-If you need to make a change that requires updating both entities to keep them synchronized with each other, you can use an EGT. Otherwise, you can use a single merge operation to update the message count for a specific day. To retrieve all the data for an individual employee, you must retrieve both entities. You can do this with two efficient requests that use both a `PartitionKey` and a `RowKey` value.
-
-#### Issues and considerations
-Consider the following point when deciding how to implement this pattern:
-
-* Retrieving a complete logical entity involves at least two storage transactions: one to retrieve each physical entity.
-
-#### When to use this pattern
-Use this pattern when you need to store entities whose size or number of properties exceeds the limits for an individual entity in Table storage.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Entity group transactions](#entity-group-transactions)
-* [Merge or replace](#merge-or-replace)
-
-### Large entities pattern
-Use Blob storage to store large property values.
-
-#### Context and problem
-An individual entity can't store more than 1 MB of data in total. If one or several of your properties store values that cause the total size of your entity to exceed this value, you can't store the entire entity in Table storage.
-
-#### Solution
-If your entity exceeds 1 MB in size because one or more properties contain a large amount of data, you can store data in Blob storage, and then store the address of the blob in a property in the entity. For example, you can store the photo of an employee in Blob storage, and store a link to the photo in the `Photo` property of your employee entity:
--
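-
-The following sketch shows this approach. The `photoContainer` (`CloudBlobContainer`) reference, the blob name, the local file path, and the `employee` entity are assumptions for illustration:
-
-```csharp
-// Upload the photo to Blob storage.
-CloudBlockBlob photoBlob = photoContainer.GetBlockBlobReference("employee-000123.jpg");
-using (var stream = File.OpenRead(@"C:\photos\employee-000123.jpg"))
-{
-    photoBlob.UploadFromStream(stream);
-}
-
-// Store only the blob's address in the entity's Photo property.
-employee.Photo = photoBlob.Uri.ToString();
-employeeTable.Execute(TableOperation.InsertOrReplace(employee));
-```
-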
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* To maintain eventual consistency between the entity in Table storage and the data in Blob storage, use the [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern) to maintain your entities.
-* Retrieving a complete entity involves at least two storage transactions: one to retrieve the entity and one to retrieve the blob data.
-
-#### When to use this pattern
-Use this pattern when you need to store entities whose size exceeds the limits for an individual entity in Table storage.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Eventually consistent transactions pattern](#eventually-consistent-transactions-pattern)
-* [Wide entities pattern](#wide-entities-pattern)
-
-<a name="prepend-append-anti-pattern"></a>
-
-### Prepend/append anti-pattern
-When you have a high volume of inserts, increase scalability by spreading the inserts across multiple partitions.
-
-#### Context and problem
-Prepending or appending entities to your stored entities typically results in the application adding new entities to the first or last partition of a sequence of partitions. In this case, all of the inserts at any particular time are taking place in the same partition, creating a hotspot. This prevents Table storage from load-balancing inserts across multiple nodes, and possibly causes your application to hit the scalability targets for a partition. For example, consider the case of an application that logs network and resource access by employees. An entity structure such as the following can result in the current hour's partition becoming a hotspot, if the volume of transactions reaches the scalability target for an individual partition:
--
-#### Solution
-The following alternative entity structure avoids a hotspot on any particular partition, as the application logs events:
--
-Notice with this example how both the `PartitionKey` and `RowKey` are compound keys. The `PartitionKey` uses both the department and employee ID to distribute the logging across multiple partitions.
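-
-The following sketch shows how an application might build these compound keys when it logs an event. The key formats, property name, and `logTable` reference are assumptions for illustration:
-
-```csharp
-// PartitionKey combines department and employee ID; RowKey combines a timestamp
-// and a unique suffix so that two events in the same second don't collide.
-var logEntry = new DynamicTableEntity(
-    "Sales_000123",
-    string.Format("{0:yyyyMMddHHmmss}_{1:N}", DateTime.UtcNow, Guid.NewGuid()));
-logEntry.Properties["Message"] = new EntityProperty("Accessed resource X");
-
-logTable.Execute(TableOperation.Insert(logEntry));
-```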
-
-#### Issues and considerations
-Consider the following points when deciding how to implement this pattern:
-
-* Does the alternative key structure that avoids creating hot partitions on inserts efficiently support the queries your client application makes?
-* Does your anticipated volume of transactions mean that you're likely to reach the scalability targets for an individual partition, and be throttled by Table storage?
-
-#### When to use this pattern
-Avoid the prepend/append anti-pattern when your volume of transactions is likely to result in rate limiting by Table storage when you access a hot partition.
-
-#### Related patterns and guidance
-The following patterns and guidance might also be relevant when implementing this pattern:
-
-* [Compound key pattern](#compound-key-pattern)
-* [Log tail pattern](#log-tail-pattern)
-* [Modify entities](#modify-entities)
-
-### Log data anti-pattern
-Typically, you should use Blob storage instead of Table storage to store log data.
-
-#### Context and problem
-A common use case for log data is to retrieve a selection of log entries for a specific date/time range. For example, you want to find all the error and critical messages that your application logged between 15:04 and 15:06 on a specific date. You don't want to use the date and time of the log message to determine the partition you save log entities to. That results in a hot partition because at any particular time, all the log entities will share the same `PartitionKey` value (see the [Prepend/append anti-pattern](#prepend-append-anti-pattern)). For example, the following entity schema for a log message results in a hot partition, because the application writes all log messages to the partition for the current date and hour:
--
-In this example, the `RowKey` includes the date and time of the log message to ensure that log messages are sorted in date/time order. The `RowKey` also includes a message ID, in case multiple log messages share the same date and time.
-
-Another approach is to use a `PartitionKey` that ensures that the application writes messages across a range of partitions. For example, if the source of the log message provides a way to distribute messages across many partitions, you can use the following entity schema:
--
-However, the problem with this schema is that to retrieve all the log messages for a specific time span, you must search every partition in the table.
-
-#### Solution
-The previous section highlighted the problem of trying to use Table storage to store log entries, and suggested two unsatisfactory designs. One solution led to a hot partition with the risk of poor performance writing log messages. The other solution resulted in poor query performance, because of the requirement to scan every partition in the table to retrieve log messages for a specific time span. Blob storage offers a better solution for this type of scenario, and this is how Azure Storage analytics stores the log data it collects.
-
-This section outlines how Storage analytics stores log data in Blob storage, as an illustration of this approach to storing data that you typically query by range.
-
-Storage analytics stores log messages in a delimited format in multiple blobs. The delimited format makes it easy for a client application to parse the data in the log message.
-
-Storage analytics uses a naming convention for blobs that enables you to locate the blob (or blobs) that contain the log messages for which you are searching. For example, a blob named "queue/2014/07/31/1800/000001.log" contains log messages that relate to the queue service for the hour starting at 18:00 on July 31, 2014. The "000001" indicates that this is the first log file for this period. Storage analytics also records the timestamps of the first and last log messages stored in the file, as part of the blob's metadata. The API for Blob storage enables you to locate blobs in a container based on a name prefix. To locate all the blobs that contain queue log data for the hour starting at 18:00, you can use the prefix "queue/2014/07/31/1800".
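-
-For example, the following sketch lists the queue log blobs for that hour by prefix. The `logsContainer` reference is an assumption for illustration:
-
-```csharp
-// List every log blob whose name starts with the prefix for the 18:00 hour.
-foreach (IListBlobItem item in logsContainer.ListBlobs(
-    "queue/2014/07/31/1800", useFlatBlobListing: true))
-{
-    Console.WriteLine(item.Uri);
-}
-```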
-
-Storage analytics buffers log messages internally, and then periodically updates the appropriate blob or creates a new one with the latest batch of log entries. This reduces the number of writes it must perform to Blob storage.
-
-If you're implementing a similar solution in your own application, consider how to manage the trade-off between reliability and cost and scalability. In other words, evaluate the effect of writing every log entry to Blob storage as it happens, compared to buffering updates in your application and writing them to Blob storage in batches.
-
-#### Issues and considerations
-Consider the following points when deciding how to store log data:
-
-* If you create a table design that avoids potential hot partitions, you might find that you can't access your log data efficiently.
-* To process log data, a client often needs to load many records.
-* Although log data is often structured, Blob storage might be a better solution.
-
-### Implementation considerations
-This section discusses some of the considerations to bear in mind when you implement the patterns described in the previous sections. Most of this section uses examples written in C# that use the Storage Client Library (version 4.3.0 at the time of writing).
-
-### Retrieve entities
-As discussed in the section [Design for querying](#design-for-querying), the most efficient query is a point query. However, in some scenarios you might need to retrieve multiple entities. This section describes some common approaches to retrieving entities by using the Storage Client Library.
-
-#### Run a point query by using the Storage Client Library
-The easiest way to run a point query is to use the **Retrieve** table operation. As shown in the following C# code snippet, this operation retrieves an entity with a `PartitionKey` of value "Sales", and a `RowKey` of value "212":
-
-```csharp
-TableOperation retrieveOperation = TableOperation.Retrieve<EmployeeEntity>("Sales", "212");
-var retrieveResult = employeeTable.Execute(retrieveOperation);
-if (retrieveResult.Result != null)
-{
- EmployeeEntity employee = (EmployeeEntity)retrieveResult.Result;
- ...
-}
-```
-
-Notice how this example expects the entity it retrieves to be of type `EmployeeEntity`.
-
-#### Retrieve multiple entities by using LINQ
-You can retrieve multiple entities by using LINQ with Storage Client Library, and specifying a query with a **where** clause. To avoid a table scan, you should always include the `PartitionKey` value in the where clause, and if possible the `RowKey` value to avoid table and partition scans. Table storage supports a limited set of comparison operators (greater than, greater than or equal, less than, less than or equal, equal, and not equal) to use in the where clause. The following C# code snippet finds all the employees whose last name starts with "B" (assuming that the `RowKey` stores the last name) in the Sales department (assuming the `PartitionKey` stores the department name):
-
-```csharp
-TableQuery<EmployeeEntity> employeeQuery = employeeTable.CreateQuery<EmployeeEntity>();
-var query = (from employee in employeeQuery
- where employee.PartitionKey == "Sales" &&
- employee.RowKey.CompareTo("B") >= 0 &&
- employee.RowKey.CompareTo("C") < 0
- select employee).AsTableQuery();
-var employees = query.Execute();
-```
-
-Notice how the query specifies both a `RowKey` and a `PartitionKey` to ensure better performance.
-
-The following code sample shows equivalent functionality by using the fluent API (for more information about fluent APIs in general, see [Best practices for designing a fluent API](https://visualstudiomagazine.com/articles/2013/12/01/best-practices-for-designing-a-fluent-api.aspx)):
-
-```csharp
-TableQuery<EmployeeEntity> employeeQuery = new TableQuery<EmployeeEntity>().Where(
- TableQuery.CombineFilters(
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, "Sales"),
- TableOperators.And,
- TableQuery.GenerateFilterCondition(
- "RowKey", QueryComparisons.GreaterThanOrEqual, "B")
-        ),
-        TableOperators.And,
-        TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "C")
- )
-);
-var employees = employeeTable.ExecuteQuery(employeeQuery);
-```
-
-> [!NOTE]
-> The sample nests multiple `CombineFilters` methods to include the three filter conditions.
->
->
-
-#### Retrieve large numbers of entities from a query
-An optimal query returns an individual entity based on a `PartitionKey` value and a `RowKey` value. However, in some scenarios you might have a requirement to return many entities from the same partition, or even from many partitions. You should always fully test the performance of your application in such scenarios.
-
-A query against Table storage can return a maximum of 1,000 entities at one time, and run for a maximum of five seconds. Table storage returns a continuation token to enable the client application to request the next set of entities, if any of the following are true:
-- The result set contains more than 1,000 entities.
-- The query didn't complete within five seconds.
-- The query crosses the partition boundary.
-
-For more information about how continuation tokens work, see [Query timeout and pagination](/rest/api/storageservices/Query-Timeout-and-Pagination).
-
-If you're using the Storage Client Library, it can automatically handle continuation tokens for you as it returns entities from Table storage. For example, the following C# code sample automatically handles continuation tokens if Table storage returns them in a response:
-
-```csharp
-string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, "Sales");
-TableQuery<EmployeeEntity> employeeQuery =
- new TableQuery<EmployeeEntity>().Where(filter);
-
-var employees = employeeTable.ExecuteQuery(employeeQuery);
-foreach (var emp in employees)
-{
- ...
-}
-```
-
-The following C# code handles continuation tokens explicitly:
-
-```csharp
-string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, "Sales");
-TableQuery<EmployeeEntity> employeeQuery =
- new TableQuery<EmployeeEntity>().Where(filter);
-
-TableContinuationToken continuationToken = null;
-
-do
-{
- var employees = employeeTable.ExecuteQuerySegmented(
- employeeQuery, continuationToken);
- foreach (var emp in employees)
- {
- ...
- }
- continuationToken = employees.ContinuationToken;
-} while (continuationToken != null);
-```
-
-By using continuation tokens explicitly, you can control when your application retrieves the next segment of data. For example, if your client application enables users to page through the entities stored in a table, a user might decide not to page through all the entities retrieved by the query. Your application would only use a continuation token to retrieve the next segment when the user had finished paging through all the entities in the current segment. This approach has several benefits:
-
-* You can limit the amount of data to retrieve from Table storage and that you move over the network.
-* You can perform asynchronous I/O in .NET.
-* You can serialize the continuation token to persistent storage, so you can continue in the event of an application crash.
-
-> [!NOTE]
-> A continuation token typically returns a segment containing 1,000 entities, although it can contain fewer. This is also the case if you limit the number of entries a query returns by using **Take** to return the first *n* entities that match your lookup criteria. Table storage might return a segment containing fewer than *n* entities, along with a continuation token to enable you to retrieve the remaining entities.
->
->
-
-The following C# code shows how to modify the number of entities returned inside a segment:
-
-```csharp
-employeeQuery.TakeCount = 50;
-```
-
-#### Server-side projection
-A single entity can have up to 255 properties and be up to 1 MB in size. When you query the table and retrieve entities, you might not need all the properties, and can avoid transferring data unnecessarily (to help reduce latency and cost). You can use server-side projection to transfer just the properties you need. The following example retrieves just the `Email` property (along with `PartitionKey`, `RowKey`, `Timestamp`, and `ETag`) from the entities selected by the query.
-
-```csharp
-string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, "Sales");
-List<string> columns = new List<string>() { "Email" };
-TableQuery<EmployeeEntity> employeeQuery =
- new TableQuery<EmployeeEntity>().Where(filter).Select(columns);
-
-var entities = employeeTable.ExecuteQuery(employeeQuery);
-foreach (var e in entities)
-{
- Console.WriteLine("RowKey: {0}, EmployeeEmail: {1}", e.RowKey, e.Email);
-}
-```
-
-Notice how the `RowKey` value is available even though it isn't included in the list of properties to retrieve.
-
-### Modify entities
-The Storage Client Library enables you to modify your entities stored in Table storage by inserting, deleting, and updating entities. You can use EGTs to batch multiple insert, update, and delete operations together, to reduce the number of round trips required and improve the performance of your solution.
-
-Exceptions thrown when the Storage Client Library runs an EGT typically include the index of the entity that caused the batch to fail. This is helpful when you are debugging code that uses EGTs.
-
-You should also consider how your design affects how your client application handles concurrency and update operations.
-
-#### Managing concurrency
-By default, Table storage implements optimistic concurrency checks at the level of individual entities for insert, merge, and delete operations, although it's possible for a client to force Table storage to bypass these checks. For more information, see [Managing concurrency in Microsoft Azure Storage](../storage/blobs/concurrency-manage.md).
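-
-The following sketch shows the typical shape of an optimistic-concurrency update: the ETag returned by an earlier read travels with the entity and is sent as a precondition on the `Replace` operation. The `employee` entity and the recovery strategy are assumptions for illustration:
-
-```csharp
-employee.Email = "jonesj@contoso.com";
-
-try
-{
-    // Fails with 412 (Precondition Failed) if another client changed the entity
-    // after it was read.
-    employeeTable.Execute(TableOperation.Replace(employee));
-}
-catch (StorageException ex) when (ex.RequestInformation.HttpStatusCode == 412)
-{
-    // Re-read the entity and retry, or set employee.ETag = "*" to overwrite unconditionally.
-}
-```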
-
-#### Merge or replace
-The `Replace` method of the `TableOperation` class always replaces the complete entity in Table storage. If you don't include a property in the request when that property exists in the stored entity, the request removes that property from the stored entity. Unless you want to remove a property explicitly from a stored entity, you must include every property in the request.
-
-You can use the `Merge` method of the `TableOperation` class to reduce the amount of data that you send to Table storage when you want to update an entity. The `Merge` method replaces any properties in the stored entity with property values from the entity included in the request. This method leaves intact any properties in the stored entity that aren't included in the request. This is useful if you have large entities, and only need to update a small number of properties in a request.
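-
-The following sketch uses `Merge` with a `DynamicTableEntity` to send only the property that changed. The key values and the unconditional "*" ETag are assumptions for illustration:
-
-```csharp
-// Update a single property of a large stored entity without sending the rest of it.
-var patch = new DynamicTableEntity("Sales", "000123") { ETag = "*" };
-patch.Properties["Email"] = new EntityProperty("jonesj@contoso.com");
-
-employeeTable.Execute(TableOperation.Merge(patch));
-```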
-
-> [!NOTE]
-> The `Replace` and `Merge` methods fail if the entity doesn't exist. As an alternative, you can use the `InsertOrReplace` and `InsertOrMerge` methods that create a new entity if it doesn't exist.
->
->
-
-### Work with heterogeneous entity types
-Table storage is a *schema-less* table store. That means that a single table can store entities of multiple types, providing great flexibility in your design. The following example illustrates a table storing both employee and department entities:
-
-<table>
-<tr>
-<th>PartitionKey</th>
-<th>RowKey</th>
-<th>Timestamp</th>
-<th></th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>DepartmentName</th>
-<th>EmployeeCount</th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-</tr>
-</table>
-</td>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</td>
-</tr>
-</table>
-
-Each entity must still have `PartitionKey`, `RowKey`, and `Timestamp` values, but can have any set of properties. Furthermore, there's nothing to indicate the type of an entity unless you choose to store that information somewhere. There are two options for identifying the entity type:
-
-* Prepend the entity type to the `RowKey` (or possibly the `PartitionKey`). For example, `EMPLOYEE_000123` or `DEPARTMENT_SALES` as `RowKey` values.
-* Use a separate property to record the entity type, as shown in the following table.
-
-<table>
-<tr>
-<th>PartitionKey</th>
-<th>RowKey</th>
-<th>Timestamp</th>
-<th></th>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>EntityType</th>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Employee</td>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>EntityType</th>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Employee</td>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>EntityType</th>
-<th>DepartmentName</th>
-<th>EmployeeCount</th>
-</tr>
-<tr>
-<td>Department</td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</td>
-</tr>
-<tr>
-<td></td>
-<td></td>
-<td></td>
-<td>
-<table>
-<tr>
-<th>EntityType</th>
-<th>FirstName</th>
-<th>LastName</th>
-<th>Age</th>
-<th>Email</th>
-</tr>
-<tr>
-<td>Employee</td>
-<td></td>
-<td></td>
-<td></td>
-<td></td>
-</tr>
-</table>
-</td>
-</tr>
-</table>
-
-The first option, prepending the entity type to the `RowKey`, is useful if there is a possibility that two entities of different types might have the same key value. It also groups entities of the same type together in the partition.
-
-The techniques discussed in this section are especially relevant to the discussion about [Inheritance relationships](#inheritance-relationships).
-
-> [!NOTE]
-> Consider including a version number in the entity type value, to enable client applications to evolve POCO objects and work with different versions.
->
->
-
-The remainder of this section describes some of the features in the Storage Client Library that facilitate working with multiple entity types in the same table.
-
-#### Retrieve heterogeneous entity types
-If you're using the Storage Client Library, you have three options for working with multiple entity types.
-
-If you know the type of the entity stored with specific `RowKey` and `PartitionKey` values, then you can specify the entity type when you retrieve the entity. You saw this in the previous two examples that retrieve entities of type `EmployeeEntity`: [Run a point query by using the Storage Client Library](#run-a-point-query-by-using-the-storage-client-library) and [Retrieve multiple entities by using LINQ](#retrieve-multiple-entities-by-using-linq).
-
-The second option is to use the `DynamicTableEntity` type (a property bag), instead of a concrete POCO entity type. This option might also improve performance, because there's no need to serialize and deserialize the entity to .NET types. The following C# code potentially retrieves multiple entities of different types from the table, but returns all entities as `DynamicTableEntity` instances. It then uses the `EntityType` property to determine the type of each entity:
-
-```csharp
-string filter = TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("PartitionKey",
- QueryComparisons.Equal, "Sales"),
- TableOperators.And,
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("RowKey",
- QueryComparisons.GreaterThanOrEqual, "B"),
- TableOperators.And,
- TableQuery.GenerateFilterCondition("RowKey",
- QueryComparisons.LessThan, "F")
- )
-);
-TableQuery<DynamicTableEntity> entityQuery =
- new TableQuery<DynamicTableEntity>().Where(filter);
-
-IEnumerable<DynamicTableEntity> entities = employeeTable.ExecuteQuery(entityQuery);
-foreach (var e in entities)
-{
-    EntityProperty entityTypeProperty;
-    if (e.Properties.TryGetValue("EntityType", out entityTypeProperty))
-    {
-        if (entityTypeProperty.StringValue == "Employee")
-        {
-            // Use entityTypeProperty, RowKey, PartitionKey, Etag, and Timestamp
-        }
-    }
-}
-```
-
-To retrieve other properties, you must use the `TryGetValue` method on the `Properties` property of the `DynamicTableEntity` class.
-
-A third option is to combine using the `DynamicTableEntity` type and an `EntityResolver` instance. This enables you to resolve to multiple POCO types in the same query. In this example, the `EntityResolver` delegate is using the `EntityType` property to distinguish between the two types of entity that the query returns. The `Resolve` method uses the `resolver` delegate to resolve `DynamicTableEntity` instances to `TableEntity` instances.
-
-```csharp
-EntityResolver<TableEntity> resolver = (pk, rk, ts, props, etag) =>
-{
-
- TableEntity resolvedEntity = null;
- if (props["EntityType"].StringValue == "Department")
- {
- resolvedEntity = new DepartmentEntity();
- }
- else if (props["EntityType"].StringValue == "Employee")
- {
- resolvedEntity = new EmployeeEntity();
- }
- else throw new ArgumentException("Unrecognized entity", "props");
-
- resolvedEntity.PartitionKey = pk;
- resolvedEntity.RowKey = rk;
- resolvedEntity.Timestamp = ts;
- resolvedEntity.ETag = etag;
- resolvedEntity.ReadEntity(props, null);
- return resolvedEntity;
-};
-
-string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, "Sales");
-TableQuery<DynamicTableEntity> entityQuery =
- new TableQuery<DynamicTableEntity>().Where(filter);
-
-var entities = employeeTable.ExecuteQuery(entityQuery, resolver);
-foreach (var e in entities)
-{
- if (e is DepartmentEntity)
- {
- ...
- }
- if (e is EmployeeEntity)
- {
- ...
- }
-}
-```
-
-#### Modify heterogeneous entity types
-You don't need to know the type of an entity to delete it, and you always know the type of an entity when you insert it. However, you can use the `DynamicTableEntity` type to update an entity without knowing its type, and without using a POCO entity class. The following code sample retrieves a single entity, and checks that the `EmployeeCount` property exists before updating it.
-
-```csharp
-TableResult result =
- employeeTable.Execute(TableOperation.Retrieve(partitionKey, rowKey));
-DynamicTableEntity department = (DynamicTableEntity)result.Result;
-
-EntityProperty countProperty;
-
-if (!department.Properties.TryGetValue("EmployeeCount", out countProperty))
-{
- throw new
- InvalidOperationException("Invalid entity, EmployeeCount property not found.");
-}
-countProperty.Int32Value += 1;
-employeeTable.Execute(TableOperation.Merge(department));
-```
-
-### Control access with shared access signatures
-You can use shared access signature (SAS) tokens to enable client applications to modify (and query) table entities directly, without the need to authenticate directly with Table storage. Typically, there are three main benefits to using SAS in your application:
-
-* You don't need to distribute your storage account key to an insecure platform (such as a mobile device) in order to allow that device to access and modify entities in Table storage.
-* You can offload some of the work that web and worker roles perform in managing your entities. You can offload to client devices such as end-user computers and mobile devices.
-* You can assign a constrained and time-limited set of permissions to a client (such as allowing read-only access to specific resources).
-
-For more information about using SAS tokens with Table storage, see [Using shared access signatures (SAS)](../storage/common/storage-sas-overview.md).
-
-However, you must still generate the SAS tokens that grant a client application access to the entities in Table storage. Do this in an environment that has secure access to your storage account keys. Typically, you use a web or worker role to generate the SAS tokens and deliver them to the client applications that need access to your entities. Because there is still an overhead involved in generating and delivering SAS tokens to clients, you should consider how best to reduce this overhead, especially in high-volume scenarios.
-
-It's possible to generate a SAS token that grants access to a subset of the entities in a table. By default, you create a SAS token for an entire table. But it's also possible to specify that the SAS token grant access to either a range of `PartitionKey` values, or a range of `PartitionKey` and `RowKey` values. You might choose to generate SAS tokens for individual users of your system, such that each user's SAS token only allows them access to their own entities in Table storage.
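-
-The following sketch generates a query-only SAS token scoped to a single partition by using `GetSharedAccessSignature` on the table reference. The expiry time and key range are assumptions for illustration:
-
-```csharp
-var policy = new SharedAccessTablePolicy
-{
-    Permissions = SharedAccessTablePermissions.Query,
-    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
-};
-
-// Restrict the token to entities whose PartitionKey is "Sales".
-string sasToken = employeeTable.GetSharedAccessSignature(
-    policy,
-    null,       // No stored access policy.
-    "Sales",    // Start partition key.
-    null,       // Start row key.
-    "Sales",    // End partition key.
-    null);      // End row key.
-```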
-
-### Asynchronous and parallel operations
-Provided you are spreading your requests across multiple partitions, you can improve throughput and client responsiveness by using asynchronous or parallel queries.
-For example, you might have two or more worker role instances accessing your tables in parallel. You can have individual worker roles responsible for particular sets of partitions, or simply have multiple worker role instances, each able to access all the partitions in a table.
-
-Within a client instance, you can improve throughput by running storage operations asynchronously. The Storage Client Library makes it easy to write asynchronous queries and modifications. For example, you might start with the synchronous method that retrieves all the entities in a partition, as shown in the following C# code:
-
-```csharp
-private static void ManyEntitiesQuery(CloudTable employeeTable, string department)
-{
- string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, department);
- TableQuery<EmployeeEntity> employeeQuery =
- new TableQuery<EmployeeEntity>().Where(filter);
-
- TableContinuationToken continuationToken = null;
-
- do
- {
- var employees = employeeTable.ExecuteQuerySegmented(
- employeeQuery, continuationToken);
- foreach (var emp in employees)
- {
- ...
- }
- continuationToken = employees.ContinuationToken;
- } while (continuationToken != null);
-}
-```
-
-You can easily modify this code so that the query runs asynchronously, as follows:
-
-```csharp
-private static async Task ManyEntitiesQueryAsync(CloudTable employeeTable, string department)
-{
- string filter = TableQuery.GenerateFilterCondition(
- "PartitionKey", QueryComparisons.Equal, department);
- TableQuery<EmployeeEntity> employeeQuery =
- new TableQuery<EmployeeEntity>().Where(filter);
- TableContinuationToken continuationToken = null;
-
- do
- {
- var employees = await employeeTable.ExecuteQuerySegmentedAsync(
- employeeQuery, continuationToken);
- foreach (var emp in employees)
- {
- ...
- }
- continuationToken = employees.ContinuationToken;
- } while (continuationToken != null);
-}
-```
-
-In this asynchronous example, you can see the following changes from the synchronous version:
-
-* The method signature now includes the `async` modifier, and returns a `Task` instance.
-* Instead of calling the `ExecuteQuerySegmented` method to retrieve results, the method now calls the `ExecuteQuerySegmentedAsync` method. The method uses the `await` operator to retrieve results asynchronously.
-
-The client application can call this method multiple times, with different values for the `department` parameter. Each query runs on a separate thread.
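For example, here's a minimal sketch that reuses the `ManyEntitiesQueryAsync` method defined above to query several partitions concurrently; the department names are placeholder values:

```csharp
// Start one query per partition and wait for all of them to complete.
// The department names are hypothetical examples.
private static async Task QueryDepartmentsInParallelAsync(CloudTable employeeTable)
{
    string[] departments = { "Sales", "Marketing", "Engineering" };

    IEnumerable<Task> queries = departments
        .Select(d => ManyEntitiesQueryAsync(employeeTable, d));

    await Task.WhenAll(queries);
}
```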
-
-There is no asynchronous version of the `Execute` method in the `TableQuery` class, because the `IEnumerable` interface doesn't support asynchronous enumeration.
-
-You can also insert, update, and delete entities asynchronously. The following C# example shows a simple, synchronous method to insert or replace an employee entity:
-
-```csharp
-private static void SimpleEmployeeUpsert(CloudTable employeeTable,
- EmployeeEntity employee)
-{
- TableResult result = employeeTable
- .Execute(TableOperation.InsertOrReplace(employee));
- Console.WriteLine("HTTP Status: {0}", result.HttpStatusCode);
-}
-```
-
-You can easily modify this code so that the update runs asynchronously, as follows:
-
-```csharp
-private static async Task SimpleEmployeeUpsertAsync(CloudTable employeeTable,
- EmployeeEntity employee)
-{
- TableResult result = await employeeTable
- .ExecuteAsync(TableOperation.InsertOrReplace(employee));
- Console.WriteLine("HTTP Status: {0}", result.HttpStatusCode);
-}
-```
-
-In this asynchronous example, you can see the following changes from the synchronous version:
-
-* The method signature now includes the `async` modifier, and returns a `Task` instance.
-* Instead of calling the `Execute` method to update the entity, the method now calls the `ExecuteAsync` method. The method uses the `await` operator to retrieve results asynchronously.
-
-The client application can call multiple asynchronous methods like this one, and each method invocation runs on a separate thread.
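As a short sketch, the same pattern applies to modifications; the following example fans `SimpleEmployeeUpsertAsync` out over a collection of entities (the collection itself is assumed to exist):

```csharp
// Start one asynchronous upsert per entity and await them all together.
private static Task UpsertEmployeesAsync(CloudTable employeeTable,
    IEnumerable<EmployeeEntity> employees)
{
    return Task.WhenAll(
        employees.Select(e => SimpleEmployeeUpsertAsync(employeeTable, e)));
}
```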
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Previously updated : 04/27/2021 Last updated : 06/27/2021
# Troubleshoot CI-CD, Azure DevOps, and GitHub issues in ADF
Until recently, the only way to publish an ADF pipeline for deployments was using ADF P
#### Resolution
-CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the ADF UX. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the ADF UI and do a button click. This method gives your CI/CD pipelines a **true** continuous integration experience. Please follow [ADF CI/CD Publishing Improvements](./continuous-integration-deployment-improvements.md) for details.
+The CI/CD process has been enhanced. The **Automated** publish feature takes, validates, and exports all ARM template features from the ADF UX. It makes the logic consumable via a publicly available npm package [@microsoft/azure-data-factory-utilities](https://www.npmjs.com/package/@microsoft/azure-data-factory-utilities). This method allows you to programmatically trigger these actions instead of having to go to the ADF UI and select a button. This method gives your CI/CD pipelines a **true** continuous integration experience. Follow [ADF CI/CD Publishing Improvements](./continuous-integration-deployment-improvements.md) for details.
-### Cannot publish because of 4 MB ARM template limit
+### Cannot publish because of 4-MB ARM template limit
#### Issue
-You cannot deploy because you hit Azure Resource Manager limit of 4 MB total template size. You need a solution to deploy after crossing the limit.
+You cannot deploy because you hit the Azure Resource Manager limit of 4 MB for total template size. You need a solution to deploy after crossing the limit.
#### Cause
-Azure Resource Manager restricts template size to be 4 MB. Limit the size of your template to 4 MB, and each parameter file to 64 KB. The 4 MB limit applies to the final state of the template after it has been expanded with iterative resource definitions, and values for variables and parameters. But, you have crossed the limit.
+Azure Resource Manager restricts template size to 4 MB. Limit the size of your template to 4 MB, and each parameter file to 64 KB. The 4-MB limit applies to the final state of the template after it has been expanded with iterative resource definitions and with values for variables and parameters. However, you have crossed the limit.
#### Resolution
Unselect **Include in ARM template** and deploy global parameters with PowerShel
### Extra left "[" displayed in published JSON file
#### Issue
-When publishing ADF with DevOps, there is one more left "[" displayed. ADF adds one more left "[" in ARMTemplate in DevOps automatically.
+When you publish ADF with DevOps, one extra left bracket "[" is displayed. ADF automatically adds an extra "[" to the ARM template in DevOps. You will see an expression like "[[" in the JSON file.
#### Cause
Because "[" is a reserved character for ARM, an extra "[" is added automatically to escape it.
#### Resolution
This is normal behavior during the ADF publishing process for CI/CD.
+
+### Perform CI/CD during the in-progress or queued stage of a pipeline run
+
+#### Issue
+You want to perform CI/CD while a pipeline run is in the in-progress or queued stage.
+
+#### Cause
+When a pipeline run is in the in-progress or queued stage, you first have to monitor the pipeline and its activities. Then, you can decide either to wait until the pipeline run finishes or to cancel it.
+
+#### Resolution
+You can monitor the pipeline by using the **SDK**, **Azure Monitor**, or [ADF Monitor](https://docs.microsoft.com/azure/data-factory/monitor-visually). Then, follow the [ADF CI/CD best practices](https://docs.microsoft.com/azure/data-factory/continuous-integration-deployment#best-practices-for-cicd) for further guidance.
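As a hedged sketch only, the following C# fragment shows one way to check a run's status with the Data Factory .NET management SDK before deciding to wait or cancel; the client setup, resource names, and run ID are assumptions, and the linked monitoring and best-practices articles remain the authoritative guidance.

```csharp
// Sketch: poll a pipeline run's status, then wait or cancel before deploying.
// Assumes the Microsoft.Azure.Management.DataFactory package and an already
// authenticated DataFactoryManagementClient named `client`; the resource group,
// factory name, and run ID are placeholders.
PipelineRun run = client.PipelineRuns.Get("myResourceGroup", "myDataFactory", runId);

if (run.Status == "Queued" || run.Status == "InProgress")
{
    // Either keep polling until the run completes, or cancel it here.
    client.PipelineRuns.Cancel("myResourceGroup", "myDataFactory", runId);
}
```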
## Next steps
defender-for-iot Quickstart Standalone Agent Binary Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/quickstart-standalone-agent-binary-installation.md
Title: 'Quickstart: Install Defender for IoT micro agent (Preview)' description: In this quickstart, learn how to install and authenticate the Defender Micro Agent. Previously updated : 06/07/2021 Last updated : 06/27/2021
Before you install the Defender for IoT module, you must create a module identit
## Install the package
-Install, and configure the Microsoft package repository by following [these instructions](/windows-server/administration/linux-package-repository-for-microsoft-software).
+**To add the appropriate Microsoft package repository**:
-For Debian 9, the instructions do not include the repository that needs to be added, use the following commands to add the repository:
+1. Download the repository configuration that matches your device operating system.
-```bash
-curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
+ - For Ubuntu 18.04
-sudo apt-get install software-properties-common
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
+ ```
-sudo apt-add-repository https://packages.microsoft.com/debian/9/multiarch/prod
+ - For Ubuntu 20.04
-sudo apt-get update
-```
+ ```bash
+ curl https://packages.microsoft.com/config/ubuntu/20.04/prod.list > ./microsoft-prod.list
+ ```
+
+ - For Debian 9 (both AMD64 and ARM64)
+
+ ```bash
+ curl https://packages.microsoft.com/config/debian/stretch/multiarch/prod.list > ./microsoft-prod.list
+ ```
+
+1. Copy the repository configuration to the `sources.list.d` directory.
+
+ ```bash
+ sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
+ ```
+
+1. Update the list of packages from the repository that you added with the following command:
+
+ ```bash
+ sudo apt-get update
+ ```
To install the Defender micro agent package on Debian and Ubuntu-based Linux distributions, use the following command:
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-apis-sdks.md
To use the SDK, include the NuGet package **Azure.DigitalTwins.Core** with your
For a detailed walk-through of using the APIs in practice, see the [Tutorial: Code a client app](tutorial-code.md).
-### .NET SDK usage examples
-
-Here are some code samples illustrating use of the .NET SDK.
-
-Authenticate against the service:
---
-Upload a model:
--
-List models:
--
-Create twins:
--
-Query twins and loop through results:
--
-See the [Tutorial: Code a client app](tutorial-code.md) for a walk-through of this sample app code.
-
-You can also find additional samples in the [GitHub repo for the .NET (C#) SDK](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/digitaltwins/Azure.DigitalTwins.Core/samples).
-
-#### Serialization Helpers
+### Serialization helpers
Serialization helpers are helper functions available within the SDK for quickly creating or deserializing twin data for access to basic information. Since the core SDK methods return twin data as JSON by default, it can be helpful to use these helper classes to break the twin data down further.
The available helper classes are:
* `BasicRelationship`: Generically represents the core data of a relationship
* `DigitalTwinsJsonPropertyName`: Contains the string constants for use in JSON serialization and deserialization for custom digital twin types
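For orientation, here's a minimal sketch of using `BasicDigitalTwin` to prepare and create a twin, assuming an authenticated `DigitalTwinsClient` named `client`; the model ID, twin ID, and property names are placeholder values, and the linked samples remain the authoritative reference.

```csharp
// Sketch: create a twin from a BasicDigitalTwin. All identifiers and
// property names below are hypothetical examples.
var twinData = new BasicDigitalTwin
{
    Id = "myRoomTwin",
    Metadata = { ModelId = "dtmi:example:Room;1" },
    Contents =
    {
        { "Temperature", 21.5 },
        { "HumidityPercent", 45 }
    }
};

await client.CreateOrReplaceDigitalTwinAsync<BasicDigitalTwin>(twinData.Id, twinData);
```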
-##### Deserialize a digital twin
-
-You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, the helper classes can make this more convenient.
-
-The `BasicDigitalTwin` helper class also gives you access to properties defined on the twin, through a `Dictionary<string, object>`. To list properties of the twin, you can use:
--
-> [!NOTE]
-> `BasicDigitalTwin` uses `System.Text.Json` attributes. In order to use `BasicDigitalTwin` with your [DigitalTwinsClient](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient?view=azure-dotnet&preserve-view=true), you must either initialize the client with the default constructor, or, if you want to customize the serializer option, use the [JsonObjectSerializer](/dotnet/api/azure.core.serialization.jsonobjectserializer?view=azure-dotnet&preserve-view=true).
-
-##### Create a digital twin
-
-Using the `BasicDigitalTwin` class, you can prepare data for creating a twin instance:
--
-The code above is equivalent to the following "manual" variant:
--
-##### Deserialize a relationship
-
-You can always deserialize relationship data to a type of your choice. For basic access to a relationship, use the type `BasicRelationship`.
--
-The `BasicRelationship` helper class also gives you access to properties defined on the relationship, through an `IDictionary<string, object>`. To list properties, you can use:
--
-##### Create a relationship
-
-Using the `BasicRelationship` class, you can also prepare data for creating relationships on a twin instance:
--
-##### Create a patch for twin update
-
-Update calls for twins and relationships use [JSON Patch](http://jsonpatch.com/) structure. To create lists of JSON Patch operations, you can use the `JsonPatchDocument` as shown below.
-- ## General API/SDK usage notes > [!NOTE]
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-explorer-plugin.md
You can use an update policy to enrich your raw time series data with the corres
For example, say you created the following table to hold the raw time series data flowing into your ADX instance.
```kusto
-.createmerge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string) 
+.create-merge table rawData (Timestamp:datetime, someId:string, Value:string, ValueType:string) 
```
You could create a mapping table to relate time series IDs with twin IDs, and other optional fields.
```kusto
-.createmerge table mappingTable (someId:string, twinId:string, otherMetadata:string)
+.create-merge table mappingTable (someId:string, twinId:string, otherMetadata:string)
```
Then, create a target table to hold the enriched time series data.
```kusto
-.createmerge table timeseriesSilver (twinId:string, Timestamp:datetime, someId:string, otherMetadata:string, ValueNumeric:real, ValueString:string) 
+.create-merge table timeseriesSilver (twinId:string, Timestamp:datetime, someId:string, otherMetadata:string, ValueNumeric:real, ValueString:string) 
```
Next, create a function `Update_rawData` to enrich the raw data by joining it with the mapping table. This will add the twin ID to the resulting target table.
```kusto
-.createoralter function with (folder = "Update", skipvalidation = "true") Update_rawData() {
+.create-or-alter function with (folder = "Update", skipvalidation = "true") Update_rawData() {
rawData | join kind=leftouter mappingTable on someId | project
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-graph.md
You can even create multiple instances of the same type of relationship between
## List relationships
-To access the list of **outgoing** relationships for a given twin in the graph, you can use the `GetRelationships()` method like this:
+### List properties of a single relationship
+
+You can always deserialize relationship data to a type of your choice. For basic access to a relationship, use the type `BasicRelationship`. The `BasicRelationship` helper class also gives you access to properties defined on the relationship, through an `IDictionary<string, object>`. To list properties, you can use:
++
+### Find outgoing relationships from a digital twin
+
+To access the list of **outgoing** relationships for a given twin in the graph, you can use the `GetRelationships()` method like this:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_sample.cs" id="GetRelationshipsCall":::
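As a rough sketch of what both of the operations above can look like together, assuming an authenticated `DigitalTwinsClient` named `client` (the twin ID is a placeholder and the linked sample file is the authoritative source):

```csharp
// Sketch: enumerate a twin's outgoing relationships and print each relationship's
// custom properties. The twin ID is a placeholder value.
AsyncPageable<BasicRelationship> relationships =
    client.GetRelationshipsAsync<BasicRelationship>("myTwinId");

await foreach (BasicRelationship relationship in relationships)
{
    Console.WriteLine($"{relationship.Name}: {relationship.SourceId} -> {relationship.TargetId}");

    // BasicRelationship.Properties is an IDictionary<string, object>.
    foreach (KeyValuePair<string, object> property in relationship.Properties)
    {
        Console.WriteLine($"  {property.Key} = {property.Value}");
    }
}
```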
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
You can access the details of any digital twin by calling the `GetDigitalTwin()`
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwinCall":::
-This call returns twin data as a strongly-typed object type such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which will return the core twin metadata and properties in pre-parsed form. Here's an example of how to use this to view twin details:
+This call returns twin data as a strongly-typed object type such as `BasicDigitalTwin`. `BasicDigitalTwin` is a serialization helper class included with the SDK, which will return the core twin metadata and properties in pre-parsed form. You can always deserialize twin data using the JSON library of your choice, like `System.Text.Json` or `Newtonsoft.Json`. For basic access to a twin, however, the helper classes can make this more convenient.
+
+> [!NOTE]
+> `BasicDigitalTwin` uses `System.Text.Json` attributes. In order to use `BasicDigitalTwin` with your [DigitalTwinsClient](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient?view=azure-dotnet&preserve-view=true), you must either initialize the client with the default constructor, or, if you want to customize the serializer option, use the [JsonObjectSerializer](/dotnet/api/azure.core.serialization.jsonobjectserializer?view=azure-dotnet&preserve-view=true).
+
+The `BasicDigitalTwin` helper class also gives you access to properties defined on the twin, through a `Dictionary<string, object>`. To list properties of the twin, you can use:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_sample.cs" id="GetTwin" highlight="2":::
The defined properties of the digital twin are returned as top-level properties
- Synchronization status for each writable property. This is most useful for devices, where it's possible that the service and the device have diverging statuses (for example, when a device is offline). Currently, this property only applies to physical devices connected to IoT Hub. With the data in the metadata section, it is possible to understand the full status of a property, as well as the last modified timestamps. For more information about sync status, see this [IoT Hub tutorial](../iot-hub/tutorial-device-twins.md) on synchronizing device state. - Service-specific metadata, like from IoT Hub or Azure Digital Twins.
-You can read more about the serialization helper classes like `BasicDigitalTwin` in [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md).
+You can read more about the serialization helper classes like `BasicDigitalTwin` in [Concepts: Azure Digital Twins APIs and SDKs](concepts-apis-sdks.md#serialization-helpers).
## View all digital twins
Here is an example of JSON Patch code. This document replaces the *mass* and *ra
:::code language="json" source="~/digital-twins-docs-samples/models/patch.json":::
-You can create patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example.
+Update calls for twins and relationships use the [JSON Patch](http://jsonpatch.com/) structure. You can create patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
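For orientation only, here's a hedged sketch of building and applying such a patch with `JsonPatchDocument`, assuming an authenticated `DigitalTwinsClient` named `client`; the twin ID, property paths, and values are placeholders rather than the values in the linked sample.

```csharp
// Sketch: replace two properties on an existing twin with a JSON Patch document.
// The twin ID, paths, and values are placeholder examples.
var updateTwinData = new JsonPatchDocument();
updateTwinData.AppendReplace("/mass", 0.0799);
updateTwinData.AppendReplace("/radius", 0.800);

await client.UpdateDigitalTwinAsync("myPlanetTwin", updateTwinData);
```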
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-managed-service-identity.md
Title: Authenticate with managed identities
-description: Access resources protected by Azure Active Directory without signing in with credentials or secrets by using a managed identity
+ Title: Authenticate workflows with managed identities
+description: Use a managed identity for authenticating triggers and actions to Azure AD protected resources without credentials or secrets
ms.suite: integration Previously updated : 03/30/2021 Last updated : 06/25/2021
-# Authenticate access to Azure resources by using managed identities in Azure Logic Apps
+# Authenticate access to Azure resources using managed identities in Azure Logic Apps
-To easily access other resources that are protected by Azure Active Directory (Azure AD) and authenticate your identity, your logic app can use a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) (formerly Managed Service Identity or MSI), rather than credentials, secrets, or Azure AD tokens. Azure manages this identity for you and helps secure your credentials because you don't have to manage secrets or directly use Azure AD tokens.
+Some triggers and actions in logic app workflows support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md), previously known as a *Managed Service Identity (MSI)*, for authentication when connecting to resources protected by Azure Active Directory (Azure AD). When your logic app resource has a managed identity enabled and set up, you don't have to use your own credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage secrets or tokens.
-Azure Logic Apps supports both [*system-assigned*](../active-directory/managed-identities-azure-resources/overview.md) and [*user-assigned*](../active-directory/managed-identities-azure-resources/overview.md) managed identities. Your logic app or individual connections can use either the system-assigned identity or a *single* user-assigned identity, which you can share across a group of logic apps, but not both.
+This article shows how to set up both kinds of managed identities for your logic app. For more information, review the following documentation:
-<a name="triggers-actions-managed-identity"></a>
+* [Triggers and actions that support managed identities](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions)
+* [Limits on managed identities for logic apps](../logic-apps/logic-apps-limits-and-config.md#managed-identity)
+* [Azure services that support Azure AD authentication with managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
-## Where can logic apps use managed identities?
+<a name="triggers-actions-managed-identity"></a>
-Currently, only [specific built-in triggers and actions](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions) and [specific managed connectors](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions) that support Azure AD OAuth can use a managed identity for authentication. For example, here's a selection:
+## Where to use managed identities
-<a name="built-in-managed-identity"></a>
+Azure Logic Apps supports both [*system-assigned* managed identities](../active-directory/managed-identities-azure-resources/overview.md) and [*user-assigned* managed identities](../active-directory/managed-identities-azure-resources/overview.md), which you can share across a group of logic apps, based on where your logic app workflows run:
-**Built-in triggers and actions**
+* A multi-tenant (Consumption plan) based logic app supports both the system-assigned identity and a *single* user-assigned identity. However, at the logic app level or the connection level, you can use only one managed identity type because you can't enable both at the same time.
-* Azure API Management
-* Azure App Services
-* Azure Functions
-* HTTP
-* HTTP + Webhook
+ A single-tenant (Standard plan) based logic app currently supports only the system-assigned identity.
-> [!NOTE]
-> While the HTTP trigger and action can authenticate connections to Azure Storage accounts behind Azure firewalls by using the
-> system-assigned managed identity, they can't use the user-assigned managed identity to authenticate the same connections.
+ For more information about multi-tenant (Consumption plan) and single-tenant (Standard plan), review the documentation, [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+<a name="built-in-managed-identity"></a>
<a name="managed-connectors-managed-identity"></a>
-**Managed connectors**
-
-* Azure Automation
-* Azure Event Grid
-* Azure Key Vault
-* Azure Resource Manager
-* HTTP with Azure AD
+* Only specific built-in and managed connector operations that support Azure AD Open Authentication can use a managed identity for authentication. The following table provides only a *sample selection*. For a more complete list, review [Authentication types for triggers and actions that support authentication](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
-Support for managed connectors is currently in preview. For the current list, see [Authentication types for triggers and actions that support authentication](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
-
-This article shows how to set up both kinds of managed identities for your logic app. For more information, see these topics:
-
-* [Triggers and actions that support managed identities](../logic-apps/logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions)
-* [Limits on managed identities for logic apps](../logic-apps/logic-apps-limits-and-config.md#managed-identity)
-* [Azure services that support Azure AD authentication with managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication)
+ | Operation type | Supported operations |
+ |-|-|
+ | Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p><p> **Note**: While HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity, they don't support the user-assigned managed identity for authenticating the same connections. |
+ | Managed connector (**Preview**) | - Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD |
+ |||
## Prerequisites
To set up the managed identity that you want to use, follow the link for that id
Unlike user-assigned identities, you don't have to manually create the system-assigned identity. To set up the system-assigned identity for your logic app, here are the options that you can use: * [Azure portal](#azure-portal-system-logic-app)
-* [Azure Resource Manager templates](#template-system-logic-app)
+* [Azure Resource Manager template (ARM template)](#template-system-logic-app)
<a name="azure-portal-system-logic-app"></a> #### Enable system-assigned identity in Azure portal
-1. In the [Azure portal](https://portal.azure.com), open your logic app in Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in designer view.
1. On the logic app menu, under **Settings**, select **Identity**. Select **System assigned** > **On** > **Save**. When Azure prompts you to confirm, select **Yes**.
Unlike user-assigned identities, you don't have to manually create the system-as
> [!NOTE] > If you get an error that you can have only a single managed identity, your logic app is already > associated with the user-assigned identity. Before you can add the system-assigned identity,
- > you must first *remove* the user-assigned identity from your logic app.
+ > you have to first *remove* the user-assigned identity from your logic app.
Your logic app can now use the system-assigned identity, which is registered with Azure AD and is represented by an object ID.
Unlike user-assigned identities, you don't have to manually create the system-as
<a name="template-system-logic-app"></a>
-#### Enable system-assigned identity in Azure Resource Manager template
+#### Enable system-assigned identity in an ARM template
-To automate creating and deploying Azure resources such as logic apps, you can use [Azure Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md). To enable the system-assigned managed identity for your logic app in the template, add the `identity` object and the `type` child property to the logic app's resource definition in the template, for example:
+To automate creating and deploying Azure resources such as logic apps, you can use an [ARM template](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md). To enable the system-assigned managed identity for your logic app in the template, add the `identity` object and the `type` child property to the logic app's resource definition in the template, for example:
```json {
To automate creating and deploying Azure resources such as logic apps, you can u
} ```
-When Azure creates your logic app resource definition, the `identity` object gets these additional properties:
+When Azure creates your logic app resource definition, the `identity` object gets these other properties:
```json "identity": {
When Azure creates your logic app resource definition, the `identity` object get
### Enable user-assigned identity
-To set up a user-assigned managed identity for your logic app, you must first create that identity as a separate standalone Azure resource. Here are the options that you can use:
+To set up a user-assigned managed identity for your logic app, you have to first create that identity as a separate standalone Azure resource. Here are the options that you can use:
* [Azure portal](#azure-portal-user-identity)
-* [Azure Resource Manager templates](#template-user-identity)
+* [ARM template](#template-user-identity)
* Azure PowerShell * [Create user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md) * [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md)
To set up a user-assigned managed identity for your logic app, you must first cr
1. In the [Azure portal](https://portal.azure.com), in the search box on any page, enter `managed identities`, and select **Managed Identities**.
- ![Find and select "Managed Identities"](./media/create-managed-service-identity/find-select-managed-identities.png)
+ ![Screenshot that shows the portal with "Managed Identities" selected.](./media/create-managed-service-identity/find-select-managed-identities.png)
1. Under **Managed Identities**, select **Add**.
To set up a user-assigned managed identity for your logic app, you must first cr
After validating these details, Azure creates your managed identity. Now you can add the user-assigned identity to your logic app. You can't add more than one user-assigned identity to your logic app.
-1. In the Azure portal, find and open your logic app in Logic App Designer.
+1. In the Azure portal, open your logic app in designer view.
1. On the logic app menu, under **Settings**, select **Identity**, and then select **User assigned** > **Add**. ![Add user-assigned managed identity](./media/create-managed-service-identity/add-user-assigned-identity-logic-app.png)
-1. On the **Add user assigned managed identity** pane, from the **Subscription** list, select your Azure subscription if not already selected. From the list that shows *all* the managed identities in that subscription, find and select the user-assigned identity that you want. To filter the list, in the **User assigned managed identities** search box, enter the name for the identity or resource group. When you're done, select **Add**.
+1. On the **Add user assigned managed identity** pane, from the **Subscription** list, select your Azure subscription if not already selected. From the list that shows *all* the managed identities in that subscription, select the user-assigned identity that you want. To filter the list, in the **User assigned managed identities** search box, enter the name for the identity or resource group. When you're done, select **Add**.
![Select the user-assigned identity to use](./media/create-managed-service-identity/select-user-assigned-identity.png) > [!NOTE] > If you get an error that you can have only a single managed identity, your logic app is already > associated with the system-assigned identity. Before you can add the user-assigned identity,
- > you must first disable the system-assigned identity on your logic app.
+ > you have to first disable the system-assigned identity on your logic app.
Your logic app is now associated with the user-assigned managed identity.
To set up a user-assigned managed identity for your logic app, you must first cr
<a name="template-user-identity"></a>
-#### Create user-assigned identity in an Azure Resource Manager template
+#### Create user-assigned identity in an ARM template
-To automate creating and deploying Azure resources such as logic apps, you can use [Azure Resource Manager templates](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), which support [user-assigned identities for authentication](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md). In your template's `resources` section, your logic app's resource definition requires these items:
+To automate creating and deploying Azure resources such as logic apps, you can use an [ARM template](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), which supports [user-assigned identities for authentication](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md). In your template's `resources` section, your logic app's resource definition requires these items:
* An `identity` object with the `type` property set to `UserAssigned`
If your template also includes the managed identity's resource definition, you c
## Give identity access to resources
-Before you can use your logic app's managed identity for authentication, set up access for that identity on the Azure resource where you plan to use the identity. To complete this task, assign the appropriate role to that identity on the target Azure resource. Here are the options that you can use:
+Before you can use your logic app's managed identity for authentication, on the Azure resource where you want to use the identity, you have to set up access for your identity by using Azure role-based access control (Azure RBAC).
+
+To complete this task, assign the appropriate role to that identity on the Azure resource through any of the following options:
* [Azure portal](#azure-portal-assign-access)
-* [Azure Resource Manager template](../role-based-access-control/role-assignments-template.md)
-* Azure PowerShell ([New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment)) - For more information, see [Add role assignment by using Azure RBAC and Azure PowerShell](../role-based-access-control/role-assignments-powershell.md).
-* Azure CLI ([az role assignment create](/cli/azure/role/assignment#az_role_assignment_create)) - For more information, see [Add role assignment by using Azure RBAC and Azure CLI](../role-based-access-control/role-assignments-cli.md).
+* [ARM template](../role-based-access-control/role-assignments-template.md)
+* [Azure PowerShell](../role-based-access-control/role-assignments-powershell.md)
+* [Azure CLI](../role-based-access-control/role-assignments-cli.md)
* [Azure REST API](../role-based-access-control/role-assignments-rest.md) <a name="azure-portal-assign-access"></a>
-### Assign access in the Azure portal
-
-On the target Azure resource where you want the managed identity to have access, give that identity role-based access to the target resource.
-
-1. In the [Azure portal](https://portal.azure.com), go to the Azure resource where you want your managed identity to have access.
+### Assign managed identity role-based access in the Azure portal
-1. From the resource's menu, select **Access control (IAM)** > **Role assignments** where you can review the current role assignments for that resource. On the toolbar, select **Add** > **Add role assignment**.
+On the Azure resource where you want to use the managed identity, you have to assign your identity to a role that can access the target resource. For more general information about this task, review [Assign a managed identity access to another resource using Azure RBAC](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
- ![Select "Add" > "Add role assignment"](./media/create-managed-service-identity/add-role-to-resource.png)
+1. In the [Azure portal](https://portal.azure.com), open the resource where you want to use the identity.
- > [!TIP]
- > If the **Add role assignment** option is disabled, you most likely don't have permissions.
- > For more information about the permissions that let you manage roles for resources, see
- > [Administrator role permissions in Azure Active Directory](../active-directory/roles/permissions-reference.md).
-
-1. Under **Add role assignment**, select a **Role** that gives your identity the necessary access to the target resource.
-
- For this topic's example, your identity needs a [role that can access the blob in an Azure Storage container](../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights), so select the **Storage Blob Data Contributor** role for the managed identity.
-
- ![Select "Storage Blob Data Contributor" role](./media/create-managed-service-identity/select-role-for-identity.png)
-
-1. Follow these steps for your managed identity:
-
- * **System-assigned identity**
+1. On the resource's menu, select **Access control (IAM)** > **Add** > **Add role assignment**.
- 1. In the **Assign access to** box, select **Logic App**. When the **Subscription** property appears, select the Azure subscription that's associated with your identity.
-
- ![Select access for system-assigned identity](./media/create-managed-service-identity/assign-access-system.png)
-
- 1. Under the **Select** box, select your logic app from the list. If the list is too long, use the **Select** box to filter the list.
-
- ![Select logic app for system-assigned identity](./media/create-managed-service-identity/add-permissions-select-logic-app.png)
-
- * **User-assigned identity**
-
- 1. In the **Assign access to** box, select **User assigned managed identity**. When the **Subscription** property appears, select the Azure subscription that's associated with your identity.
-
- ![Select access for user-assigned identity](./media/create-managed-service-identity/assign-access-user.png)
+ > [!NOTE]
+ > If the **Add role assignment** option is disabled, you don't have permissions to assign roles.
+ > For more information, review [Azure AD built-in roles](../active-directory/roles/permissions-reference.md).
- 1. Under the **Select** box, select your identity from the list. If the list is too long, use the **Select** box to filter the list.
+1. Now, assign the necessary role to your managed identity. On the **Role** tab, assign a role that gives your identity the required access to the current resource.
- ![Select your user-assigned identity](./media/create-managed-service-identity/add-permissions-select-user-assigned-identity.png)
+ For this example, assign the role that's named **Storage Blob Data Contributor**, which includes write access for blobs in an Azure Storage container. For more information about specific storage container roles, review [Roles that can access blobs in an Azure Storage container](../storage/common/storage-auth-aad.md#assign-azure-roles-for-access-rights).
-1. When you're done, select **Save**.
+1. Next, choose the managed identity where you want to assign the role. Under **Assign access to**, select **Managed identity** > **Add members**.
- The target resource's role assignments list now shows the selected managed identity and role. This example shows how you can use the system-assigned identity for one logic app and a user-assigned identity for a group of other logic apps.
+1. Based on your managed identity's type, select or provide the following values:
- ![Added managed identities and roles to target resource](./media/create-managed-service-identity/added-roles-for-identities.png)
+ | Type | Azure service instance | Subscription | Member |
+ |||--|--|
+ | **System-assigned** | **Logic App** | <*Azure-subscription-name*> | <*your-logic-app-name*> |
+ | **User-assigned** | Not applicable | <*Azure-subscription-name*> | <*your-user-assigned-identity-name*> |
+ |||||
- For more information, [Assign a managed identity access to a resource by using the Azure portal](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
+ For more information about assigning roles, review the documentation, [Assign roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. Now follow the [steps to authenticate access with the identity](#authenticate-access-with-identity) in a trigger or action that supports managed identities.
+1. After you finish setting up access for the identity, you can then use the identity to [authenticate access for triggers and actions that support managed identities](#authenticate-access-with-identity).
<a name="authenticate-access-with-identity"></a>
After you [enable the managed identity for your logic app](#azure-portal-system-
These steps show how to use the managed identity with a trigger or action through the Azure portal. To specify the managed identity in a trigger or action's underlying JSON definition, see [Managed identity authentication](../logic-apps/logic-apps-securing-a-logic-app.md#managed-identity-authentication).
-1. In the [Azure portal](https://portal.azure.com), open your logic app in the Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in designer view.
1. If you haven't done so yet, add the [trigger or action that supports managed identities](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
The Azure Resource Manager action, **Read a resource**, can use the managed iden
If the managed identity isn't enabled, the following error appears when you try to create the connection:
- *You must enable managed identity for your logic app and then grant required access to the identity in the target resource.*
+ *You have to enable the managed identity for your logic app and then grant required access to the identity in the target resource.*
![Screenshot that shows Azure Resource Manager action with error when no managed identity is enabled.](./media/create-managed-service-identity/system-assigned-managed-identity-disabled.png)
For example, here's the underlying connection resource definition for an Azure A
To stop using a managed identity for your logic app, you have these options: * [Azure portal](#azure-portal-disable)
-* [Azure Resource Manager templates](#template-disable)
+* [ARM template](#template-disable)
* Azure PowerShell * [Remove role assignment](../role-based-access-control/role-assignments-powershell.md) * [Delete user-assigned identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-powershell.md)
The managed identity is now removed and no longer has access to the target resou
#### Disable managed identity on logic app
-1. In the [Azure portal](https://portal.azure.com), open your logic app in Logic App Designer.
+1. In the [Azure portal](https://portal.azure.com), open your logic app in designer view.
1. On the logic app menu, under **Settings**, select **Identity**, and then follow the steps for your identity:
The managed identity is now disabled on your logic app.
<a name="template-disable"></a>
-### Disable managed identity in Azure Resource Manager template
+### Disable managed identity in an ARM template
-If you created the logic app's managed identity by using an Azure Resource Manager template, set the `identity` object's `type` child property to `None`.
+If you created the logic app's managed identity by using an ARM template, set the `identity` object's `type` child property to `None`.
```json "identity": {
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-access-data.md
Title: Connect to storage services on Azure
+ Title: Connect to storage services on Azure
description: Learn how to use datastores to securely connect to Azure storage services during training with Azure Machine Learning
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-connect-data-ui.md
Title: Connect to data in storage services on Azure
+ Title: Connect to data storage with the studio UI
description: Create datastores and datasets to securely connect to data in storage services in Azure with the Azure Machine Learning studio.
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-inferencing-vnet.md
az ml computetarget update aks \
-g myresourcegroup ``` + For more information, see the [az ml computetarget create aks](/cli/azure/ml(v1)/computetarget/create#az_ml_computetarget_create_aks) and [az ml computetarget update aks](/cli/azure/ml(v1)/computetarget/update#az_ml_computetarget_update_aks) reference.
open-datasets Dataset 1000 Genomes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-1000-genomes.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
## Data Access
-West US 2: https://dataset1000genomes.blob.core.windows.net/dataset
+West US 2: 'https://dataset1000genomes.blob.core.windows.net/dataset'
-West Central US: https://dataset1000genomes-secondary.blob.core.windows.net/dataset
+West Central US: 'https://dataset1000genomes-secondary.blob.core.windows.net/dataset'
[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-10-10&si=prod&sr=c&sig=9nzcxaQn0NprMPlSh4RhFQHcXedLQIcFgbERiooHEqM%3D
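For orientation, a hedged C# sketch of one way to combine the container URL and SAS token with the `Azure.Storage.Blobs` client; `<SAS-token>` stands in for the token above, and listing only a few blobs is an arbitrary choice for the example.

```csharp
// Sketch: list a few blobs from the public container by appending the SAS token
// to the container URL. Assumes the Azure.Storage.Blobs package; "<SAS-token>"
// is a placeholder for the token shown above.
var containerUri = new Uri(
    "https://dataset1000genomes.blob.core.windows.net/dataset?<SAS-token>");
var container = new BlobContainerClient(containerUri);

foreach (BlobItem blob in container.GetBlobs().Take(5))
{
    Console.WriteLine(blob.Name);
}
```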
https://www.internationalgenome.org/contact
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Clinvar Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-clinvar-annotations.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
## Data Access
-West US 2: https://datasetclinvar.blob.core.windows.net/dataset
+West US 2: 'https://datasetclinvar.blob.core.windows.net/dataset'
-West Central US: https://datasetclinvar-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetclinvar-secondary.blob.core.windows.net/dataset'
[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-02-02&se=2050-01-01T08%3A00%3A00Z&si=prod&sr=c&sig=qFPPwPba1RmBvaffkzkLuzabYU5dZstSTgMwxuLNME8%3D
For any questions or feedback about this dataset, contact clinvar@ncbi.nlm.nih.g
Several public genomics data has been uploaded as an Azure Open Dataset [here](https://azure.microsoft.com/services/open-datasets/catalog/). We create a blob service linked to this open dataset. You can find examples of data calling procedure from Azure Open Dataset for `ClinVar` dataset in below:
-Users can call and download the following path with this notebook: https://datasetclinvar.blob.core.windows.net/dataset/ClinVarFullRelease_00-latest.xml.gz.md5
+Users can call and download the following path with this notebook: 'https://datasetclinvar.blob.core.windows.net/dataset/ClinVarFullRelease_00-latest.xml.gz.md5'
> [!NOTE]
> Users need to log in to their Azure account via the Azure CLI to view the data with the Azure ML SDK. However, they do not need to take any action to download the data.
blob_service_client.get_blob_to_path('dataset', 'ClinVarFullRelease_00-latest.xm
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Covid Tracking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-covid-tracking.md
Modified versions of the dataset are available in CSV, JSON, JSON-Lines, and Par
All modified versions have ISO 3166 subdivision codes and load times added, and use lower case column names with underscore separators. Raw data:
-https://pandemicdatalake.blob.core.windows.net/public/raw/covid-19/covid_tracking/latest/daily.json
+'https://pandemicdatalake.blob.core.windows.net/public/raw/covid-19/covid_tracking/latest/daily.json'
Previous versions of modified and raw data: https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_tracking/
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Encode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-encode.md
This dataset is stored in the West US 2 and West Central US Azure regions. We re
## Data Access
-West US 2: https://datasetencode.blob.core.windows.net/dataset
+West US 2: 'https://datasetencode.blob.core.windows.net/dataset'
-West Central US: https://datasetencode-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetencode-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2019-10-10&si=prod&sr=c&sig=9qSQZo4ggrCNpybBExU8SypuUZV33igI11xw0P7rB3c%3D
If you have any questions, concerns, or comments, email our help desk at encode-
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Gatk Resource Bundle https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-gatk-resource-bundle.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
1. datasetgatkbestpractices
- West US 2: https://datasetgatkbestpractices.blob.core.windows.net/dataset
+ West US 2: 'https://datasetgatkbestpractices.blob.core.windows.net/dataset'
- West Central US: https://datasetgatkbestpractices-secondary.blob.core.windows.net/dataset
+ West Central US: 'https://datasetgatkbestpractices-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2020-04-08&si=prod&sr=c&sig=6SaDfKtXAIfdpO%2BkvNA%2FsTNmNij%2Byh%2F%2F%2Bf98WAUqs7I%3D 2. datasetgatklegacybundles
- West US 2: https://datasetgatklegacybundles.blob.core.windows.net/dataset
+ West US 2: 'https://datasetgatklegacybundles.blob.core.windows.net/dataset'
- West Central US: https://datasetgatklegacybundles-secondary.blob.core.windows.net/dataset
+ West Central US: 'https://datasetgatklegacybundles-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2020-04-08&si=prod&sr=c&sig=xBfxOPBqHKUCszzwbNCBYF0k9osTQjKnZbEjXCW7gU0%3D 3. datasetgatktestdata
- West US 2: https://datasetgatktestdata.blob.core.windows.net/dataset
+ West US 2: 'https://datasetgatktestdata.blob.core.windows.net/dataset'
- West Central US: https://datasetgatktestdata-secondary.blob.core.windows.net/dataset
+ West Central US: 'https://datasetgatktestdata-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2020-04-08&si=prod&sr=c&sig=fzLts1Q2vKjuvR7g50vE4HteEHBxTcJbNvf%2FZCeDMO4%3D 4. datasetpublicbroadref
- West US 2: https://datasetpublicbroadref.blob.core.windows.net/dataset
+ West US 2: 'https://datasetpublicbroadref.blob.core.windows.net/dataset'
- West Central US: https://datasetpublicbroadref-secondary.blob.core.windows.net/dataset
+ West Central US: 'https://datasetpublicbroadref-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2020-04-08&si=prod&sr=c&sig=DQxmjB4D1lAfOW9AxIWbXwZx6ksbwjlNkixw597JnvQ%3D 5. datasetbroadpublic
- West US 2: https://datasetpublicbroadpublic.blob.core.windows.net/dataset
+ West US 2: 'https://datasetpublicbroadpublic.blob.core.windows.net/dataset'
- West Central US: https://datasetpublicbroadpublic-secondary.blob.core.windows.net/dataset
+ West Central US: 'https://datasetpublicbroadpublic-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): ?sv=2020-04-08&si=prod&sr=c&sig=u%2Bg2Ab7WKZEGiAkwlj6nKiEeZ5wdoJb10Az7uUwis%2Fg%3D
Visit the [GATK resource bundle official site](https://gatk.broadinstitute.org/h
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Gnomad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-gnomad.md
The Storage Account hosting this dataset is in the East US Azure region. Allocat
## Data Access
-Storage Account: https://azureopendatastorage.blob.core.windows.net/gnomad
+Storage Account: 'https://azureopendatastorage.blob.core.windows.net/gnomad'
The data is available publicly without restrictions, and the azcopy tool is recommended for bulk operations. For example, to view the VCFs in release 3.0 of gnomAD:
For any questions or feedback about this dataset, contact the [gnomAD team](http
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Human Reference Genomes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-human-reference-genomes.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
## Data Access
-West US 2: https://datasetreferencegenomes.blob.core.windows.net/dataset
+West US 2: 'https://datasetreferencegenomes.blob.core.windows.net/dataset'
-West Central US: https://datasetreferencegenomes-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetreferencegenomes-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): sv=2019-02-02&se=2050-01-01T08%3A00%3A00Z&si=prod&sr=c&sig=JtQoPFqiC24GiEB7v9zHLi4RrA2Kd1r%2F3iFt2l9%2FlV8%3D
For any questions or feedback about this dataset, contact the [Genome Reference
Several public genomics data has been uploaded as an Azure Open Dataset [here](https://azure.microsoft.com/services/open-datasets/catalog/). We create a blob service linked to this open dataset. You can find examples of data calling procedure from Azure Open Datasets for `Reference Genomes` dataset in below:
-Users can call and download the following path with this notebook: https://datasetreferencegenomes.blob.core.windows.net/dataset/vertebrate_mammalian/Homo_sapiens/latest_assembly_versions/GCF_000001405.39_GRCh38.p13/GCF_000001405.39_GRCh38.p13_assembly_structure/genomic_regions_definitions.txt
+Users can call and download the following path with this notebook: 'https://datasetreferencegenomes.blob.core.windows.net/dataset/vertebrate_mammalian/Homo_sapiens/latest_assembly_versions/GCF_000001405.39_GRCh38.p13/GCF_000001405.39_GRCh38.p13_assembly_structure/genomic_regions_definitions.txt'
**Important note:** Users need to log in to their Azure account via the Azure CLI to view the data with the Azure ML SDK. However, they do not need to take any action to download the data.
blob_service_client.get_blob_to_path('dataset/vertebrate_mammalian/Homo_sapiens/
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Illumina Platinum Genomes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-illumina-platinum-genomes.md
This dataset is stored in the West US 2 and West Central US Azure regions. We re
## Data Access
-West US 2: https://datasetplatinumgenomes.blob.core.windows.net/dataset
+West US 2: 'https://datasetplatinumgenomes.blob.core.windows.net/dataset'
-West Central US: https://datasetplatinumgenomes-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetplatinumgenomes-secondary.blob.core.windows.net/dataset'
[SAS Token](/azure/storage/common/storage-sas-overview): sv=2019-02-02&se=2050-01-01T08%3A00%3A00Z&si=prod&sr=c&sig=FFfZ0QaDcnEPQmWsshtpoYOjbzd4jtwIWeK%2Fc4i9MqM%3D
run gatk VariantsToTable -V NA12877.vcf.gz -F CHROM -F POS -F TYPE -F AC -F AD -
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Microsoft News https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-microsoft-news.md
Due to some reasons in learning embedding from the subgraph, a few entities may
## Storage location
-The data are stored in blobs in the West/East US data center, in the following blob container: https://mind201910small.blob.core.windows.net/release/.
+The data are stored in blobs in the West/East US data center, in the following blob container: 'https://mind201910small.blob.core.windows.net/release/'.
Within the container, the training and validation set are compressed into MINDlarge_train.zip and MINDlarge_dev.zip respectively.
See the following examples of how to use the Microsoft News Recommender dataset:
Check out several baseline news recommendation models developed on MIND from [Microsoft Recommenders Repository](https://github.com/microsoft/recommenders)
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Open Cravat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-open-cravat.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
## Data Access
-West US 2: https://datasetopencravat.blob.core.windows.net/dataset
+West US 2: 'https://datasetopencravat.blob.core.windows.net/dataset'
-West Central US: https://datasetopencravat-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetopencravat-secondary.blob.core.windows.net/dataset'
[SAS Token](../storage/common/storage-sas-overview.md): sv=2020-04-08&st=2021-03-11T23%3A50%3A01Z&se=2025-07-26T22%3A50%3A00Z&sr=c&sp=rl&sig=J9J9wnJOXsmEy7TFMq9wjcxjXDE%2B7KhGpCUL4elsC14%3D
rkarchi1@jhmi.edu
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Open Speech Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-open-speech-text.md
The dataset is provided in two forms:
- Archives available via Azure blob storage and/or direct links;
- Original files available via Azure blob storage;
-Everything is stored in https://azureopendatastorage.blob.core.windows.net/openstt/
+Everything is stored in 'https://azureopendatastorage.blob.core.windows.net/openstt/'
Folder structure:
ldisplay.waveplot(wav, sr=sr, max_points=50000.0, x_axis='time', offset=0.0, max
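A hedged sketch of loading one of the dataset's WAV files and rendering its waveform with librosa follows; it assumes a librosa release older than 0.10 (where `waveplot` is still available), the alias `ldisplay` for `librosa.display` as used in the fragment above, and an illustrative local file path.

```python
# Sketch only: assumes librosa < 0.10 (waveplot) and a locally downloaded WAV file.
import librosa
import librosa.display as ldisplay
import matplotlib.pyplot as plt

wav_path = "sample.wav"  # illustrative path to a file from the dataset

wav, sr = librosa.load(wav_path, sr=None)  # keep the original sampling rate
ldisplay.waveplot(wav, sr=sr, max_points=50000.0, x_axis='time')
plt.title(wav_path)
plt.show()
```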
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Oxford Covid Government Response Tracker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
This information can help decision-makers and citizens understand governmental r
## Datasets Modified versions of the dataset are available in CSV, JSON, JSON-Lines, and Parquet, updated daily:-- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/- covid_policy_tracker.csv-- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/- covid_policy_tracker.json-- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/- covid_policy_tracker.jsonl
+- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/covid_policy_tracker.csv
+- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/covid_policy_tracker.json
+- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/covid_policy_tracker.jsonl
- https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/covid_policy_tracker/latest/covid_policy_tracker.parquet
All modified versions have iso_country codes and load times added, and use lower case column names with underscore separators.
display(spark.sql('SELECT * FROM source LIMIT 10'))
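Alongside the Spark preview above, the daily-updated CSV can also be read directly over HTTPS with pandas. This is a minimal sketch that only assumes pandas is installed; column names are inspected rather than assumed.

```python
# Sketch: read the daily-updated CSV directly over HTTPS with pandas.
import pandas as pd

csv_url = ("https://pandemicdatalake.blob.core.windows.net/public/curated/"
           "covid-19/covid_policy_tracker/latest/covid_policy_tracker.csv")

df = pd.read_csv(csv_url)

print(df.shape)
print(df.columns.tolist())  # lowercase, underscore-separated column names
print(df.head(10))
```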
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset San Francisco Safety https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-san-francisco-safety.md
Fire department calls for service and 311 cases in San Francisco.
Fire Calls-For-Service includes all fire unit responses to calls. Each record includes the call number, incident number, address, unit identifier, call type, and disposition. All relevant time intervals are also included. Because this dataset is based on responses, and since most calls involved multiple units, there are multiple records for each call number. Addresses are associated with a block number, intersection, or call box, not a specific address.
-311 Cases include cases associated with a place or thing (for example parks, streets, or buildings) and created after July 1, 2008. Cases that are logged by a user about their own needs are excluded. For example, property or business tax questions, parking permit requests, and so on. For more information, see the [Program Link](http://support.datasf.org/customer/en/portal/articles/2429403-311-case-datafaq?b_id=13410).
+311 Cases include cases associated with a place or thing (for example parks, streets, or buildings) and created after July 1, 2008. Cases that are logged by a user about their own needs are excluded. For example, property or business tax questions, parking permit requests, and so on. For more information, see the [Program Link](https://support.datasf.org/help/311-case-data-faq).
## Volume and retention
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Snpeff https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-snpeff.md
This dataset is stored in the West US 2 and West Central US Azure regions. Alloc
## Data Access
-West US 2: https://datasetsnpeff.blob.core.windows.net/dataset
+West US 2: 'https://datasetsnpeff.blob.core.windows.net/dataset'
-West Central US: https://datasetsnpeff-secondary.blob.core.windows.net/dataset
+West Central US: 'https://datasetsnpeff-secondary.blob.core.windows.net/dataset'
[SAS Token](../storage/common/storage-sas-overview.md): sv=2019-10-10&st=2020-09-01T00%3A00%3A00Z&se=2050-09-01T00%3A00%3A00Z&si=prod&sr=c&sig=isafOa9tGnYBAvsXFUMDGMTbsG2z%2FShaihzp7JE5dHw%3D

## Use Terms
-Data is available without restrictions. More information and citation details, see [Accessing and using data in ClinVar](https://pcingola.github.io/SnpEff/SnpEff_manual.html#intro).
+Data is available without restrictions. For more information and citation details, see [Accessing and using data in ClinVar](https://pcingola.github.io/SnpEff/se_introduction/).
## Contact
For any questions or feedback about this dataset, contact [Pablo Cingolani](http
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
purview Create Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-purview-python.md
EventHub namespace. For more information refer to [Create Catalog Portal](create
2. Add the following code to the **Main** method that creates an instance of the PurviewManagementClient class. You use this object to create a Purview account, delete a Purview account, check name availability, and perform other resource provider operations.
- ```python
+ ```python
def main(): # Azure subscription ID
EventHub namespace. For more information refer to [Create Catalog Portal](create
## Create a purview account
-Add the following code to the **Main** method that creates a **purview account**. If your resource group already exists, comment out the first `create_or_update` statement.
+1. Add the following code to the **Main** method that creates a **purview account**. If your resource group already exists, comment out the first `create_or_update` statement.
-```python
+ ```python
# create the resource group
# comment out if the resource group already exists
resource_client.resource_groups.create_or_update(rg_name, rg_params)
Add the following code to the **Main** method that creates a **purview account**
print("Error in creating Purview account") break time.sleep(30)
-
-```
+ ```
-Now, add the following statement to invoke the **main** method when the program is run:
+2. Now, add the following statement to invoke the **main** method when the program is run:
-```python
-# Start the main method
-main()
-```
+ ```python
+ # Start the main method
+ main()
+ ```
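Putting the steps above together, the following is a condensed, hedged sketch of the overall flow, assuming `azure-identity`, `azure-mgmt-resource`, and `azure-mgmt-purview` are installed; the subscription, resource group, account name, region, and service principal values are placeholders, and the model and method names used here (`Account`, `Identity`, `begin_create_or_update`) should be checked against the full script below.

```python
# Condensed sketch of the quickstart flow; placeholders must be replaced,
# and model/method names should be verified against the full script below.
import time

from azure.identity import ClientSecretCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.purview import PurviewManagementClient
from azure.mgmt.purview.models import Account, Identity


def main():
    # Placeholders: subscription, resource group, account name, region,
    # and service principal credentials.
    subscription_id = "<subscription-id>"
    rg_name = "<resource-group-name>"
    purview_name = "<purview-account-name>"
    location = "<region>"

    credentials = ClientSecretCredential(
        tenant_id="<tenant-id>",
        client_id="<client-id>",
        client_secret="<client-secret>",
    )

    resource_client = ResourceManagementClient(credentials, subscription_id)
    purview_client = PurviewManagementClient(credentials, subscription_id)

    # Create the resource group; comment out if it already exists.
    resource_client.resource_groups.create_or_update(rg_name, {"location": location})

    # Create the Purview account (additional properties such as the SKU
    # may be required; see the full script below).
    purview_resource = Account(
        identity=Identity(type="SystemAssigned"),
        location=location,
    )
    poller = purview_client.accounts.begin_create_or_update(
        rg_name, purview_name, purview_resource
    )

    # Wait for provisioning to finish, then report the state.
    account = poller.result()
    while getattr(account, "provisioning_state", None) == "Creating":
        time.sleep(30)
        account = purview_client.accounts.get(rg_name, purview_name)
    print("Provisioning state:", account.provisioning_state)


# Start the main method
main()
```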
## Full script
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/built-in-roles.md
Previously updated : 05/26/2021 Last updated : 06/25/2021
The following table provides a brief description of each built-in role. Click th
> | [Azure Maps Data Contributor](#azure-maps-data-contributor) | Grants access to read, write, and delete access to map related data from an Azure maps account. | 8f5e0ce6-4f7b-4dcf-bddf-e6f48634a204 | > | [Azure Maps Data Reader](#azure-maps-data-reader) | Grants access to read map related data from an Azure maps account. | 423170ca-a8f6-4b0f-8487-9e4eb8f49bfa | > | [Azure Spring Cloud Data Reader](#azure-spring-cloud-data-reader) | Allow read access to Azure Spring Cloud Data | b5537268-8956-4941-a8f0-646150406f0c |
-> | [Media Services Account Administrator](#media-services-account-administrator) | Create, read, modify and delete Media Services accounts; read-only access to other Media Services resources. | 054126f8-9a2b-4f1c-a9ad-eca461f08466 |
-> | [Media Services Live Events Administrator](#media-services-live-events-administrator) | Create, read and modify Live Events, Assets, Asset Filters and Streaming Locators; read-only access to other Media Services resources. | 532bc159-b25e-42c0-969e-a1d439f60d77 |
-> | [Media Services Media Operator](#media-services-media-operator) | Create, read, modify, and delete of Assets, Asset Filters, Streaming Locators and Jobs; read-only access to other Media Services resources. | e4395492-1534-4db2-bedf-88c14621589c |
-> | [Media Services Policy Administrator](#media-services-policy-administrator) | Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources. | c4bba371-dacd-4a26-b320-7250bca963ae |
-> | [Media Services Streaming Endpoints Administrator](#media-services-streaming-endpoints-administrator) | Create, read, modify and delete Streaming Endpoints; read-only access to other Media Services resources. | 99dba123-b5fe-44d5-874c-ced7199a5804 |
+> | [Media Services Account Administrator](#media-services-account-administrator) | Create, read, modify, and delete Media Services accounts; read-only access to other Media Services resources. | 054126f8-9a2b-4f1c-a9ad-eca461f08466 |
+> | [Media Services Live Events Administrator](#media-services-live-events-administrator) | Create, read, modify, and delete Live Events, Assets, Asset Filters, and Streaming Locators; read-only access to other Media Services resources. | 532bc159-b25e-42c0-969e-a1d439f60d77 |
+> | [Media Services Media Operator](#media-services-media-operator) | Create, read, modify, and delete Assets, Asset Filters, Streaming Locators, and Jobs; read-only access to other Media Services resources. | e4395492-1534-4db2-bedf-88c14621589c |
+> | [Media Services Policy Administrator](#media-services-policy-administrator) | Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies, and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources. | c4bba371-dacd-4a26-b320-7250bca963ae |
+> | [Media Services Streaming Endpoints Administrator](#media-services-streaming-endpoints-administrator) | Create, read, modify, and delete Streaming Endpoints; read-only access to other Media Services resources. | 99dba123-b5fe-44d5-874c-ced7199a5804 |
> | [Search Service Contributor](#search-service-contributor) | Lets you manage Search services, but not access to them. | 7ca78c08-252a-4471-8644-bb5ff32d4ba0 | > | [SignalR AccessKey Reader](#signalr-accesskey-reader) | Read SignalR Service Access Keys | 04165923-9d83-45d5-8227-78b77b0a687e | > | [SignalR App Server (Preview)](#signalr-app-server-preview) | Lets your app server access SignalR Service with AAD auth options. | 420fcaa2-552c-430f-98ca-3264be4806c7 |
Lets you manage backup service, but can't create vaults and give access to other
> | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/locations/operationStatus/read | Gets Operation Status for a given Operation | > | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/Vaults/backupProtectionIntents/read | List all backup Protection Intents | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/getBackupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/write | Creates a Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/delete | Deletes the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/write | Creates Backup Policy |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/delete | Deletes the Backup Policy |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/findRestorableTimeRanges/action | Finds Restorable Time Ranges |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/write | Create BackupVault operation creates an Azure resource of type 'Backup Vault' |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/checkNameAvailability/action | Checks if the requested BackupVault Name is Available |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/providers/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage backup service, but can't create vaults and give access to other
"Microsoft.RecoveryServices/operations/read", "Microsoft.RecoveryServices/locations/operationStatus/read", "Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read",
- "Microsoft.Support/*"
+ "Microsoft.Support/*",
+ "Microsoft.DataProtection/locations/getBackupStatus/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/write",
+ "Microsoft.DataProtection/backupVaults/backupInstances/delete",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/backup/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/restore/action",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/write",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/delete",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/findRestorableTimeRanges/action",
+ "Microsoft.DataProtection/backupVaults/write",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/operationResults/read",
+ "Microsoft.DataProtection/locations/checkNameAvailability/action",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/locations/operationStatus/read",
+ "Microsoft.DataProtection/locations/operationResults/read",
+ "Microsoft.DataProtection/backupVaults/validateForBackup/action",
+ "Microsoft.DataProtection/providers/operations/read"
], "notActions": [], "dataActions": [],
Lets you manage backup services, except removal of backup, vault creation and gi
> | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/locations/operationStatus/read | Gets Operation Status for a given Operation | > | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/Vaults/backupProtectionIntents/read | List all backup Protection Intents | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/findRestorableTimeRanges/action | Finds Restorable Time Ranges |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/providers/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Lets you manage backup services, except removal of backup, vault creation and gi
"Microsoft.RecoveryServices/operations/read", "Microsoft.RecoveryServices/locations/operationStatus/read", "Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read",
- "Microsoft.Support/*"
+ "Microsoft.Support/*",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/findRestorableTimeRanges/action",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/operationResults/read",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/locations/operationStatus/read",
+ "Microsoft.DataProtection/locations/operationResults/read",
+ "Microsoft.DataProtection/providers/operations/read"
], "notActions": [], "dataActions": [],
Can view backup services, but can't make changes [Learn more](../backup/backup-r
> | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. | > | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. | > | [Microsoft.RecoveryServices](resource-provider-operations.md#microsoftrecoveryservices)/locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/getBackupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/write | Creates a Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/backupInstances/findRestorableTimeRanges/action | Finds Restorable Time Ranges |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
+> | [Microsoft.DataProtection](resource-provider-operations.md#microsoftdataprotection)/providers/operations/read | Operation returns the list of Operations for a Resource Provider |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can view backup services, but can't make changes [Learn more](../backup/backup-r
"Microsoft.RecoveryServices/locations/backupCrrJobs/action", "Microsoft.RecoveryServices/locations/backupCrrJob/action", "Microsoft.RecoveryServices/locations/backupCrrOperationResults/read",
- "Microsoft.RecoveryServices/locations/backupCrrOperationsStatus/read"
+ "Microsoft.RecoveryServices/locations/backupCrrOperationsStatus/read",
+ "Microsoft.DataProtection/locations/getBackupStatus/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/write",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/backup/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action",
+ "Microsoft.DataProtection/backupVaults/backupInstances/restore/action",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupPolicies/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read",
+ "Microsoft.DataProtection/backupVaults/backupInstances/findRestorableTimeRanges/action",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/operationResults/read",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/backupVaults/read",
+ "Microsoft.DataProtection/locations/operationStatus/read",
+ "Microsoft.DataProtection/locations/operationResults/read",
+ "Microsoft.DataProtection/backupVaults/validateForBackup/action",
+ "Microsoft.DataProtection/providers/operations/read"
], "notActions": [], "dataActions": [],
Allow read access to Azure Spring Cloud Data [Learn more](../spring-cloud/how-to
### Media Services Account Administrator
-Create, read, modify and delete Media Services accounts; read-only access to other Media Services resources.
+Create, read, modify, and delete Media Services accounts; read-only access to other Media Services resources.
> [!div class="mx-tableFixed"] > | Actions | Description |
Create, read, modify and delete Media Services accounts; read-only access to oth
"assignableScopes": [ "/" ],
- "description": "Create, read, modify and delete Media Services accounts; read-only access to other Media Services resources.",
+ "description": "Create, read, modify, and delete Media Services accounts; read-only access to other Media Services resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/054126f8-9a2b-4f1c-a9ad-eca461f08466", "name": "054126f8-9a2b-4f1c-a9ad-eca461f08466", "permissions": [
Create, read, modify and delete Media Services accounts; read-only access to oth
### Media Services Live Events Administrator
-Create, read and modify Live Events, Assets, Asset Filters and Streaming Locators; read-only access to other Media Services resources.
+Create, read, modify, and delete Live Events, Assets, Asset Filters, and Streaming Locators; read-only access to other Media Services resources.
> [!div class="mx-tableFixed"] > | Actions | Description |
Create, read and modify Live Events, Assets, Asset Filters and Streaming Locator
"assignableScopes": [ "/" ],
- "description": "Create, read and modify Live Events, Assets, Asset Filters and Streaming Locators; read-only access to other Media Services resources.",
+ "description": "Create, read, modify, and delete Live Events, Assets, Asset Filters, and Streaming Locators; read-only access to other Media Services resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/532bc159-b25e-42c0-969e-a1d439f60d77", "name": "532bc159-b25e-42c0-969e-a1d439f60d77", "permissions": [
Create, read and modify Live Events, Assets, Asset Filters and Streaming Locator
### Media Services Media Operator
-Create, read, modify, and delete of Assets, Asset Filters, Streaming Locators and Jobs; read-only access to other Media Services resources.
+Create, read, modify, and delete Assets, Asset Filters, Streaming Locators, and Jobs; read-only access to other Media Services resources.
> [!div class="mx-tableFixed"] > | Actions | Description |
Create, read, modify, and delete of Assets, Asset Filters, Streaming Locators an
"assignableScopes": [ "/" ],
- "description": "Create, read, modify, and delete of Assets, Asset Filters, Streaming Locators and Jobs; read-only access to other Media Services resources.",
+ "description": "Create, read, modify, and delete Assets, Asset Filters, Streaming Locators, and Jobs; read-only access to other Media Services resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/e4395492-1534-4db2-bedf-88c14621589c", "name": "e4395492-1534-4db2-bedf-88c14621589c", "permissions": [
Create, read, modify, and delete of Assets, Asset Filters, Streaming Locators an
### Media Services Policy Administrator
-Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources.
+Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies, and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources.
> [!div class="mx-tableFixed"] > | Actions | Description |
Create, read, modify, and delete Account Filters, Streaming Policies, Content Ke
"assignableScopes": [ "/" ],
- "description": "Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources.",
+ "description": "Create, read, modify, and delete Account Filters, Streaming Policies, Content Key Policies, and Transforms; read-only access to other Media Services resources. Cannot create Jobs, Assets or Streaming resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/c4bba371-dacd-4a26-b320-7250bca963ae", "name": "c4bba371-dacd-4a26-b320-7250bca963ae", "permissions": [
Create, read, modify, and delete Account Filters, Streaming Policies, Content Ke
### Media Services Streaming Endpoints Administrator
-Create, read, modify and delete Streaming Endpoints; read-only access to other Media Services resources.
+Create, read, modify, and delete Streaming Endpoints; read-only access to other Media Services resources.
> [!div class="mx-tableFixed"] > | Actions | Description |
Create, read, modify and delete Streaming Endpoints; read-only access to other M
"assignableScopes": [ "/" ],
- "description": "Create, read, modify and delete Streaming Endpoints; read-only access to other Media Services resources.",
+ "description": "Create, read, modify, and delete Streaming Endpoints; read-only access to other Media Services resources.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/99dba123-b5fe-44d5-874c-ced7199a5804", "name": "99dba123-b5fe-44d5-874c-ced7199a5804", "permissions": [
Push trusted images to or pull trusted images from a container registry enabled
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | *none* | |
+> | [Microsoft.ContainerRegistry](resource-provider-operations.md#microsoftcontainerregistry)/registries/trustedCollections/write | Allows push or publish of trusted collections of container registry content. This is similar to Microsoft.ContainerRegistry/registries/sign/write action except that this is a data action |
> | **NotDataActions** | | > | *none* | |
Push trusted images to or pull trusted images from a container registry enabled
"Microsoft.ContainerRegistry/registries/sign/write" ], "notActions": [],
- "dataActions": [],
+ "dataActions": [
+ "Microsoft.ContainerRegistry/registries/trustedCollections/write"
+ ],
"notDataActions": [] } ],
Pull quarantined images from a container registry. [Learn more](../container-reg
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | *none* | |
+> | [Microsoft.ContainerRegistry](resource-provider-operations.md#microsoftcontainerregistry)/registries/quarantinedArtifacts/read | Allows pull or get of the quarantined artifacts from container registry. This is similar to Microsoft.ContainerRegistry/registries/quarantine/read except that it is a data action |
> | **NotDataActions** | | > | *none* | |
Pull quarantined images from a container registry. [Learn more](../container-reg
"Microsoft.ContainerRegistry/registries/quarantine/read" ], "notActions": [],
- "dataActions": [],
+ "dataActions": [
+ "Microsoft.ContainerRegistry/registries/quarantinedArtifacts/read"
+ ],
"notDataActions": [] } ],
Lets you manage SQL databases, but not access to them. Also, you can't manage th
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metrics/read | Read metrics | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricDefinitions/read | Read metric definitions | > | **NotActions** | |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/databases/ledgerDigestUploads/write | Enable uploading ledger digests |
+> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/servers/databases/ledgerDigestUploads/disable/action | Disable uploading ledger digests |
> | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/databases/currentSensitivityLabels/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/databases/recommendedSensitivityLabels/* | | > | [Microsoft.Sql](resource-provider-operations.md#microsoftsql)/managedInstances/databases/schemas/tables/columns/sensitivityLabels/* | |
Lets you manage SQL databases, but not access to them. Also, you can't manage th
"Microsoft.Insights/metricDefinitions/read" ], "notActions": [
+ "Microsoft.Sql/servers/databases/ledgerDigestUploads/write",
+ "Microsoft.Sql/servers/databases/ledgerDigestUploads/disable/action",
"Microsoft.Sql/managedInstances/databases/currentSensitivityLabels/*", "Microsoft.Sql/managedInstances/databases/recommendedSensitivityLabels/*", "Microsoft.Sql/managedInstances/databases/schemas/tables/columns/sensitivityLabels/*",
Lets you manage logic apps, but not change access to them. [Learn more](../logic
"assignableScopes": [ "/" ],
- "description": "Lets you manage logic app, but not change access to them.",
+ "description": "Lets you manage logic app, but not access to them.",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/87a39d53-fc1b-424a-814c-f7e04687dc9e", "name": "87a39d53-fc1b-424a-814c-f7e04687dc9e", "permissions": [
Can read, write, delete and re-onboard Azure Connected Machines.
> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/read | Read any Azure Arc machines | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/write | Writes an Azure Arc machines | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/delete | Deletes an Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/UpgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/read | Reads any Azure Arc extensions | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/write | Installs or Updates an Azure Arc extensions | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/delete | Deletes an Azure Arc extensions |
Can read, write, delete and re-onboard Azure Connected Machines.
"Microsoft.HybridCompute/machines/read", "Microsoft.HybridCompute/machines/write", "Microsoft.HybridCompute/machines/delete",
+ "Microsoft.HybridCompute/machines/UpgradeExtensions/action",
"Microsoft.HybridCompute/machines/extensions/read", "Microsoft.HybridCompute/machines/extensions/write", "Microsoft.HybridCompute/machines/extensions/delete",
Can view costs and manage cost configuration (e.g. budgets, exports) [Learn more
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/configurations/read | Get configurations |
-> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/recommendations/read | Reads recommendations |
+> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/configurations/read | |
+> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/recommendations/read | |
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | [Microsoft.Billing](resource-provider-operations.md#microsoftbilling)/billingProperty/read | | > | **NotActions** | |
Can view cost data and configuration (e.g. budgets, exports) [Learn more](../cos
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
-> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/configurations/read | Get configurations |
-> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/recommendations/read | Reads recommendations |
+> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/configurations/read | |
+> | [Microsoft.Advisor](resource-provider-operations.md#microsoftadvisor)/recommendations/read | |
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | [Microsoft.Billing](resource-provider-operations.md#microsoftbilling)/billingProperty/read | | > | **NotActions** | |
Allows user to use the applications in an application group. [Learn more](../vir
### Desktop Virtualization User Session Operator
-Operator of the Desktop Virtualization Uesr Session. [Learn more](../virtual-desktop/rbac.md)
+Operator of the Desktop Virtualization User Session. [Learn more](../virtual-desktop/rbac.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 05/26/2021 Last updated : 06/25/2021
Click the resource provider name in the following table to see the list of opera
| [Microsoft.Commerce](#microsoftcommerce) | | [Microsoft.Consumption](#microsoftconsumption) | | [Microsoft.CostManagement](#microsoftcostmanagement) |
+| [Microsoft.DataProtection](#microsoftdataprotection) |
| [Microsoft.Features](#microsoftfeatures) | | [Microsoft.GuestConfiguration](#microsoftguestconfiguration) | | [Microsoft.HybridCompute](#microsofthybridcompute) |
Azure service: [Azure Data Box](../databox/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.DataBox/jobs/read | List or get the Orders |
-> | Microsoft.DataBox/jobs/delete | Delete the Orders |
-> | Microsoft.DataBox/jobs/write | Create or update the Orders |
-> | Microsoft.DataBox/locations/availableSkus/read | List or get the Available Skus |
-> | Microsoft.DataBox/locations/operationResults/read | List or get the Operation Results |
-> | Microsoft.DataBox/operations/read | List or get the Operations |
-> | **DataAction** | **Description** |
> | Microsoft.DataBox/register/action | Register Provider Microsoft.Databox | > | Microsoft.DataBox/unregister/action | Un-Register Provider Microsoft.Databox | > | Microsoft.DataBox/jobs/cancel/action | Cancels an order in progress. | > | Microsoft.DataBox/jobs/bookShipmentPickUp/action | Allows to book a pick up for return shipments. | > | Microsoft.DataBox/jobs/mitigate/action | This method helps in performing mitigation action on a job with a resolution code | > | Microsoft.DataBox/jobs/markDevicesShipped/action | |
+> | Microsoft.DataBox/jobs/read | List or get the Orders |
+> | Microsoft.DataBox/jobs/delete | Delete the Orders |
+> | Microsoft.DataBox/jobs/write | Create or update the Orders |
> | Microsoft.DataBox/jobs/listCredentials/action | Lists the unencrypted credentials related to the order. | > | Microsoft.DataBox/locations/validateInputs/action | This method does all type of validations. | > | Microsoft.DataBox/locations/validateAddress/action | Validates the shipping address and provides alternate addresses if any. | > | Microsoft.DataBox/locations/availableSkus/action | This method returns the list of available skus. | > | Microsoft.DataBox/locations/regionConfiguration/action | This method returns the configurations for the region. |
+> | Microsoft.DataBox/locations/availableSkus/read | List or get the Available Skus |
+> | Microsoft.DataBox/locations/operationResults/read | List or get the Operation Results |
+> | Microsoft.DataBox/operations/read | List or get the Operations |
> | Microsoft.DataBox/subscriptions/resourceGroups/moveResources/action | This method performs the resource move. | > | Microsoft.DataBox/subscriptions/resourceGroups/validateMoveResources/action | This method validates whether resource move is allowed or not. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/read | Reads a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/write | Writes a snapshot resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/volumes/snapshots/delete | Deletes a snapshot resource. |
+> | Microsoft.NetApp/netAppAccounts/ipsecPolicies/read | Reads an IPSec policy resource. |
+> | Microsoft.NetApp/netAppAccounts/ipsecPolicies/write | Writes an IPSec policy resource. |
+> | Microsoft.NetApp/netAppAccounts/ipsecPolicies/delete | Deletes an IPSec policy resource. |
> | Microsoft.NetApp/netAppAccounts/snapshotPolicies/read | Reads a snapshot policy resource. | > | Microsoft.NetApp/netAppAccounts/snapshotPolicies/write | Writes a snapshot policy resource. | > | Microsoft.NetApp/netAppAccounts/snapshotPolicies/delete | Deletes a snapshot policy resource. |
Azure service: [Storage](../storage/index.yml)
> | microsoft.storagesync/storageSyncServices/registeredServers/read | Read any Registered Server | > | microsoft.storagesync/storageSyncServices/registeredServers/write | Create or Update any Registered Server | > | microsoft.storagesync/storageSyncServices/registeredServers/delete | Delete any Registered Server |
-> | microsoft.storagesync/storageSyncServices/registeredServers/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for Registered Server |
> | microsoft.storagesync/storageSyncServices/syncGroups/read | Read any Sync Groups | > | microsoft.storagesync/storageSyncServices/syncGroups/write | Create or Update any Sync Groups | > | microsoft.storagesync/storageSyncServices/syncGroups/delete | Delete any Sync Groups |
Azure service: [Storage](../storage/index.yml)
> | microsoft.storagesync/storageSyncServices/syncGroups/cloudEndpoints/restoreheartbeat/action | Restore heartbeat | > | microsoft.storagesync/storageSyncServices/syncGroups/cloudEndpoints/triggerChangeDetection/action | Call this action to trigger detection of changes on a cloud endpoint's file share | > | microsoft.storagesync/storageSyncServices/syncGroups/cloudEndpoints/operationresults/read | Gets the status of an asynchronous backup/restore operation |
-> | microsoft.storagesync/storageSyncServices/syncGroups/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for Sync Groups |
> | microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints/read | Read any Server Endpoints | > | microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints/write | Create or Update any Server Endpoints | > | microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints/delete | Delete any Server Endpoints | > | microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints/recallAction/action | Call this action to recall files to a server |
-> | microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for Server Endpoints |
> | microsoft.storagesync/storageSyncServices/workflows/read | Read Workflows | > | microsoft.storagesync/storageSyncServices/workflows/operationresults/read | Gets the status of an asynchronous operation | > | microsoft.storagesync/storageSyncServices/workflows/operations/read | Gets the status of an asynchronous operation |
Azure service: [Azure Maps](../azure-maps/index.yml)
> | Microsoft.Maps/operations/read | Read the provider operations | > | Microsoft.Maps/resourceTypes/read | Read the provider resourceTypes | > | **DataAction** | **Description** |
+> | Microsoft.Maps/accounts/services/analytics/read | Allows reading of data for Analytics services. |
+> | Microsoft.Maps/accounts/services/analytics/delete | Allows deleting of data for Analytic services. |
+> | Microsoft.Maps/accounts/services/analytics/write | Allows writing of data for Analytic services. |
> | Microsoft.Maps/accounts/services/data/read | Allows reading of data for data upload services and Private Atlas. | > | Microsoft.Maps/accounts/services/data/delete | Allows deleting of data for data upload services and Private Atlas | > | Microsoft.Maps/accounts/services/data/write | Allows writing or updating of data for data upload services and Private Atlas. |
+> | Microsoft.Maps/accounts/services/dataordering/read | Allows reading of data for DataOrdering services. |
+> | Microsoft.Maps/accounts/services/dataordering/write | Allows writing of data for Data Ordering services. |
> | Microsoft.Maps/accounts/services/elevation/read | Allows reading of data for Elevation services. | > | Microsoft.Maps/accounts/services/geolocation/read | Allows reading of data for Geolocation services. | > | Microsoft.Maps/accounts/services/mobility/read | Allows reading of data for Mobility services. |
Azure service: [Media Services](../media-services/index.yml)
> | Microsoft.Media/mediaservices/liveEvents/liveOutputs/write | Create or Update any Live Output | > | Microsoft.Media/mediaservices/liveEvents/liveOutputs/delete | Delete any Live Output | > | Microsoft.Media/mediaservices/liveOutputOperations/read | Read any Live Output Operation |
-> | Microsoft.Media/mediaservices/mediaGraphs/read | Read any Media Graph |
-> | Microsoft.Media/mediaservices/mediaGraphs/write | Create or Update any Media Graph |
-> | Microsoft.Media/mediaservices/mediaGraphs/delete | Delete any Media Graph |
-> | Microsoft.Media/mediaservices/mediaGraphs/start/action | Start any Media Graph Operation |
-> | Microsoft.Media/mediaservices/mediaGraphs/stop/action | Stop any Media Graph Operation |
> | Microsoft.Media/mediaservices/privateEndpointConnectionOperations/read | Read any Private Endpoint Connection Operation | > | Microsoft.Media/mediaservices/privateEndpointConnectionProxies/read | Read any Private Endpoint Connection Proxy | > | Microsoft.Media/mediaservices/privateEndpointConnectionProxies/write | Create Private Endpoint Connection Proxy |
Azure service: [Media Services](../media-services/index.yml)
> | Microsoft.Media/videoAnalyzers/edgeModules/write | Create or Update any Edge Module | > | Microsoft.Media/videoAnalyzers/edgeModules/delete | Delete any Edge Module | > | Microsoft.Media/videoAnalyzers/edgeModules/listProvisioningToken/action | Creates a new provisioning token.<br>A provisioning token allows for a single instance of Azure Video analyzer IoT edge module to be initialized and authorized to the cloud account.<br>The provisioning token itself is short lived and it is only used for the initial handshake between IoT edge module and the cloud.<br>After the initial handshake, the IoT edge module will agree on a set of authentication keys which will be auto-rotated as long as the module is able to periodically connect to the cloud.<br>A new provisioning token can be generated for the same IoT edge module in case the module state lost or reset |
+> | Microsoft.Media/videoAnalyzers/livePipelines/read | Read any Live Pipeline |
+> | Microsoft.Media/videoAnalyzers/livePipelines/write | Create or Update any Live Pipeline |
+> | Microsoft.Media/videoAnalyzers/livePipelines/delete | Delete any Live Pipeline |
+> | Microsoft.Media/videoAnalyzers/livePipelines/activate/action | Activate any Live Pipeline |
+> | Microsoft.Media/videoAnalyzers/livePipelines/deactivate/action | Deactivate any Live Pipeline |
+> | Microsoft.Media/videoAnalyzers/livePipelines/operationsStatus/read | Read any Live Pipeline operation status |
+> | Microsoft.Media/videoAnalyzers/pipelineTopologies/read | Read any Pipeline Topology |
+> | Microsoft.Media/videoAnalyzers/pipelineTopologies/write | Create or Update any Pipeline Topology |
+> | Microsoft.Media/videoAnalyzers/pipelineTopologies/delete | Delete any Pipeline Topology |
> | Microsoft.Media/videoAnalyzers/videos/read | Read any Video | > | Microsoft.Media/videoAnalyzers/videos/write | Create or Update any Video | > | Microsoft.Media/videoAnalyzers/videos/delete | Delete any Video |
Azure service: [Azure Search](../search/index.yml)
> | Microsoft.Search/searchServices/regenerateAdminKey/action | Regenerates the admin key. | > | Microsoft.Search/searchServices/listQueryKeys/action | Returns the list of query API keys for the given Azure Search service. | > | Microsoft.Search/searchServices/createQueryKey/action | Creates the query key. |
+> | Microsoft.Search/searchServices/dataSources/read | Return a data source or a list of data sources. |
+> | Microsoft.Search/searchServices/dataSources/write | Create a data source or modify its properties. |
+> | Microsoft.Search/searchServices/dataSources/delete | Delete a data source. |
+> | Microsoft.Search/searchServices/debugSessions/read | Return a debug session or a list of debug sessions. |
+> | Microsoft.Search/searchServices/debugSessions/write | Create a debug session or modify its properties. |
+> | Microsoft.Search/searchServices/debugSessions/delete | Delete a debug session. |
+> | Microsoft.Search/searchServices/debugSessions/execute/action | Use a debug session, get execution data, or evaluate expressions on it. |
> | Microsoft.Search/searchServices/deleteQueryKey/delete | Deletes the query key. |
+> | Microsoft.Search/searchServices/indexers/read | Return an indexer or its status, or return a list of indexers or their statuses. |
+> | Microsoft.Search/searchServices/indexers/write | Create an indexer, modify its properties, or manage its execution. |
+> | Microsoft.Search/searchServices/indexers/delete | Delete an indexer. |
+> | Microsoft.Search/searchServices/indexes/read | Return an index or its statistics, return a list of indexes or their statistics, or test the lexical analysis components of an index. |
+> | Microsoft.Search/searchServices/indexes/write | Create an index or modify its properties. |
+> | Microsoft.Search/searchServices/indexes/delete | Delete an index. |
> | Microsoft.Search/searchServices/privateEndpointConnectionProxies/validate/action | Validates a private endpoint connection create call from NRP side | > | Microsoft.Search/searchServices/privateEndpointConnectionProxies/write | Creates a private endpoint connection proxy with the specified parameters or updates the properties or tags for the specified private endpoint connection proxy | > | Microsoft.Search/searchServices/privateEndpointConnectionProxies/read | Returns the list of private endpoint connection proxies or gets the properties for the specified private endpoint connection proxy |
Azure service: [Azure Search](../search/index.yml)
> | Microsoft.Search/searchServices/sharedPrivateLinkResources/read | Returns the list of shared private link resources or gets the properties for the specified shared private link resource | > | Microsoft.Search/searchServices/sharedPrivateLinkResources/delete | Deletes an existing shared private link resource | > | Microsoft.Search/searchServices/sharedPrivateLinkResources/operationStatuses/read | Get the details of a long running shared private link resource operation |
+> | Microsoft.Search/searchServices/skillsets/read | Return a skillset or a list of skillsets. |
+> | Microsoft.Search/searchServices/skillsets/write | Create a skillset or modify its properties. |
+> | Microsoft.Search/searchServices/skillsets/delete | Delete a skillset. |
+> | Microsoft.Search/searchServices/synonymMaps/read | Return a synonym map or a list of synonym maps. |
+> | Microsoft.Search/searchServices/synonymMaps/write | Create a synonym map or modify its properties. |
+> | Microsoft.Search/searchServices/synonymMaps/delete | Delete a synonym map. |
+> | **DataAction** | **Description** |
+> | Microsoft.Search/searchServices/indexes/documents/read | Read documents or suggested query terms from an index. |
+> | Microsoft.Search/searchServices/indexes/documents/write | Upload documents to an index or modify existing documents. |
+> | Microsoft.Search/searchServices/indexes/documents/delete | Delete documents from an index. |
### Microsoft.SignalRService
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/SignalR/eventGridFilters/read | Get the properties of the specified event grid filter or lists all the event grid filters for the specified SignalR resource. | > | Microsoft.SignalRService/SignalR/eventGridFilters/write | Create or update an event grid filter for a SignalR resource with the specified parameters. | > | Microsoft.SignalRService/SignalR/eventGridFilters/delete | Delete an event grid filter from a SignalR resource. |
+> | Microsoft.SignalRService/SignalR/operationResults/read | |
+> | Microsoft.SignalRService/SignalR/operationStatuses/read | |
> | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/SignalR/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [Azure SignalR Service](../azure-signalr/index.yml)
> | Microsoft.SignalRService/WebPubSub/regeneratekey/action | Change the value of WebPubSub access keys in the management portal or through API | > | Microsoft.SignalRService/WebPubSub/restart/action | To restart a WebPubSub resource in the management portal or through API. There will be certain downtime. | > | Microsoft.SignalRService/WebPubSub/detectors/read | Read Detector |
+> | Microsoft.SignalRService/WebPubSub/operationResults/read | |
+> | Microsoft.SignalRService/WebPubSub/operationStatuses/read | |
> | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/validate/action | Validate Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/write | Write Private Endpoint Connection Proxy | > | Microsoft.SignalRService/WebPubSub/privateEndpointConnectionProxies/read | Read Private Endpoint Connection Proxy |
Azure service: [App Service](../app-service/index.yml), [Azure Functions](../azu
> | Microsoft.Web/staticSites/Read | Get the properties of a Static Site | > | Microsoft.Web/staticSites/Write | Create a new Static Site or update an existing one | > | Microsoft.Web/staticSites/Delete | Delete an existing Static Site |
+> | Microsoft.Web/staticSites/validateCustomDomainOwnership/action | Validate the custom domain ownership for a static site |
> | Microsoft.Web/staticSites/createinvitation/action | Creates invitation link for static site user for a set of roles | > | Microsoft.Web/staticSites/listConfiguredRoles/action | Lists the roles configured for the static site. | > | Microsoft.Web/staticSites/listfunctionappsettings/Action | List function app settings for a Static Site |
Azure service: [Container Registry](../container-registry/index.yml)
> | Microsoft.ContainerRegistry/registries/webhooks/ping/action | Triggers a ping event to be sent to the webhook. | > | Microsoft.ContainerRegistry/registries/webhooks/listEvents/action | Lists recent events for the specified webhook. | > | Microsoft.ContainerRegistry/registries/webhooks/operationStatuses/read | Gets a webhook async operation status |
+> | **DataAction** | **Description** |
+> | Microsoft.ContainerRegistry/registries/quarantinedArtifacts/read | Allows pull or get of the quarantined artifacts from container registry. This is similar to Microsoft.ContainerRegistry/registries/quarantine/read except that it is a data action |
+> | Microsoft.ContainerRegistry/registries/quarantinedArtifacts/write | Allows write or update of the quarantine state of quarantined artifacts. This is similar to Microsoft.ContainerRegistry/registries/quarantine/write action except that it is a data action |
+> | Microsoft.ContainerRegistry/registries/trustedCollections/write | Allows push or publish of trusted collections of container registry content. This is similar to Microsoft.ContainerRegistry/registries/sign/write action except that this is a data action |
### Microsoft.ContainerService
Azure service: [Data Factory](../data-factory/index.yml)
> | Microsoft.DataFactory/factories/integrationruntimes/nodes/delete | Deletes the Node for the specified Integration Runtime. | > | Microsoft.DataFactory/factories/integrationruntimes/nodes/write | Updates a self-hosted Integration Runtime Node. | > | Microsoft.DataFactory/factories/integrationruntimes/nodes/ipAddress/action | Returns the IP Address for the specified node of the Integration Runtime. |
+> | Microsoft.DataFactory/factories/integrationruntimes/outboundNetworkDependenciesEndpoints/read | Get Azure-SSIS Integration Runtime outbound network dependency endpoints for the specified Integration Runtime. |
> | Microsoft.DataFactory/factories/linkedServices/read | Reads Linked Service. | > | Microsoft.DataFactory/factories/linkedServices/delete | Deletes Linked Service. | > | Microsoft.DataFactory/factories/linkedServices/write | Create or Update Linked Service |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/update/action | Update a long term retention backup | > | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/read | Lists the long term retention backups for a database | > | Microsoft.Sql/locations/longTermRetentionServers/longTermRetentionDatabases/longTermRetentionBackups/delete | Deletes a long term retention backup |
+> | Microsoft.Sql/locations/managedDatabaseMoveAzureAsyncOperation/read | Gets Managed Instance database move Azure async operation. |
+> | Microsoft.Sql/locations/managedDatabaseMoveOperationResults/read | Gets Managed Instance database move operation result. |
> | Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/completeRestore/action | Completes managed database restore operation | > | Microsoft.Sql/locations/managedInstanceEncryptionProtectorAzureAsyncOperation/read | Gets in-progress operations on transparent data encryption managed instance encryption protector | > | Microsoft.Sql/locations/managedInstanceEncryptionProtectorOperationResults/read | Gets in-progress operations on transparent data encryption managed instance encryption protector |
Azure service: [Azure SQL Database](../azure-sql/database/index.yml), [Azure SQL
> | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/read | Reads a specific managed server Azure Active Directory only authentication object | > | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/write | Adds or updates a specific managed server Azure Active Directory only authentication object | > | Microsoft.Sql/managedInstances/azureADOnlyAuthentications/delete | Deletes a specific managed server Azure Active Directory only authentication object |
+> | Microsoft.Sql/managedInstances/databases/cancelMove/action | Cancels Managed Instance database move. |
+> | Microsoft.Sql/managedInstances/databases/completeMove/action | Completes Managed Instance database move. |
+> | Microsoft.Sql/managedInstances/databases/startMove/action | Starts Managed Instance database move. |
> | Microsoft.Sql/managedInstances/databases/read | Gets existing managed database | > | Microsoft.Sql/managedInstances/databases/delete | Deletes an existing managed database | > | Microsoft.Sql/managedInstances/databases/write | Creates a new database or updates an existing database. |
Azure service: [Azure Data Explorer](/azure/data-explorer/)
> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/read | Reads a private endpoint connection proxy | > | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/write | Writes a private endpoint connection proxy | > | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/delete | Deletes a private endpoint connection proxy |
-> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/Validate/action | Validates a private endpoint connection proxy |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/read | Reads a private endpoint connection proxy |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/write | Writes a private endpoint connection proxy |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnectionProxies/delete | Deletes a private endpoint connection proxy |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnections/read | Reads a private endpoint connection |
+> | Microsoft.Kusto/Clusters/PrivateEndpointConnections/write | Writes a private endpoint connection |
> | Microsoft.Kusto/Clusters/PrivateLinkResources/read | Reads private link resources | > | Microsoft.Kusto/Clusters/SKUs/read | Reads a cluster SKU resource. |
+> | Microsoft.Kusto/Clusters/SKUs/PrivateEndpointConnectionProxyValidation/action | Validates a private endpoint connection proxy |
> | Microsoft.Kusto/Locations/CheckNameAvailability/action | Checks resource name availability. | > | Microsoft.Kusto/Locations/GetNetworkPolicies/action | Gets Network Intent Policies | > | Microsoft.Kusto/locations/operationresults/read | Reads operations resources |
Azure service: [Azure Synapse Analytics](../synapse-analytics/index.yml)
> | Microsoft.Synapse/workspaces/kustoPools/PrincipalAssignments/read | Reads a Cluster principal assignments resource. | > | Microsoft.Synapse/workspaces/kustoPools/PrincipalAssignments/write | Writes a Cluster principal assignments resource. | > | Microsoft.Synapse/workspaces/kustoPools/PrincipalAssignments/delete | Deletes a Cluster principal assignments resource. |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/read | Reads a private endpoint connection proxy |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/write | Writes a private endpoint connection proxy |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateEndpointConnectionProxies/delete | Deletes a private endpoint connection proxy |
+> | Microsoft.Synapse/workspaces/kustoPools/PrivateLinkResources/read | Reads private link resources |
> | Microsoft.Synapse/workspaces/libraries/read | Read Library Artifacts | > | Microsoft.Synapse/workspaces/managedIdentitySqlControlSettings/write | Update Managed Identity SQL Control Settings on the workspace | > | Microsoft.Synapse/workspaces/managedIdentitySqlControlSettings/read | Get Managed Identity SQL Control Settings |
Azure service: [Azure Bot Service](/azure/bot-service/)
> | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource | > | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for &lt;Name of the resource&gt; | > | Microsoft.BotService/botServices/connections/providers/Microsoft.Insights/metricDefinitions/read | Creates or updates the diagnostic setting for the resource |
+> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/read | Read a connection proxy resource |
+> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/write | Write a connection proxy resource |
+> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/delete | Delete a connection proxy resource |
+> | Microsoft.BotService/botServices/privateEndpointConnectionProxies/validate/action | Validate a connection proxy resource |
+> | Microsoft.BotService/botServices/privateEndpointConnections/read | Read a Private Endpoint Connections Resource |
+> | Microsoft.BotService/botServices/privateEndpointConnections/write | Write a Private Endpoint Connections Resource |
+> | Microsoft.BotService/botServices/privateEndpointConnections/delete | Delete a Private Endpoint Connections Resource |
+> | Microsoft.BotService/botServices/privateLinkResources/read | Read a Private Links Resource |
> | Microsoft.BotService/botServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the resource | > | Microsoft.BotService/botServices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the resource | > | Microsoft.BotService/botServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for &lt;Name of the resource&gt; |
Azure service: [Azure Bot Service](/azure/bot-service/)
> | Microsoft.BotService/listauthserviceproviders/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for &lt;Name of the resource&gt; | > | Microsoft.BotService/listauthserviceproviders/providers/Microsoft.Insights/metricDefinitions/read | Creates or updates the diagnostic setting for the resource | > | Microsoft.BotService/locations/operationresults/read | Read the status of an asynchronous operation |
+> | Microsoft.BotService/operationresults/read | Read the status of an asynchronous operation |
> | Microsoft.BotService/Operations/read | Read the operations for all resource types | ### Microsoft.CognitiveServices
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/ComputerVision/detect/action | This operation Performs object detection on the specified image. | > | Microsoft.CognitiveServices/accounts/ComputerVision/models/read | This operation returns the list of domain-specific models that are supported by the Computer Vision API. Currently, the API supports following domain-specific models: celebrity recognizer, landmark recognizer. | > | Microsoft.CognitiveServices/accounts/ComputerVision/models/analyze/action | This operation recognizes content within an image by applying a domain-specific model.<br> The list of domain-specific models that are supported by the Computer Vision API can be retrieved using the /models GET request.<br> Currently, the API provides following domain-specific models: celebrities, landmarks. |
-> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyze/action | Use this interface to perform a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.<br>It can handle hand-written, printed or mixed documents.<br>When you use the Read interface, the response contains a header called 'Operation-Location'.<br>The 'Operation-Location' header contains the URL that you must use for your Get Read Result operation to access OCR results.* |
+> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyze/action | Use this interface to perform a Read operation, employing the state-of-the-art Optical Character Recognition (OCR) algorithms optimized for text-heavy documents.<br>It can handle hand-written, printed or mixed documents.<br>When you use the Read interface, the response contains a header called 'Operation-Location'.<br>The 'Operation-Location' header contains the URL that you must use for your Get Read Result operation to access OCR results.** |
> | Microsoft.CognitiveServices/accounts/ComputerVision/read/analyzeresults/read | Use this interface to retrieve the status and OCR result of a Read operation. The URL containing the 'operationId' is returned in the Read operation 'Operation-Location' response header.* | > | Microsoft.CognitiveServices/accounts/ComputerVision/read/core/asyncbatchanalyze/action | Use this interface to get the result of a Batch Read File operation, employing the state-of-the-art Optical Character | > | Microsoft.CognitiveServices/accounts/ComputerVision/read/operations/read | This interface is used for getting OCR results of Read operation. The URL to this interface should be retrieved from <b>"Operation-Location"</b> field returned from Batch Read File interface. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/alerts/query/action | Query alerts under anomaly alerting configuration | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/alerts/anomalies/read | Query anomalies under a specific alert | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/alert/anomaly/configurations/alerts/incidents/read | Query incidents under a specific alert |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/credentials/write | Create or update a new data source credential |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/credentials/delete | Delete a data source credential |
+> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/credentials/read | Get a data source credential or list all credentials |
> | Microsoft.CognitiveServices/accounts/MetricsAdvisor/datafeeds/write | Create or update a data feed. | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/datafeeds/delete | Delete a data feed | > | Microsoft.CognitiveServices/accounts/MetricsAdvisor/datafeeds/read | Get a data feed by its id or list all data feeds |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/NewsSearch/categorysearch/action | Returns news for a provided category. | > | Microsoft.CognitiveServices/accounts/NewsSearch/search/action | Get news articles relevant for a given query. | > | Microsoft.CognitiveServices/accounts/NewsSearch/trendingtopics/action | Get trending topics identified by Bing. These are the same topics shown in the banner at the bottom of the Bing home page. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/engines/read | Read engine information. |
+> | Microsoft.CognitiveServices/accounts/OpenAI/engines/completions/action | Create a completion from a chosen model |
+> | Microsoft.CognitiveServices/accounts/OpenAI/engines/search/action | Search for the most relevant documents using the current engine. |
> | Microsoft.CognitiveServices/accounts/Personalizer/rank/action | A personalization rank request. | > | Microsoft.CognitiveServices/accounts/Personalizer/evaluations/action | Submit a new evaluation. | > | Microsoft.CognitiveServices/accounts/Personalizer/configurations/client/action | Get the client configuration. |
Azure service: [Machine Learning Service](../machine-learning/index.yml)
> | Microsoft.MachineLearningServices/workspaces/datastores/read | Gets datastores in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/datastores/write | Creates or updates datastores in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/datastores/delete | Deletes datastores in Machine Learning Services Workspace(s) |
+> | Microsoft.MachineLearningServices/workspaces/diagnose/read | Diagnose setup problems of Machine Learning Services Workspace |
> | Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/read | Gets published pipelines and pipeline endpoints in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/write | Creates or updates published pipelines and pipeline endpoints in Machine Learning Services Workspace(s) | > | Microsoft.MachineLearningServices/workspaces/environments/read | Gets environments in Machine Learning Services Workspace(s) |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/diagnostics/read | Lists all diagnostics of the API Management service instance. or Gets the details of the Diagnostic specified by its identifier. | > | Microsoft.ApiManagement/service/diagnostics/write | Creates a new Diagnostic or updates an existing one. or Updates the details of the Diagnostic specified by its identifier. | > | Microsoft.ApiManagement/service/diagnostics/delete | Deletes the specified Diagnostic. |
-> | Microsoft.ApiManagement/service/eventGridFilters/read | Get Event Grid Filters |
> | Microsoft.ApiManagement/service/eventGridFilters/write | Set Event Grid Filters | > | Microsoft.ApiManagement/service/eventGridFilters/delete | Delete Event Grid Filters | > | Microsoft.ApiManagement/service/gateways/read | Lists a collection of gateways registered with service instance. or Gets the details of the Gateway specified by its identifier. |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/portalSettings/read | Lists a collection of portal settings. or Get Sign In Settings for the Portal or Get Sign Up Settings for the Portal or Get Delegation Settings for the Portal. | > | Microsoft.ApiManagement/service/portalSettings/write | Update Sign-In settings. or Create or Update Sign-In settings. or Update Sign Up settings or Update Sign Up settings or Update Delegation settings. or Create or Update Delegation settings. | > | Microsoft.ApiManagement/service/portalSettings/listSecrets/action | Gets validation key of portal delegation settings. or Get media content blob container uri. |
+> | Microsoft.ApiManagement/service/privateEndpointConnections/read | Get Private Endpoint Connections |
+> | Microsoft.ApiManagement/service/privateEndpointConnections/write | Approve Or Reject Private Endpoint Connections |
+> | Microsoft.ApiManagement/service/privateEndpointConnections/delete | Delete Private Endpoint Connections |
> | Microsoft.ApiManagement/service/privateLinkResources/read | Get Private Link Group resources | > | Microsoft.ApiManagement/service/products/read | Lists a collection of products in the specified service instance. or Gets the details of the product specified by its identifier. | > | Microsoft.ApiManagement/service/products/write | Creates or Updates a product. or Update existing product details. |
Azure service: [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/installUpdates/action | Install Updates on device | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/uploadCertificate/action | Upload certificate for device registration | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/generateCertificate/action | ArmApiDesc_action_generateCertificate_dataBoxEdgeDevices |
-> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticSettings/action | ArmApiDesc_action_diagnosticSettings_dataBoxEdgeDevices |
+> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticProactiveLogCollectionSettings/action | ArmApiDesc_action_diagnosticProactiveLogCollectionSettings_dataBoxEdgeDevices |
+> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticRemoteSupportSettings/action | ArmApiDesc_action_diagnosticRemoteSupportSettings_dataBoxEdgeDevices |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/triggerSupportPackage/action | ArmApiDesc_action_triggerSupportPackage_dataBoxEdgeDevices | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/alerts/read | Lists or gets the alerts | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/alerts/read | Lists or gets the alerts |
Azure service: [Azure Stack Edge](../databox-online/azure-stack-edge-overview.md
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/bandwidthSchedules/write | Creates or updates the bandwidth schedules | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/bandwidthSchedules/delete | Deletes the bandwidth schedules | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/bandwidthSchedules/operationResults/read | Lists or gets the operation result |
-> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticSettings/read | Lists or gets the ArmApiRes_diagnosticSettings |
-> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticSettings/operationResults/read | Lists or gets the operation result |
+> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticProactiveLogCollectionSettings/operationResults/read | Lists or gets the operation result |
+> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/diagnosticRemoteSupportSettings/operationResults/read | Lists or gets the operation result |
> | Microsoft.DataBoxEdge/dataBoxEdgeDevices/jobs/read | Lists or gets the jobs | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/networkSettings/read | Lists or gets the Device network settings | > | Microsoft.DataBoxEdge/dataBoxEdgeDevices/nodes/read | Lists or gets the nodes |
Azure service: [Event Grid](../event-grid/index.yml)
> | Microsoft.EventGrid/partnerNamespaces/delete | Delete a partner namespace | > | Microsoft.EventGrid/partnerNamespaces/listKeys/action | List keys for a partner namespace | > | Microsoft.EventGrid/partnerNamespaces/regenerateKey/action | Regenerate key for a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/PrivateEndpointConnectionsApproval/action | Approve PrivateEndpointConnections for partner namespaces |
> | Microsoft.EventGrid/partnerNamespaces/eventChannels/read | Read an event channel | > | Microsoft.EventGrid/partnerNamespaces/eventChannels/write | Create or update an event channel | > | Microsoft.EventGrid/partnerNamespaces/eventChannels/delete | Delete an event channel |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/validate/action | Validate PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/read | Read PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/write | Write PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/delete | Delete PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/read | Read PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/write | Write PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/delete | Delete PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateLinkResources/read | Read PrivateLinkResources for partner namespaces |
> | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for partner namespaces | > | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for partner namespaces | > | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/logDefinitions/read | Allows access to diagnostic logs |
Azure service: [Event Grid](../event-grid/index.yml)
> | Microsoft.EventGrid/topictypes/read | Read a topictype | > | Microsoft.EventGrid/topictypes/eventSubscriptions/read | List global event subscriptions by topic type | > | Microsoft.EventGrid/topictypes/eventtypes/read | Read eventtypes supported by a topictype |
+> | **DataAction** | **Description** |
+> | Microsoft.EventGrid/events/send/action | Send events to topics |
### Microsoft.Logic
Azure service: [Azure Migrate](../migrate/migrate-services-overview.md)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
+> | Microsoft.Migrate/register/action | Subscription Registration Action |
> | Microsoft.Migrate/register/action | Registers Subscription with Microsoft.Migrate resource provider | > | Microsoft.Migrate/unregister/action | Unregisters Subscription with Microsoft.Migrate resource provider | > | Microsoft.Migrate/assessmentprojects/read | Gets the properties of assessment project |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | **DataAction** | **Description** | > | Microsoft.Insights/DataCollectionRules/Data/Write | Send data to a data collection rule | > | Microsoft.Insights/Metrics/Write | Write metrics |
+> | Microsoft.Insights/Telemetry/Write | Write telemetry |
### Microsoft.OperationalInsights
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/MAWindowsDeploymentStatusNRT/read | Read data from the MAWindowsDeploymentStatusNRT table | > | Microsoft.OperationalInsights/workspaces/query/MAWindowsSysReqInstanceReadiness/read | Read data from the MAWindowsSysReqInstanceReadiness table | > | Microsoft.OperationalInsights/workspaces/query/McasShadowItReporting/read | Read data from the McasShadowItReporting table |
+> | Microsoft.OperationalInsights/workspaces/query/MCCEventLogs/read | Read data from the MCCEventLogs table |
+> | Microsoft.OperationalInsights/workspaces/query/MCVPOperationLogs/read | Read data from the MCVPOperationLogs table |
> | Microsoft.OperationalInsights/workspaces/query/MicrosoftAzureBastionAuditLogs/read | Read data from the MicrosoftAzureBastionAuditLogs table | > | Microsoft.OperationalInsights/workspaces/query/MicrosoftDataShareReceivedSnapshotLog/read | Read data from the MicrosoftDataShareReceivedSnapshotLog table | > | Microsoft.OperationalInsights/workspaces/query/MicrosoftDataShareSentSnapshotLog/read | Read data from the MicrosoftDataShareSentSnapshotLog table |
Azure service: [Azure Monitor](../azure-monitor/index.yml)
> | Microsoft.OperationalInsights/workspaces/query/SqlAtpStatus/read | Read data from the SqlAtpStatus table | > | Microsoft.OperationalInsights/workspaces/query/SqlDataClassification/read | Read data from the SqlDataClassification table | > | Microsoft.OperationalInsights/workspaces/query/SQLQueryPerformance/read | Read data from the SQLQueryPerformance table |
+> | Microsoft.OperationalInsights/workspaces/query/SQLSecurityAuditEvents/read | Read data from the SQLSecurityAuditEvents table |
> | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentResult/read | Read data from the SqlVulnerabilityAssessmentResult table | > | Microsoft.OperationalInsights/workspaces/query/SqlVulnerabilityAssessmentScanStatus/read | Read data from the SqlVulnerabilityAssessmentScanStatus table | > | Microsoft.OperationalInsights/workspaces/query/StorageBlobLogs/read | Read data from the StorageBlobLogs table |
Azure service: [Azure Advisor](../advisor/index.yml)
> [!div class="mx-tableFixed"] > | Action | Description | > | | |
-> | Microsoft.Advisor/generateRecommendations/action | Gets generate recommendations status |
-> | Microsoft.Advisor/register/action | Registers the subscription for the Microsoft Advisor |
-> | Microsoft.Advisor/unregister/action | Unregisters the subscription for the Microsoft Advisor |
-> | Microsoft.Advisor/configurations/read | Get configurations |
-> | Microsoft.Advisor/configurations/write | Creates/updates configuration |
-> | Microsoft.Advisor/generateRecommendations/read | Gets generate recommendations status |
-> | Microsoft.Advisor/metadata/read | Get Metadata |
-> | Microsoft.Advisor/operations/read | Gets the operations for the Microsoft Advisor |
-> | Microsoft.Advisor/recommendations/read | Reads recommendations |
-> | Microsoft.Advisor/recommendations/available/action | New recommendation is available in Microsoft Advisor |
-> | Microsoft.Advisor/recommendations/suppressions/read | Gets suppressions |
-> | Microsoft.Advisor/recommendations/suppressions/write | Creates/updates suppressions |
-> | Microsoft.Advisor/recommendations/suppressions/delete | Deletes suppression |
-> | Microsoft.Advisor/suppressions/read | Gets suppressions |
-> | Microsoft.Advisor/suppressions/write | Creates/updates suppressions |
-> | Microsoft.Advisor/suppressions/delete | Deletes suppression |
+> | Microsoft.Advisor/advisorScore/read | Gets the score data for given subscription |
### Microsoft.Authorization
Azure service: [Azure Policy](../governance/policy/overview.md), [Azure RBAC](ov
> | Microsoft.Authorization/roleAssignments/read | Get information about a role assignment. | > | Microsoft.Authorization/roleAssignments/write | Create a role assignment at the specified scope. | > | Microsoft.Authorization/roleAssignments/delete | Delete a role assignment at the specified scope. |
+> | Microsoft.Authorization/roleAssignmentScheduleInstances/read | Gets the role assignment schedule instances at given scope. |
+> | Microsoft.Authorization/roleAssignmentScheduleRequests/read | Gets the role assignment schedule requests at given scope. |
+> | Microsoft.Authorization/roleAssignmentScheduleRequests/write | Creates a role assignment schedule request at given scope. |
+> | Microsoft.Authorization/roleAssignmentScheduleRequests/cancel/action | Cancels a pending role assignment schedule request. |
+> | Microsoft.Authorization/roleAssignmentSchedules/read | Gets the role assignment schedules at given scope. |
> | Microsoft.Authorization/roleDefinitions/read | Get information about a role definition. | > | Microsoft.Authorization/roleDefinitions/write | Create or update a custom role definition with specified permissions and assignable scopes. | > | Microsoft.Authorization/roleDefinitions/delete | Delete the specified custom role definition. |
+> | Microsoft.Authorization/roleEligibilityScheduleInstances/read | Gets the role eligibility schedule instances at given scope. |
+> | Microsoft.Authorization/roleEligibilityScheduleRequests/read | Gets the role eligibility schedule requests at given scope. |
+> | Microsoft.Authorization/roleEligibilityScheduleRequests/write | Creates a role eligibility schedule request at given scope. |
+> | Microsoft.Authorization/roleEligibilityScheduleRequests/cancel/action | Cancels a pending role eligibility schedule request. |
+> | Microsoft.Authorization/roleEligibilitySchedules/read | Gets the role eligibility schedules at given scope. |
+> | Microsoft.Authorization/roleManagementPolicies/read | Get Role management policies |
+> | Microsoft.Authorization/roleManagementPolicies/write | Update a role management policy |
+> | Microsoft.Authorization/roleManagementPolicyAssignments/read | Get role management policy assignments |
### Microsoft.Automation
Azure service: [Cost Management](../cost-management-billing/index.yml)
> | Microsoft.CostManagement/views/delete | Delete saved views. | > | Microsoft.CostManagement/views/write | Update view. |
+### Microsoft.DataProtection
+
+Azure service: Microsoft.DataProtection
+
+> [!div class="mx-tableFixed"]
+> | Action | Description |
+> | | |
+> | Microsoft.DataProtection/backupVaults/write | Create BackupVault operation creates an Azure resource of type 'Backup Vault' |
+> | Microsoft.DataProtection/backupVaults/read | The Get BackupVault operation gets an Azure resource of type 'Backup Vault' |
+> | Microsoft.DataProtection/backupVaults/delete | The Delete BackupVault operation deletes an Azure resource of type 'Backup Vault' |
+> | Microsoft.DataProtection/backupVaults/read | Gets list of Backup Vaults in a Resource Group |
+> | Microsoft.DataProtection/backupVaults/read | Gets list of Backup Vaults in a Subscription |
+> | Microsoft.DataProtection/backupVaults/validateForBackup/action | Validates for backup of Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/write | Creates a Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/delete | Deletes the Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/read | Returns details of the Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/read | Returns all Backup Instances |
+> | Microsoft.DataProtection/backupVaults/backupInstances/backup/action | Performs Backup on the Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/sync/action | Sync operation retries last failed operation on backup instance to bring it to a valid state. |
+> | Microsoft.DataProtection/backupVaults/backupInstances/stopProtection/action | Stop Protection operation stops both backup and retention schedules of backup instance. Existing data will be retained forever. |
+> | Microsoft.DataProtection/backupVaults/backupInstances/suspendBackups/action | Suspend Backups operation stops only backups of backup instance. Retention activities will continue and hence data will be retained as per policy. |
+> | Microsoft.DataProtection/backupVaults/backupInstances/resumeProtection/action | Resume protection of a ProtectionStopped BI. |
+> | Microsoft.DataProtection/backupVaults/backupInstances/resumeBackups/action | Resume Backups for a BackupsSuspended BI. |
+> | Microsoft.DataProtection/backupVaults/backupInstances/validateRestore/action | Validates for Restore of the Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/restore/action | Triggers restore on the Backup Instance |
+> | Microsoft.DataProtection/backupVaults/backupInstances/findRestorableTimeRanges/action | Finds Restorable Time Ranges |
+> | Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read | Returns details of the Recovery Point |
+> | Microsoft.DataProtection/backupVaults/backupInstances/recoveryPoints/read | Returns all Recovery Points |
+> | Microsoft.DataProtection/backupVaults/backupPolicies/write | Creates Backup Policy |
+> | Microsoft.DataProtection/backupVaults/backupPolicies/delete | Deletes the Backup Policy |
+> | Microsoft.DataProtection/backupVaults/backupPolicies/read | Returns details of the Backup Policy |
+> | Microsoft.DataProtection/backupVaults/backupPolicies/read | Returns all Backup Policies |
+> | Microsoft.DataProtection/backupVaults/operationResults/read | Gets Operation Result of a Patch Operation for a Backup Vault |
+> | Microsoft.DataProtection/locations/getBackupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | Microsoft.DataProtection/locations/checkNameAvailability/action | Checks if the requested BackupVault Name is Available |
+> | Microsoft.DataProtection/locations/operationResults/read | Returns Backup Operation Result for Backup Vault. |
+> | Microsoft.DataProtection/locations/operationStatus/read | Returns Backup Operation Status for Backup Vault. |
+> | Microsoft.DataProtection/providers/operations/read | Operation returns the list of Operations for a Resource Provider |
+> | Microsoft.DataProtection/subscriptions/providers/resourceGuards/read | Gets list of ResourceGuards in a Subscription |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/write | Create ResourceGuard operation creates an Azure resource of type 'ResourceGuard' |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/read | The Get ResourceGuard operation gets an object representing the Azure resource of type 'ResourceGuard' |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/delete | The Delete ResourceGuard operation deletes the specified Azure resource of type 'ResourceGuard' |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/read | Gets list of ResourceGuards in a Resource Group |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/write | Update ResourceGuard operation updates an Azure resource of type 'ResourceGuard' |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/{operationName}/read | Gets ResourceGuard operation request info |
+> | Microsoft.DataProtection/subscriptions/resourceGroups/providers/resourceGuards/{operationName}/read | Gets ResourceGuard default operation request info |
+ ### Microsoft.Features Azure service: [Azure Resource Manager](../azure-resource-manager/index.yml)
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/removeDisks/action | Remove disks | > | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/ResolveHealthErrors/action | | > | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/failoverCancel/action | Failover Cancel |
+> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/updateAppliance/action | |
> | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/operationresults/read | Track the results of an asynchronous operation on the resource Protected Items | > | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/recoveryPoints/read | Read any Replication Recovery Points | > | Microsoft.RecoveryServices/vaults/replicationFabrics/replicationProtectionContainers/replicationProtectedItems/targetComputeSizes/read | Read any Target Compute Sizes |
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/index-similarity-and-scoring.md
Last updated 03/02/2021
# Similarity and scoring in Azure Cognitive Search
-This article describes the two similarity ranking algorithms in Azure Cognitive Search. It also introduces two related features: *scoring profiles* (criteria for adjusting a search score) and the *featuresMode* parameter (unpacks a search score to show more detail).
+This article describes the two similarity ranking algorithms used by Azure Cognitive Search to determine which matching documents are the most relevant to the query. It also introduces two related features: *scoring profiles* (criteria for adjusting a search score) and the *featuresMode* parameter (unpacks a search score to show more detail).
-A third semantic re-ranking algorithm is currently in public preview. For more information, start with [Semantic search overview](semantic-search-overview.md).
+> [!NOTE]
+> A third [semantic re-ranking algorithm](semantic-ranking.md) is currently in public preview. For more information, start with [Semantic search overview](semantic-search-overview.md).
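To make the scoring profile concept concrete, the following fragment is a minimal sketch of an index definition that boosts matches found in one field over another. The index name, field names, and weights are hypothetical, not taken from this article.

```json
{
  "name": "hotels-sample",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true },
    { "name": "hotelName", "type": "Edm.String", "searchable": true },
    { "name": "description", "type": "Edm.String", "searchable": true }
  ],
  "scoringProfiles": [
    {
      "name": "boost-description",
      "text": { "weights": { "hotelName": 2, "description": 5 } }
    }
  ]
}
```

A query can then reference the profile with `scoringProfile=boost-description`, and preview API versions accept `featuresMode=enabled` to return the per-field scoring detail mentioned above.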
## Similarity ranking algorithms
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
Title: Search over Azure SQL data
+ Title: Index data from Azure SQL
-description: Import data from Azure SQL Database or SQL Managed Instance using indexers, for full text search in Azure Cognitive Search. This article covers connections, indexer configuration, and data ingestion.
+description: Set up an Azure SQL indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.
ms.devlang: rest-api Previously updated : 07/12/2020 Last updated : 06/26/2021
-# Connect to and index Azure SQL content using an Azure Cognitive Search indexer
+# Index data from Azure SQL
-Before you can query an [Azure Cognitive Search index](search-what-is-an-index.md), you must populate it with your data. If the data lives in Azure SQL Database or SQL Managed Instance, an **Azure Cognitive Search indexer for Azure SQL Database** (or **Azure SQL indexer** for short) can automate the indexing process, which means less code to write and less infrastructure to care about.
+This article shows you how to configure an Azure SQL indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure SQL Database and Azure SQL managed instances.
-This article covers the mechanics of using [indexers](search-indexer-overview.md), but also describes features only available with Azure SQL Database or SQL Managed Instance (for example, integrated change tracking).
+This article covers the mechanics of using [indexers](search-indexer-overview.md), but also describes features only available with Azure SQL Database or SQL Managed Instance (for example, integrated change tracking).
-In addition to Azure SQL Database and SQL Managed Instance, Azure Cognitive Search provides indexers for [Azure Cosmos DB](search-howto-index-cosmosdb.md), [Azure Blob storage](search-howto-indexing-azure-blob-storage.md), and [Azure table storage](search-howto-indexing-azure-tables.md). To request support for other data sources, provide your feedback on the [Azure Cognitive Search feedback forum](https://feedback.azure.com/forums/263029-azure-search/).
+You can set up an Azure SQL indexer by using any of these clients:
-## Indexers and data sources
-
-A **data source** specifies which data to index, credentials for data access, and policies that efficiently identify changes in the data (new, modified, or deleted rows). It's defined as an independent resource so that it can be used by multiple indexers.
+* [Azure portal](https://ms.portal.azure.com)
+* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
+* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
-An **indexer** is a resource that connects a single data source with a targeted search index. An indexer is used in the following ways:
+This article uses the REST APIs.
-* Perform a one-time copy of the data to populate an index.
-* Update an index with changes in the data source on a schedule.
-* Run on-demand to update an index as needed.
+## Prerequisites
-A single indexer can only consume one table or view, but you can create multiple indexers if you want to populate multiple search indexes. For more information on concepts, see [Indexer Operations: Typical workflow](/rest/api/searchservice/Indexer-operations#typical-workflow).
+* Data originates from a single table or view. If the data is scattered across multiple tables, you can create a single view of the data. A drawback to using a view is that you won't be able to use SQL Server integrated change detection to refresh an index with incremental changes. For more information, see [Capturing Changed and Deleted Rows](#CaptureChangedRows) below.
-You can set up and configure an Azure SQL indexer using:
+* Data types must be compatible. Most, but not all, SQL types are supported in a search index. For a list, see [Mapping data types](#TypeMapping).
-* Import Data wizard in the [Azure portal](https://portal.azure.com)
-* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
-* Azure Cognitive Search [REST API](/rest/api/searchservice/indexer-operations)
+* Connections to a SQL Managed Instance must be over a public endpoint. For more information, see [Indexer connections through a public endpoint](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md).
-In this article, we'll use the REST API to create **indexers** and **data sources**.
+* Connections to SQL Server on an Azure virtual machine require manual setup of a security certificate. For more information, see [Indexer connections to a SQL Server on an Azure VM](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md).
-## When to use Azure SQL Indexer
-Depending on several factors relating to your data, the use of Azure SQL indexer may or may not be appropriate. If your data fits the following requirements, you can use Azure SQL indexer.
+Real-time data synchronization must not be an application requirement. An indexer can reindex your table at most every five minutes. If your data changes frequently, and those changes need to be reflected in the index within seconds or single minutes, we recommend using the [REST API](/rest/api/searchservice/AddUpdate-or-Delete-Documents) or [.NET SDK](search-get-started-dotnet.md) to push updated rows directly.
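As a point of reference, a push-model update is a single call to the Add, Update, or Delete Documents REST API. The following is a minimal sketch; the index name `myindex` and the field names are hypothetical.

```http
POST https://myservice.search.windows.net/indexes/myindex/docs/index?api-version=2020-06-30
Content-Type: application/json
api-key: admin-key

{
  "value": [
    {
      "@search.action": "mergeOrUpload",
      "id": "1001",
      "description": "Row updated in the database and pushed directly to the index"
    }
  ]
}
```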
-| Criteria | Details |
-|-||
-| Data originates from a single table or view | If the data is scattered across multiple tables, you can create a single view of the data. However, if you use a view, you wonΓÇÖt be able to use SQL Server integrated change detection to refresh an index with incremental changes. For more information, see [Capturing Changed and Deleted Rows](#CaptureChangedRows) below. |
-| Data types are compatible | Most but not all the SQL types are supported in an Azure Cognitive Search index. For a list, see [Mapping data types](#TypeMapping). |
-| Real-time data synchronization is not required | An indexer can reindex your table at most every five minutes. If your data changes frequently, and the changes need to be reflected in the index within seconds or single minutes, we recommend using the [REST API](/rest/api/searchservice/AddUpdate-or-Delete-Documents) or [.NET SDK](./search-get-started-dotnet.md) to push updated rows directly. |
-| Incremental indexing is possible | If you have a large data set and plan to run the indexer on a schedule, Azure Cognitive Search must be able to efficiently identify new, changed, or deleted rows. Non-incremental indexing is only allowed if you're indexing on demand (not on schedule), or indexing fewer than 100,000 rows. For more information, see [Capturing Changed and Deleted Rows](#CaptureChangedRows) below. |
+Incremental indexing is possible. If you have a large data set and plan to run the indexer on a schedule, Azure Cognitive Search must be able to efficiently identify new, changed, or deleted rows. Non-incremental indexing is only allowed if you're indexing on demand (not on schedule), or indexing fewer than 100,000 rows. For more information, see [Capturing Changed and Deleted Rows](#CaptureChangedRows) below.
-> [!NOTE]
-> Azure Cognitive Search supports SQL Server authentication only. If you require support for Azure Active Directory Password authentication, please vote for this [UserVoice suggestion](https://feedback.azure.com/forums/263029-azure-search/suggestions/33595465-support-azure-active-directory-password-authentica).
+Azure Cognitive Search supports SQL Server authentication, where the username and password are provided on the connection string. Alternatively, you can set up a managed identity and use Azure roles to omit credentials on the connection. For more information, see [Set up an indexer connection using a managed identity](search-howto-managed-identities-sql.md).
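For orientation, SQL Server authentication credentials appear in the connection string of the data source definition created in the next section (the full request body is elided in this excerpt). A minimal sketch, using hypothetical server, database, login, and table names:

```json
{
  "name": "myazuresqldatasource",
  "type": "azuresql",
  "credentials": {
    "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=mydatabase;User ID=mylogin;Password=<password>;Encrypt=true;Connection Timeout=30;"
  },
  "container": { "name": "mytable" }
}
```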
## Create an Azure SQL Indexer 1. Create the data source:
- ```
+ ```http
POST https://myservice.search.windows.net/datasources?api-version=2020-06-30 Content-Type: application/json api-key: admin-key
Depending on several factors relating to your data, the use of Azure SQL indexer
3. Create the indexer by giving it a name and referencing the data source and target index:
- ```
+ ```http
POST https://myservice.search.windows.net/indexers?api-version=2020-06-30 Content-Type: application/json api-key: admin-key
Depending on several factors relating to your data, the use of Azure SQL indexer
An indexer created in this way doesn't have a schedule. It automatically runs once when it's created. You can run it again at any time using a **run indexer** request:
-```
+```http
POST https://myservice.search.windows.net/indexers/myindexer/run?api-version=2020-06-30 api-key: admin-key ```
You may need to allow Azure services to connect to your database. See [Connectin
To monitor the indexer status and execution history (number of items indexed, failures, etc.), use an **indexer status** request:
-```
+```http
GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30 api-key: admin-key ``` The response should look similar to the following:
-```
+```json
{
- "\@odata.context":"https://myservice.search.windows.net/$metadata#Microsoft.Azure.Search.V2015_02_28.IndexerExecutionInfo",
+ "@odata.context":"https://myservice.search.windows.net/$metadata#Microsoft.Azure.Search.V2015_02_28.IndexerExecutionInfo",
"status":"running", "lastResult": { "status":"success",
Execution history contains up to 50 of the most recently completed executions, w
Additional information about the response can be found in [Get Indexer Status](/rest/api/searchservice/get-indexer-status) ## Run indexers on a schedule+ You can also configure the indexer to run periodically on a schedule. To do this, add the **schedule** property when creating or updating the indexer. The example below shows a PUT request to update the indexer:
-```
+```http
PUT https://myservice.search.windows.net/indexers/myindexer?api-version=2020-06-30 Content-Type: application/json api-key: admin-key
For more information about defining indexer schedules see [How to schedule index
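The request body for the PUT call above is elided in this excerpt. The schedule itself is a small object with an interval expressed as an ISO 8601 duration and an optional start time; a minimal sketch, assuming a two-hour interval and the hypothetical resource names used earlier:

```json
{
  "dataSourceName": "myazuresqldatasource",
  "targetIndexName": "myindex",
  "schedule": { "interval": "PT2H", "startTime": "2021-06-01T00:00:00Z" }
}
```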
Azure Cognitive Search uses **incremental indexing** to avoid having to reindex the entire table or view every time an indexer runs. Azure Cognitive Search provides two change detection policies to support incremental indexing. ### SQL Integrated Change Tracking Policy+ If your SQL database supports [change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), we recommend using **SQL Integrated Change Tracking Policy**. This is the most efficient policy. In addition, it allows Azure Cognitive Search to identify deleted rows without you having to add an explicit "soft delete" column to your table. #### Requirements
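The requirements and full example for this policy are elided from this excerpt. For orientation, the policy is attached to the data source definition as a single property; a minimal sketch of that fragment:

```json
"dataChangeDetectionPolicy": {
  "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
}
```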
search Search Howto Connecting Azure Sql Iaas To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md
Title: Azure SQL VM connection for search indexing
+ Title: Indexer connection to SQL Server on Azure VMs
description: Enable encrypted connections and configure the firewall to allow connections to SQL Server on an Azure virtual machine (VM) from an indexer on Azure Cognitive Search.
Last updated 03/19/2021
-# Configure a connection from an Azure Cognitive Search indexer to SQL Server on an Azure VM
+# Indexer connections to SQL Server on an Azure virtual machine
When configuring an [Azure SQL indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#faq) to extract content from a database on an Azure virtual machine, additional steps are required for secure connections.
search Search Howto Connecting Azure Sql Mi To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md
Title: Azure SQL Managed Instance connection for search indexing
+ Title: Indexer connection to SQL Managed Instances
description: Enable public endpoint to allow connections to SQL Managed Instances from an indexer on Azure Cognitive Search.
Previously updated : 11/04/2019 Last updated : 06/26/2021
-# Configure a connection from an Azure Cognitive Search indexer to SQL Managed Instance
+# Indexer connections to Azure SQL Managed Instance through a public endpoint
-As noted in [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#faq), creating indexers against **SQL Managed Instances** is supported by Azure Cognitive Search through the public endpoint.
+If you are setting up an Azure Cognitive Search indexer that connects to an Azure SQL managed instance, you will need to enable a public endpoint on the managed instance as a prerequisite. An indexer connects to a managed instance over a public endpoint.
-## Create Azure SQL Managed Instance with public endpoint
-Create a SQL Managed Instance with the **Enable public endpoint** option selected.
+This article provides basic steps that include collecting information necessary for data source configuration. For more information and methodologies, see [Configure public endpoint in Azure SQL Managed Instance](../azure-sql/managed-instance/public-endpoint-configure.md).
+
+## Enable a public endpoint
+
+For a new SQL Managed Instance, create the resource with the **Enable public endpoint** option selected.
![Enable public endpoint](media/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers/enable-public-endpoint.png "Enable public endpoint")
-## Enable Azure SQL Managed Instance public endpoint
-You can also enable public endpoint on an existing SQL Managed Instance under **Security** > **Virtual network** > **Public endpoint** > **Enable**.
+Alternatively, if the instance already exists, you can enable public endpoint on an existing SQL Managed Instance under **Security** > **Virtual network** > **Public endpoint** > **Enable**.
![Enable public endpoint using managed instance VNET](media/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers/mi-vnet.png "Enable public endpoint") ## Verify NSG rules+ Check that the Network Security Group has the correct **Inbound security rules** that allow connections from Azure services. ![NSG Inbound security rule](media/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers/nsg-rule.png "NSG Inbound security rule")
-> [!NOTE]
-> Indexers still require that SQL Managed Instance be configured with a public endpoint in order to read data.
-> However, you can choose to restrict the inbound access to that public endpoint by replacing the current rule (`public_endpoint_inbound`) with the following 2 rules:
->
-> * Allowing inbound access from the `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) ("SOURCE" = `AzureCognitiveSearch`, "NAME" = `cognitive_search_inbound`)
->
-> * Allowing inbound access from the IP address of the search service, which can be obtained by pinging its fully qualified domain name (eg., `<your-search-service-name>.search.windows.net`). ("SOURCE" = `IP address`, "NAME" = `search_service_inbound`)
->
-> For each of those 2 rules, set "PORT" = `3342`, "PROTOCOL" = `TCP`, "DESTINATION" = `Any`, "ACTION" = `Allow`
+## Restrict inbound access to the endpoint
+
+You can restrict inbound access to the public endpoint by replacing the current rule (`public_endpoint_inbound`) with the following two rules:
+
+* Allowing inbound access from the `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) ("SOURCE" = `AzureCognitiveSearch`, "NAME" = `cognitive_search_inbound`)
+
+* Allowing inbound access from the IP address of the search service, which can be obtained by pinging its fully qualified domain name (for example, `<your-search-service-name>.search.windows.net`). ("SOURCE" = `IP address`, "NAME" = `search_service_inbound`)
+
+For each rule, set "PORT" = `3342`, "PROTOCOL" = `TCP`, "DESTINATION" = `Any`, "ACTION" = `Allow`.
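
If you prefer to manage NSG rules programmatically, a rule such as `cognitive_search_inbound` can be created through the Azure Resource Manager REST API. The following is a minimal sketch only; the subscription, resource group, NSG name, priority, and API version shown here are placeholders or assumptions, not values from this article.

```http
PUT https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkSecurityGroups/<nsg-name>/securityRules/cognitive_search_inbound?api-version=2021-02-01
Content-Type: application/json
Authorization: Bearer <access-token>

{
  "properties": {
    "description": "Allow inbound access from the AzureCognitiveSearch service tag",
    "protocol": "Tcp",
    "sourceAddressPrefix": "AzureCognitiveSearch",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "3342",
    "access": "Allow",
    "priority": 200,
    "direction": "Inbound"
  }
}
```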
## Get public endpoint connection string
-Make sure you use the connection string for the **public endpoint** (port 3342, not port 1433).
+
+Copy the connection string to use in the search indexer's data source connection. Be sure to copy the connection string for the **public endpoint** (port 3342, not port 1433).
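
As a rough sketch (the instance name, DNS zone, database, and credentials below are placeholders), the public endpoint connection string is what goes into the `credentials` property when you later [create the data source](/rest/api/searchservice/create-data-source):

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "sql-mi-datasource",
  "type" : "azuresql",
  "credentials" : { "connectionString" : "Server=tcp:<your-managed-instance>.public.<dns-zone>.database.windows.net,3342;Initial Catalog=<your-database>;User ID=<user>;Password=<password>;" },
  "container" : { "name" : "<your-table-or-view>" }
}
```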
![Public endpoint connection string](media/search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers/mi-connection-string.png "Public endpoint connection string")

## Next steps
-With configuration out of the way, you can now specify a SQL Managed Instance as the data source for an Azure Cognitive Search indexer using either the portal or REST API. See [Connecting Azure SQL Database to Azure Cognitive Search using indexers](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) for more information.
+
+With configuration out of the way, you can now specify a [SQL Managed Instance as an indexer data source](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md).
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-azure-data-lake-storage.md
Title: How to configure an indexer to pull content and metadata from Azure Data Lake Storage Gen2
+ Title: Index data from Azure Data Lake Storage Gen2
-description: Learn how to index content and metadata in Azure Data Lake Storage Gen2.
+description: Set up an Azure Data Lake Storage Gen2 indexer to automate indexing of content and metadata for full text search in Azure Cognitive Search.
-
Last updated 05/17/2021
-# How to configure an indexer to pull content and metadata from Azure Data Lake Storage Gen2
+# Index data from Azure Data Lake Storage Gen2
-When setting up an Azure storage account, you have the option to enable [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md). This allows the collection of content in an account to be organized into a hierarchy of directories and nested subdirectories. By enabling hierarchical namespace, you enable [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
+This article shows you how to configure an Azure Data Lake Storage Gen2 indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Data Lake Storage Gen2.
-This article describes how to get started with indexing documents that are in Azure Data Lake Storage Gen2.
+Azure Data Lake Storage Gen2 is available through Azure Storage. When setting up an Azure storage account, you have the option to enable [hierarchical namespace](../storage/blobs/data-lake-storage-namespace.md). This allows the collection of content in an account to be organized into a hierarchy of directories and nested subdirectories. By enabling hierarchical namespace, you enable [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md).
## Supported access tiers
To run multiple indexers in parallel, scale out your search service by creating
Errors that commonly occur during indexing include unsupported content types, missing content, or oversized blobs.
-By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might want to indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Troubleshooting common indexer issues](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
+By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might want indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
### Respond to errors
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb-gremlin.md
Title: Search over Azure Cosmos DB Gremlin API data (preview)
+ Title: Index data from Gremlin API (preview)
-description: Import data from Azure Cosmos DB Gremlin API into a searchable index in Azure Cognitive Search. Indexers automate data ingestion for selected data sources like Azure Cosmos DB.
+description: Set up an Azure Cosmos DB indexer to automate indexing of Gremlin API content for full text search in Azure Cognitive Search.
- ms.devlang: rest-api
Last updated 04/11/2021
-# How to index data available through Cosmos DB Gremlin API using an indexer (preview)
+# Index data using Azure Cosmos DB Gremlin API
> [!IMPORTANT]
-> The Cosmos DB Gremlin API indexer is currently in preview. Preview functionality is provided without a service level agreement and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> You can request access to the preview by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
-> For this preview, we recommend using the [REST API version 2020-06-30-Preview](search-api-preview.md). There is currently limited portal support and no .NET SDK support.
+> The Cosmos DB Gremlin API indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
-> [!WARNING]
-> In order for Azure Cognitive Search to index data in Cosmos DB through the Gremlin API, [Cosmos DB's own indexing](../cosmos-db/index-overview.md) must also be enabled and set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration for Cosmos DB. Azure Cognitive Search indexing will not work without Cosmos DB indexing already enabled.
+This article shows you how to configure an Azure Cosmos DB indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Cosmos DB using the Gremlin API.
-[Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist.
+Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist and contain data.
-This article shows you how to configure Azure Cognitive Search to index content from Azure Cosmos DB using the Gremlin API. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB using the Gremlin API.
+## Prerequisites
+
+In order for Azure Cognitive Search to index data in Cosmos DB through the Gremlin API, [Cosmos DB's own indexing](../cosmos-db/index-overview.md) must also be enabled and set to [Consistent](../cosmos-db/index-policy.md#indexing-mode). This is the default configuration for Cosmos DB. Azure Cognitive Search indexing will not work without Cosmos DB indexing already enabled.
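
After access to the preview is enabled, the data source definition you send to the preview REST API resembles the following sketch. The account, key, database, and graph names are placeholders, and the `ApiKind=Gremlin` setting shown in the connection string is an assumption based on how the preview distinguishes the Gremlin API from other Cosmos DB APIs.

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name": "cosmosdb-gremlin-datasource",
  "type": "cosmosdb",
  "credentials": {
    "connectionString": "AccountEndpoint=https://<cosmos-account>.documents.azure.com;AccountKey=<account-key>;Database=<database>;ApiKind=Gremlin;"
  },
  "container": { "name": "<graph-name>" }
}
```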
## Get started
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-cosmosdb.md
Title: Search over Azure Cosmos DB data using SQL, MongoDB, or Cassandra API
+ Title: Index data from Azure Cosmos DB
-description: Import data from Azure Cosmos DB into a searchable index in Azure Cognitive Search. Indexers automate data ingestion for selected data sources like Azure Cosmos DB.
+description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how to index data using SQL, MongoDB, or Cassandra API protocols.
- Last updated 07/11/2020
-# How to index data available through Cosmos DB SQL, MongoDB, or Cassandra API using an indexer in Azure Cognitive Search
+# Index data from Azure Cosmos DB using SQL, MongoDB, or Cassandra APIs
> [!IMPORTANT]
> SQL API is generally available.
-> MongoDB API and Cassandra API support are currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> You can sign up for access by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
-> [REST API preview versions](search-api-preview.md) provide these features. There is currently limited portal support, and no .NET SDK support.
-
-> [!WARNING]
-> Only Cosmos DB collections with an [indexing policy](../cosmos-db/index-policy.md) set to [Consistent](../cosmos-db/index-policy.md#indexing-mode) are supported by Azure Cognitive Search. Indexing collections with a Lazy indexing policy is not recommended and may result in missing data. Collections with indexing disabled are not supported.
+> MongoDB API and Cassandra API support are currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview), and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to access your data. There is currently limited portal support, and no .NET SDK support.
This article shows you how to configure an Azure Cosmos DB [indexer](search-indexer-overview.md) to extract content and make it searchable in Azure Cognitive Search. This workflow creates an Azure Cognitive Search index and loads it with existing text extracted from Azure Cosmos DB. Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Azure Cognitive Search indexing](search-what-is-an-index.md) are distinct operations, unique to each service. Before you start Azure Cognitive Search indexing, your Azure Cosmos DB database must already exist and contain data.
-The Cosmos DB indexer in Azure Cognitive Search can crawl [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-items) accessed through different protocols.
+The Cosmos DB indexer in Azure Cognitive Search can crawl [Azure Cosmos DB items](../cosmos-db/account-databases-containers-items.md#azure-cosmos-items) accessed through the following protocols.
-+ For [SQL API](../cosmos-db/sql-query-getting-started.md), which is generally available, you can use the [portal](#cosmos-indexer-portal), [REST API](/rest/api/searchservice/indexer-operations), or [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer) to create the data source and indexer.
++ For [SQL API](../cosmos-db/sql-query-getting-started.md), which is generally available, you can use the [portal](#cosmos-indexer-portal), [REST API](/rest/api/searchservice/indexer-operations), [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer), or another Azure SDK to create the data source and indexer.

+ For [MongoDB API (preview)](../cosmos-db/mongodb-introduction.md), you can use either the [portal](#cosmos-indexer-portal) or the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source and indexer.

+ For [Cassandra API (preview)](../cosmos-db/cassandra-introduction.md), you can only use the [REST API version 2020-06-30-Preview](search-api-preview.md) to create the data source and indexer.

> [!Note]
> You can cast a vote on User Voice for the [Table API](https://feedback.azure.com/forums/263029-azure-search/suggestions/32759746-azure-search-should-be-able-to-index-cosmos-db-tab) if you'd like to see it supported in Azure Cognitive Search.
>
+## Prerequisites
+
+Only Cosmos DB collections with an [indexing policy](../cosmos-db/index-policy.md) set to [Consistent](../cosmos-db/index-policy.md#indexing-mode) are supported by Azure Cognitive Search. Indexing collections with a Lazy indexing policy is not recommended and may result in missing data. Collections with indexing disabled are not supported.
+
<a name="cosmos-indexer-portal"></a>

## Use the portal
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-mysql.md
Title: Connect to and index Azure MySQL content using an Azure Cognitive Search indexer (preview)
+ Title: Index data from Azure MySQL (preview)
-description: Import data from Azure MySQL into a searchable index in Azure Cognitive Search. Indexers automate data ingestion for selected data sources like MySQL.
+description: Set up a search indexer to index data stored in Azure MySQL for full text search in Azure Cognitive Search.
Last updated 05/17/2021
-# Connect to and index Azure MySQL content using an Azure Cognitive Search indexer (preview)
+# Index data from Azure MySQL
> [!IMPORTANT]
-> MySQL support is currently in public preview. Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> You can request access to the previews by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
-> The [REST API version 2020-06-30-Preview](search-api-preview.md) provides this feature. There is currently no SDK support and no portal support.
+> MySQL support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently no SDK support and no portal support.
-The Azure Cognitive Search indexer for MySQL will crawl your MySQL database on Azure, extract searchable data, and index it in Azure Cognitive Search. The indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in Azure Cognitive Search.
+The Azure Cognitive Search indexer for MySQL will crawl your MySQL database on Azure, extract searchable data, and index it in Azure Cognitive Search. The indexer will take all changes, uploads, and deletes for your MySQL database and reflect these changes in your search index.
+
+You can set up an Azure MySQL indexer by using any of these clients:
+
+* [Azure portal](https://ms.portal.azure.com)
+* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
+* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+
+This article uses the REST APIs.
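
To give a sense of the request shape, a data source definition for Azure MySQL might look like the following sketch. This is an illustration only; the server, database, table, and credential values are placeholders.

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name" : "mysql-datasource",
  "type" : "azuremysql",
  "credentials" : { "connectionString" : "Server=<your-server>.mysql.database.azure.com;Port=3306;Database=<your-database>;Uid=<user>;Pwd=<password>;SslMode=Preferred;" },
  "container" : { "name" : "<your-table>" }
}
```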
## Create an Azure MySQL indexer
search Search Howto Index Sharepoint Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-index-sharepoint-online.md
Title: Configure a SharePoint Online indexer (preview)
+ Title: Index data from SharePoint Online (preview)
description: Set up a SharePoint Online indexer to automate indexing of document library content in Azure Cognitive Search. -
Last updated 03/01/2021
-# How to configure SharePoint Online indexing in Cognitive Search (preview)
+# Index data from SharePoint Online
> [!IMPORTANT]
-> SharePoint Online support is currently in a **gated public preview**. You can request access to the gated preview by filling out [this form](https://aka.ms/azure-cognitive-search/indexer-preview).
->
-> Preview functionality is provided without a service level agreement, and is not recommended for production workloads. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
->
-> The [REST API version 2020-06-30-Preview](search-api-preview.md) provides this feature. There is currently no portal or SDK support.
+> SharePoint Online support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). [Request access](https://aka.ms/azure-cognitive-search/indexer-preview) to this feature, and after access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support.
+
+This article describes how to use Azure Cognitive Search to index documents (such as PDFs, Microsoft Office documents, and several other common formats) stored in SharePoint Online document libraries into an Azure Cognitive Search index. First, it explains the basics of setting up and configuring the indexer. Then, it offers a deeper exploration of behaviors and scenarios you are likely to encounter.
> [!NOTE]
> SharePoint Online supports a granular authorization model that determines per-user access at the document level. The SharePoint Online indexer does not pull these permissions into the search index, and Cognitive Search does not support document-level authorization. When a document is indexed from SharePoint Online into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should investigate security filters to trim results of unauthorized content. For more information, see [Security trimming using Active Directory identities](search-security-trimming-for-azure-search-with-aad.md).
-This article describes how to use Azure Cognitive Search to index documents (such as PDFs, Microsoft Office documents, and several other common formats) stored in SharePoint Online document libraries into an Azure Cognitive Search index. First, it explains the basics of setting up and configuring the indexer. Then, it offers a deeper exploration of behaviors and scenarios you are likely to encounter.
## Functionality

An indexer in Azure Cognitive Search is a crawler that extracts searchable data and metadata from a data source. The SharePoint Online indexer will connect to your SharePoint Online site and index documents from one or more Document Libraries. The indexer provides the following functionality:

+ Index content from one or more SharePoint Online Document Libraries.
+ Index content from SharePoint Online Document Libraries that are in the same tenant as your Azure Cognitive Search service. The indexer will not work with SharePoint sites that are in a different tenant than your Azure Cognitive Search service.
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-blob-storage.md
Title: Configure a Blob indexer
+ Title: Index data from Azure Blob Storage
-description: Set up an Azure Blob indexer to automate indexing of blob content for full text search operations in Azure Cognitive Search.
+description: Set up an Azure Blob indexer to automate indexing of blob content for full text search operations and knowledge mining in Azure Cognitive Search.
Last updated 05/14/2021
-# How to configure blob indexing in Cognitive Search
+# Index data from Azure Blob Storage
-A blob indexer is used for ingesting content from Azure Blob Storage into a Cognitive Search index. Blob indexers are frequently used in [AI enrichment](cognitive-search-concept-intro.md), where an attached [skillset](cognitive-search-working-with-skillsets.md) adds image and natural language processing to create searchable content. But you can also use blob indexers without AI enrichment, to ingest content from text-based documents such as PDFs, Microsoft Office documents, and file formats.
+This article shows you how to configure an Azure blob indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content and metadata extracted from Azure Blob Storage.
-This article shows you how to configure a blob indexer for either scenario. If you're unfamiliar with indexer concepts, start with [Indexers in Azure Cognitive Search](search-indexer-overview.md) and [Create a search indexer](search-howto-create-indexers.md) before diving into blob indexing.
+Blob indexers are frequently used in [AI enrichment](cognitive-search-concept-intro.md), where an attached [skillset](cognitive-search-working-with-skillsets.md) adds image and natural language processing to create searchable content out of non-searchable content types in blob containers.
+
+This article shows you how to configure an Azure blob indexer for text-focused indexing. You can set up an Azure Blob Storage indexer by using any of these clients:
+
+* [Azure portal](https://ms.portal.azure.com)
+* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
+* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+
+This article uses the REST APIs.
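
For orientation, a minimal blob data source definition looks something like the following sketch (the storage account, key, container, and optional virtual directory values are placeholders):

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "blob-datasource",
  "type" : "azureblob",
  "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
  "container" : { "name" : "my-container", "query" : "<optional-virtual-directory>" }
}
```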
## Supported access tiers
Indexing blobs can be a time-consuming process. In cases where you have millions
Errors that commonly occur during indexing include unsupported content types, missing content, or oversized blobs.
-By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might want to indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Troubleshooting common indexer issues](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
+By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an image). You could use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might want indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
### Respond to errors
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-indexing-azure-tables.md
Title: Search over Azure Table Storage
+ Title: Index data from Azure Table Storage
-description: Learn how to index data stored in Azure Table Storage with an Azure Cognitive Search indexer.
+description: Set up a search indexer to index data stored in Azure Table Storage for full text search in Azure Cognitive Search.
ms.devlang: rest-api Previously updated : 07/11/2020 Last updated : 06/26/2021
-# How to index tables from Azure Table Storage with Azure Cognitive Search
+# Index data from Azure Table Storage
-This article shows how to use Azure Cognitive Search to index data stored in Azure Table Storage.
+This article shows you how to configure an Azure table indexer to extract content and make it searchable in Azure Cognitive Search. This workflow creates a search index on Azure Cognitive Search and loads it with existing content extracted from Azure Table Storage.
-## Set up Azure Table Storage indexing
-
-You can set up an Azure Table Storage indexer by using these resources:
+You can set up an Azure Table Storage indexer by using any of these clients:
* [Azure portal](https://ms.portal.azure.com)
* Azure Cognitive Search [REST API](/rest/api/searchservice/Indexer-operations)
-* Azure Cognitive Search [.NET SDK](/dotnet/api/overview/azure/search)
+* Azure Cognitive Search [.NET SDK](/dotnet/api/azure.search.documents.indexes.models.searchindexer)
+
+This article uses the REST APIs.
-Here we demonstrate the flow by using the REST API.
+## Configure an indexer
-### Step 1: Create a datasource
+### Step 1: Create a data source
-A datasource specifies which data to index, the credentials needed to access the data, and the policies that enable Azure Cognitive Search to efficiently identify changes in the data.
+[Create Data Source](/rest/api/searchservice/create-data-source) specifies which data to index, the credentials needed to access the data, and the policies that enable Azure Cognitive Search to efficiently identify changes in the data.
-For table indexing, the datasource must have the following properties:
+For table indexing, the data source must have the following properties:
- **name** is the unique name of the datasource within your search service.
- **type** must be `azuretable`.
For table indexing, the datasource must have the following properties:
> [!IMPORTANT]
> Whenever possible, use a filter on PartitionKey for better performance. Any other query does a full table scan, resulting in poor performance for large tables. See the [Performance considerations](#Performance) section.
-To create a datasource:
+Send the following request to create a data source:
```http
- POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "table-datasource",
- "type" : "azuretable",
- "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
- "container" : { "name" : "my-table", "query" : "PartitionKey eq '123'" }
- }
+POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "table-datasource",
+ "type" : "azuretable",
+ "credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
+ "container" : { "name" : "my-table", "query" : "PartitionKey eq '123'" }
+}
```
-For more information on the Create Datasource API, see [Create Datasource](/rest/api/searchservice/create-data-source).
<a name="Credentials"></a>

#### Ways to specify credentials ####

You can provide the credentials for the table in one of these ways:
For more information on storage shared access signatures, see [Using shared acce
> If you use shared access signature credentials, you will need to update the datasource credentials periodically with renewed signatures to prevent their expiration. If shared access signature credentials expire, the indexer fails with an error message similar to "Credentials provided in the connection string are invalid or have expired."

### Step 2: Create an index
-The index specifies the fields in a document, the attributes, and other constructs that shape the search experience.
-To create an index:
+[Create Index](/rest/api/searchservice/create-index) specifies the fields in a document, the attributes, and other constructs that shape the search experience.
+
+Send the following request to create an index:
```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "my-target-index",
- "fields": [
- { "name": "key", "type": "Edm.String", "key": true, "searchable": false },
- { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
- ]
- }
+POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "my-target-index",
+ "fields": [
+ { "name": "key", "type": "Edm.String", "key": true, "searchable": false },
+ { "name": "SomeColumnInMyTable", "type": "Edm.String", "searchable": true }
+ ]
+}
```
-For more information on creating indexes, see [Create Index](/rest/api/searchservice/create-index).
- ### Step 3: Create an indexer
-An indexer connects a datasource with a target search index and provides a schedule to automate the data refresh.
-After the index and datasource are created, you're ready to create the indexer:
+[Create Indexer](/rest/api/searchservice/create-indexer) connects a data source with a target search index and provides a schedule to automate the data refresh.
+
+After the index and data source are created, you're ready to create the indexer:
```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "table-indexer",
- "dataSourceName" : "table-datasource",
- "targetIndexName" : "my-target-index",
- "schedule" : { "interval" : "PT2H" }
- }
+POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "table-indexer",
+ "dataSourceName" : "table-datasource",
+ "targetIndexName" : "my-target-index",
+ "schedule" : { "interval" : "PT2H" }
+}
```
-This indexer runs every two hours. (The schedule interval is set to "PT2H".) To run an indexer every 30 minutes, set the interval to "PT30M". The shortest supported interval is five minutes. The schedule is optional; if omitted, an indexer runs only once when it's created. However, you can run an indexer on demand at any time.
-
-For more information on the Create Indexer API, see [Create Indexer](/rest/api/searchservice/create-indexer).
+This indexer runs every two hours. (The schedule interval is set to "PT2H".) To run an indexer every 30 minutes, set the interval to "PT30M". The shortest supported interval is five minutes. The schedule is optional; if omitted, an indexer runs only once when it's created. However, you can run an indexer on demand at any time. For more information about defining indexer schedules, see [Schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
-For more information about defining indexer schedules see [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).
+## Handle field name discrepancies
-## Deal with different field names
Sometimes, the field names in your existing index are different from the property names in your table. You can use field mappings to map the property names from the table to the field names in your search index. To learn more about field mappings, see [Azure Cognitive Search indexer field mappings bridge the differences between datasources and search indexes](search-indexer-field-mappings.md).

## Handle document keys

In Azure Cognitive Search, the document key uniquely identifies a document. Every search index must have exactly one key field of type `Edm.String`. The key field is required for each document that is being added to the index. (In fact, it's the only required field.)

Because table rows have a compound key, Azure Cognitive Search generates a synthetic field called `Key` that is a concatenation of partition key and row key values. For example, if a row's PartitionKey is `PK1` and RowKey is `RK1`, then the `Key` field's value is `PK1RK1`.
Because table rows have a compound key, Azure Cognitive Search generates a synth
> [!NOTE]
> The `Key` value may contain characters that are invalid in document keys, such as dashes. You can deal with invalid characters by using the `base64Encode` [field mapping function](search-indexer-field-mappings.md#base64EncodeFunction). If you do this, remember to also use URL-safe Base64 encoding when passing document keys in API calls such as Lookup.
>
->
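
As a sketch, the mapping could be added to the indexer created earlier in this article; the target field name `key` matches the key field in the earlier index example, and everything else is illustrative:

```http
PUT https://[service name].search.windows.net/indexers/table-indexer?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "table-indexer",
  "dataSourceName" : "table-datasource",
  "targetIndexName" : "my-target-index",
  "fieldMappings" : [
    { "sourceFieldName" : "Key", "targetFieldName" : "key", "mappingFunction" : { "name" : "base64Encode" } }
  ]
}
```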
## Incremental indexing and deletion detection

When you set up a table indexer to run on a schedule, it reindexes only new or updated rows, as determined by a row's `Timestamp` value. You don't have to specify a change detection policy. Incremental indexing is enabled for you automatically.

To indicate that certain documents must be removed from the index, you can use a soft delete strategy. Instead of deleting a row, add a property to indicate that it's deleted, and set up a soft deletion detection policy on the datasource. For example, the following policy considers that a row is deleted if the row has a property `IsDeleted` with the value `"true"`:

```http
- PUT https://[service name].search.windows.net/datasources?api-version=2020-06-30
- Content-Type: application/json
- api-key: [admin key]
-
- {
- "name" : "my-table-datasource",
- "type" : "azuretable",
- "credentials" : { "connectionString" : "<your storage connection string>" },
- "container" : { "name" : "table name", "query" : "<query>" },
- "dataDeletionDetectionPolicy" : { "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy", "softDeleteColumnName" : "IsDeleted", "softDeleteMarkerValue" : "true" }
- }
+PUT https://[service name].search.windows.net/datasources?api-version=2020-06-30
+Content-Type: application/json
+api-key: [admin key]
+
+{
+ "name" : "my-table-datasource",
+ "type" : "azuretable",
+ "credentials" : { "connectionString" : "<your storage connection string>" },
+ "container" : { "name" : "table name", "query" : "<query>" },
+ "dataDeletionDetectionPolicy" : { "@odata.type" : "#Microsoft.Azure.Search.SoftDeleteColumnDeletionDetectionPolicy", "softDeleteColumnName" : "IsDeleted", "softDeleteMarkerValue" : "true" }
+}
```

<a name="Performance"></a>

## Performance considerations

By default, Azure Cognitive Search uses the following query filter: `Timestamp >= HighWaterMarkValue`. Because Azure tables don't have a secondary index on the `Timestamp` field, this type of query requires a full table scan and is therefore slow for large tables.

Here are two possible approaches for improving table indexing performance. Both of these approaches rely on using table partitions:

- If your data can naturally be partitioned into several partition ranges, create a datasource and a corresponding indexer for each partition range. Each indexer now has to process only a specific partition range, resulting in better query performance. If the data that needs to be indexed has a small number of fixed partitions, even better: each indexer only does a partition scan. For example, to create a datasource for processing a partition range with keys from `000` to `100`, use a query like this:
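
The original example for that query isn't included in this change summary; as a sketch, a data source scoped to partition keys between `000` and `100` might look like the following (the data source name here is hypothetical):

```http
POST https://[service name].search.windows.net/datasources?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name" : "table-datasource-partition-000-100",
  "type" : "azuretable",
  "credentials" : { "connectionString" : "<your storage connection string>" },
  "container" : { "name" : "my-table", "query" : "PartitionKey ge '000' and PartitionKey lt '100'" }
}
```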
Here are two possible approaches for improving table indexing performance. Both
- Monitor indexer progress by using [Get Indexer Status API](/rest/api/searchservice/get-indexer-status), and periodically update the `<TimeStamp>` condition of the query based on the latest successful high-water-mark value.

- With this approach, if you need to trigger a complete reindexing, you need to reset the datasource query in addition to resetting the indexer.
+## See also
-## Help us make Azure Cognitive Search better
-If you have feature requests or ideas for improvements, submit them on our [UserVoice site](https://feedback.azure.com/forums/263029-azure-search/).
++ [Indexers in Azure Cognitive Search](search-indexer-overview.md)
++ [Create an indexer](search-howto-create-indexers.md)
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-monitor-indexers.md
If there were document-specific problems during the run, they will be listed in
Warnings are common with some types of indexers, and do not always indicate a problem. For example, indexers that use cognitive services can report warnings when image or PDF files don't contain any text to process.
-For more information about investigating indexer errors and warnings, see [Troubleshooting common indexer issues in Azure Cognitive Search](search-indexer-troubleshooting.md).
+For more information about investigating indexer errors and warnings, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md).
## Monitor using Get Indexer Status (REST API)
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-indexer-troubleshooting.md
Title: Troubleshoot common search indexer issues
+ Title: Indexer troubleshooting guidance
-description: Fix errors and common problems with indexers in Azure Cognitive Search, including data source connection, firewall, and missing documents.
+description: This article provides indexer problem and resolution guidance for cases when no error messages are returned from the search service.
Previously updated : 11/04/2019 Last updated : 06/27/2021
-# Troubleshooting common indexer issues in Azure Cognitive Search
+# Indexer troubleshooting guidance for Azure Cognitive Search
-Indexers can run into a number of issues when indexing data into Azure Cognitive Search. The main categories of failure include:
-
-* [Connecting to a data source or other resources](#connection-errors)
-* [Document processing](#document-processing-errors)
-* [Document ingestion to an index](#index-errors)
+Occasionally, indexers run into problems and there is no error to help with diagnosis. This article covers problems and potential resolutions when indexer results are unexpected and there is limited information to go on. If you have an error to investigate, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md) instead.
## Connection errors
Indexers can run into a number of issues when indexing data into Azure Cognitive
> > You can find out the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) by either using [Downloadable JSON files](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) or via the [Service Tag Discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api-public-preview). The IP address range is updated weekly.
-### Configure firewall rules
+### Firewall rules
-Azure Storage, CosmosDB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic and look like `The remote server returned an error: (403) Forbidden` or `Credentials provided in the connection string are invalid or have expired`.
+Azure Storage, Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic and look like `The remote server returned an error: (403) Forbidden` or `Credentials provided in the connection string are invalid or have expired`.
-There are 2 options for allowing indexers to access these resources in such an instance:
+There are two options for allowing indexers to access these resources in such an instance:
* Disable the firewall, by allowing access from **All Networks** (if feasible).
+
* Alternatively, you can allow access for the IP address of your search service and the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) in the firewall rules of your resource (IP address range restriction).

Details for configuring IP address range restrictions for each data source type can be found in the following links:
Details for configuring IP address range restrictions for each data source type
Azure functions (that could be used as a [Custom Web Api skill](cognitive-search-custom-skill-web-api.md)) also support [IP address restrictions](../azure-functions/ip-addresses.md#ip-address-restrictions). The list of IP addresses to configure would be the IP address of your search service and the IP address range of `AzureCognitiveSearch` service tag.
-Details for accessing data in SQL server on an Azure VM are outlined [here](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md)
+For more information about connecting to a virtual machine, see [Configure a connection to SQL Server on an Azure VM](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md).
### Configure network security group (NSG) rules
The `AzureCognitiveSearch` service tag can be directly used in the inbound [NSG
More details for accessing data in a SQL managed instance are outlined [here](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md)
-### CosmosDB "Indexing" isn't enabled
-
-Azure Cognitive Search has an implicit dependency on Cosmos DB indexing. If you turn off automatic indexing in Cosmos DB, Azure Cognitive Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal).
+## SharePoint Online Conditional Access policies
-### SharePoint Online Conditional Access policies
-
-When creating a SharePoint Online indexer you will go through a step that requires you to login to your AAD app after providing a device code. If you receive a message that says "Your sign-in was successful but your admin requires the device requesting access to be managed" the indexer is likely being blocked from accessing the SharePoint Online document library due to a [Conditional Access](https://review.docs.microsoft.com/azure/active-directory/conditional-access/overview) policy.
+When creating a SharePoint Online indexer, you will go through a step that requires you to sign in to your Azure AD app after providing a device code. If you receive a message that says `"Your sign-in was successful but your admin requires the device requesting access to be managed"`, the indexer is likely being blocked from accessing the SharePoint Online document library due to a [Conditional Access](../active-directory/conditional-access/overview.md) policy.
To update the policy to allow the indexer access to the document library, follow the below steps:
To update the policy to allow the indexer access to the document library, follow
} ```
-1. Back on the Conditional Access page in the Azure portal select **Named locations** from the menu on the left, then select **+ IP ranges location**. Give your new named location a name and add the IP ranges for your search service and indexer execution environments that you collected in the last two steps.
+1. Back on the Conditional Access page in Azure portal, select **Named locations** from the menu on the left, then select **+ IP ranges location**. Give your new named location a name and add the IP ranges for your search service and indexer execution environments that you collected in the last two steps.
* For your search service IP address you may need to add "/32" to the end of the IP address since it only accepts valid IP ranges. * Remember that for the indexer execution environment IP ranges, you only need to add the IP ranges for the region that your search service is in.
To update the policy to allow the indexer access to the document library, follow
1. Attempt to create the indexer again 1. Send an update request for the data source object that you created.
- 1. Resend the indexer create request. Use the new code to login, then send another indexer creation request after the successful login.
+ 1. Resend the indexer create request. Use the new code to sign in, then send another indexer creation request.
-## Document processing errors
+## Indexing unsupported document types
-### Unprocessable or unsupported documents
+If you are indexing content from Azure Blob Storage, and the container includes blobs of an [unsupported content type](search-howto-indexing-azure-blob-storage.md#SupportedFormats), the indexer will skip that document. In other cases, there may be problems with individual documents.
-The blob indexer [documents which document formats are explicitly supported.](search-howto-indexing-azure-blob-storage.md#SupportedFormats). Sometimes, a blob storage container contains unsupported documents. Other times there may be problematic documents. You can avoid stopping your indexer on these documents by [changing configuration options](search-howto-indexing-azure-blob-storage.md#DealingWithErrors):
+You can [set configuration options](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) to allow indexer processing to continue in the event of problems with individual documents.
-```
+```http
PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30 Content-Type: application/json api-key: [admin key]
api-key: [admin key]
} ```
-### Missing document content
+## Missing documents
+
+Indexers extract documents or rows from an external [data source](/rest/api/searchservice/create-data-source) and create *search documents* which are then indexed by the search service. Occasionally, a document that exists in data source fails to appear in a search index. This unexpected result can occur due to the following reasons:
+
+* The document was updated after the indexer was run. If your indexer is on a [schedule](/rest/api/searchservice/create-indexer#indexer-schedule), it will eventually rerun and pick up the document.
+* The indexer timed out before the document could be ingested. There are [maximum processing time limits](search-limits-quotas-capacity.md#indexer-limits) after which no documents will be processed. You can check indexer status in the portal or by calling [Get Indexer Status (REST API)](/rest/api/searchservice/get-indexer-status).
+* [Field mappings](/rest/api/searchservice/create-indexer#fieldmappings) or [AI enrichment](./cognitive-search-concept-intro.md) have changed the document and its articulation in the search index is different from what you expect.
+* [Change tracking](/rest/api/searchservice/create-data-source#data-change-detection-policies) values are erroneous or prerequisites are missing. If your high watermark value is a date set to a future time, then any documents that have a date less than this will be skipped by the indexer. You can understand your indexer's change tracking state using the 'initialTrackingState' and 'finalTrackingState' fields in the [indexer status](/rest/api/searchservice/get-indexer-status#indexer-execution-result). Indexers for Azure SQL and MySQL must have an index on the high water mark column of the source table, or queries used by the indexer may time out.
+
+> [!TIP]
+> If documents are missing, check the [query](/rest/api/searchservice/search-documents) you are using to make sure it isn't excluding the document in question. To query for a specific document, use the [Lookup Document REST API](/rest/api/searchservice/lookup-document).
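
A minimal lookup sketch follows; the index name and document key are placeholders:

```http
GET https://[service name].search.windows.net/indexes/[index name]/docs/[document key]?api-version=2020-06-30
api-key: [admin key]
```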
+
+## Missing content from Blob Storage
The blob indexer [finds and extracts text from blobs in a container](search-howto-indexing-azure-blob-storage.md#how-azure-search-indexes-blobs). Some problems with extracting text include: * The document only contains scanned images. PDF blobs that have non-text content, such as scanned images (JPGs), don't produce results in a standard blob indexing pipeline. If you have image content with text elements, you can use [cognitive search](cognitive-search-concept-image-scenarios.md) to find and extract the text.+ * The blob indexer is configured to only index metadata. To extract content, the blob indexer must be configured to [extract both content and metadata](search-howto-indexing-azure-blob-storage.md#PartsOfBlobToIndex):
-```
+```http
PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30 Content-Type: application/json api-key: [admin key]
api-key: [admin key]
} ```
-## Index errors
+## Missing content from Cosmos DB
-### Missing documents
+Azure Cognitive Search has an implicit dependency on Cosmos DB indexing. If you turn off automatic indexing in Cosmos DB, Azure Cognitive Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal).
-Indexers find documents from a [data source](/rest/api/searchservice/create-data-source). Sometimes a document from the data source that should have been indexed appears to be missing from an index. There are a couple of common reasons these errors may happen:
+## See also
-* The document hasn't been indexed. Check the portal for a successful indexer run.
-* Check your [change tracking](/rest/api/searchservice/create-data-source#data-change-detection-policies) value. If your high watermark value is a date set to a future time, then any documents that have a date less than this will be skipped by the indexer. You can understand your indexer's change tracking state using the 'initialTrackingState' and 'finalTrackingState' fields in the [indexer status](/rest/api/searchservice/get-indexer-status#indexer-execution-result).
-* The document was updated after the indexer run. If your indexer is on a [schedule](/rest/api/searchservice/create-indexer#indexer-schedule), it will eventually rerun and pick up the document.
-* The [query](/rest/api/searchservice/create-data-source) specified in the data source excludes the document. Indexers can't index documents that aren't part of the data source.
-* [Field mappings](/rest/api/searchservice/create-indexer#fieldmappings) or [AI enrichment](./cognitive-search-concept-intro.md) have changed the document and it looks different than you expect.
-* Use the [lookup document API](/rest/api/searchservice/lookup-document) to find your document.
+* [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md)
+* [Monitor indexer-based indexing](search-howto-monitor-indexers.md)
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-api-keys.md
Previously updated : 06/21/2021 Last updated : 06/25/2021 # Use API keys for Azure Cognitive Search authentication
-Key-based authentication uses access keys (or an *API key* as it's called in Cognitive Search) that are unique to your service to authenticate requests. Passing a valid API key on the request is considered proof that the request is from an authorized client. In Cognitive Search, key-based authentication is used for all inbound operations.
+Cognitive Search uses API keys as its primary authentication methodology. For inbound requests to the search service, such as requests that create or query an index, API keys are the only authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
-> [!NOTE]
-> Alternative [role-based authentication](search-security-rbac.md) is currently limited to two scenarios: portal access, and outbound indexer data read operations.
+API keys are generated when the service is created. Passing a valid API key on the request is considered proof that the request is from an authorized client. There are two kinds of keys. *Admin keys* convey write permissions on the service and also grant rights to query system information. *Query keys* convey read permissions and can be used by apps to query a specific index.
## Using API keys in search
-When connecting to a search service, all requests must include a read-only API key that was generated specifically for your service. The API key is the sole mechanism for authenticating inbound requests to your search service endpoint and is required on every request.
+When connecting to a search service, all requests must include an API key that was generated specifically for your service.
+ In [REST solutions](search-get-started-rest.md), the API key is typically specified in a request header
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
This article describes the security features in Azure Cognitive Search that protect data and operations.
-## Network access
+## Network traffic patterns
-A search service is hosted on Azure and typically accessed over public network connections. Understanding the service's access patterns can help you determine the appropriate controls for preventing unauthorized access.
+A search service is hosted on Azure and typically accessed over public network connections. Understanding the service's access patterns can help you design a security strategy that effectively deters unauthorized access to searchable content.
Cognitive Search has three basic network traffic patterns:
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Title: Azure role-based authorization
+ Title: Role-based authorization
description: Azure role-based access control (Azure RBAC) in the Azure portal for controlling and delegating administrative tasks for Azure Cognitive Search management.
Last updated 06/21/2021
# Use Azure role-based authentication in Azure Cognitive Search
-Azure provides a [global role-based authorization (RBAC) model](../role-based-access-control/role-assignments-portal.md) for all services managed through the portal or Resource Manager APIs. In Azure Cognitive Search, you can use RBAC in two scenarios:
+Azure provides a [global role-based authorization (RBAC) model](../role-based-access-control/role-assignments-portal.md) for all services managed through the portal or Resource Manager APIs. In Azure Cognitive Search, you can use role authorization in two scenarios:
-+ Portal admin operations. Role membership determines the level of *service administration* rights.
++ Service administration using any client that calls [Azure Resource Manager](../azure-resource-manager/management/overview.md) to create or modify an Azure service. Whether you're using Azure portal, the Management REST API, an Azure SDK, Azure PowerShell, or Azure CLI, your ability to perform service management tasks depends on your Azure role assignment.
-+ Outbound indexer access to external Azure data sources, applicable when you [configure a managed identity](search-howto-managed-identities-data-sources.md). For a search service that runs under a managed identity, you can assign roles on external data services, such as Azure Blob Storage, to allow read operations from the trusted search service.
++ Outbound indexer access to external Azure data sources also uses role authorization, applicable when you [configure a managed identity](search-howto-managed-identities-data-sources.md) under which the search service runs. For a search service that runs under a managed identity, you can assign roles on external data services, such as Azure Blob Storage, to allow read operations from the trusted search service.
-RBAC scenarios that are **not** supported include:
+A few RBAC scenarios are **not** supported, and these include:
+ [Custom roles](../role-based-access-control/custom-roles.md)
-+ Inbound requests to the search service (use [key-based authentication](search-security-api-keys.md) instead)
++ Inbound requests to the search service (use [API keys](search-security-api-keys.md) instead)
+ User-identity access over search results (sometimes referred to as row-level security or document-level security)
> [!Tip]
RBAC scenarios that are **not** supported include:
## Azure roles used in Search
-Azure roles include Owner, Contributor, and Reader roles, where role membership consists of Azure Active Directory users and groups. In Azure Cognitive Search, roles are associated with permission levels that support the following management tasks:
+Azure roles include Owner, Contributor, and Reader roles, whose assigned members consist of Azure AD users and groups. In Azure Cognitive Search, roles are associated with permission levels that support the following management tasks:
| Role | Task | | | |
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-sku-tier.md
Previously updated : 06/16/2021 Last updated : 06/26/2021 # Choose a pricing tier for Azure Cognitive Search
-Part of [creating a search service](search-create-service-portal.md) means choosing a pricing tier (or SKU) that's fixed for the lifetime of the service. Prices - or the estimated monthly cost of running the service - are shown in the portal's **Select Pricing Tier** page when you create the service. If you're provisioning through PowerShell or Azure CLI instead, the tier is specified through the **`-Sku`** parameter, and you should check [service pricing](https://azure.microsoft.com/pricing/details/search/) to learn about estimated costs.
+Part of [creating a search service](search-create-service-portal.md) is choosing a pricing tier (or SKU) that's fixed for the lifetime of the service. In the portal, the tier is specified on the **Select Pricing Tier** page when you create the service. If you're provisioning through PowerShell or Azure CLI instead, the tier is specified through the **`-Sku`** parameter.
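For example, a hedged Azure CLI sketch (resource names are placeholders; the PowerShell path uses the **`-Sku`** parameter mentioned above):

```
# Placeholder names; the tier can't be changed after the service is created.
az search service create \
  --name my-search-service \
  --resource-group my-resource-group \
  --sku basic
```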
The tier you select determines:
The tier you select determines:
In a few instances, the tier you choose determines the availability of [premium features](#premium-features).
+Pricing, or the estimated monthly cost of running the service, is shown in the portal's **Select Pricing Tier** page. You should check [service pricing](https://azure.microsoft.com/pricing/details/search/) to learn about estimated costs.
+ > [!NOTE] > Looking for information about "Azure SKUs"? Start with [Azure pricing](https://azure.microsoft.com/pricing/) and then scroll down for links to per-service pricing pages.
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/recover-from-identity-compromise.md
Azure Sentinel has many built-in resources to help in your investigation, such a
For more information, see: -- [Visualize and analyze your environment](/azure/sentinel/quickstart-get-visibility.md)-- [Detect threats out of the box](/azure/sentinel/tutorial-detect-threats-built-in.md).
+- [Visualize and analyze your environment](/azure/sentinel/quickstart-get-visibility)
+- [Detect threats out of the box](/azure/sentinel/tutorial-detect-threats-built-in).
### Monitoring with Microsoft 365 Defender
Check for other examples of detections, hunting queries, and threat analytics re
For more information, see: - [Track and respond to emerging threats with threat analytics](/windows/security/threat-protection/microsoft-defender-atp/threat-analytics)-- [Understand the analyst report in threat analytics](/windows/security/threat-protection/microsoft-defender-atp/threat-analytics-analyst-reports)
+- [Understand the analyst report in threat analytics](/microsoft-365/security/defender/threat-analytics-analyst-reports)
### Monitoring with Azure Active Directory
sentinel Collaborate In Microsoft Teams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/collaborate-in-microsoft-teams.md
ms.devlang: na
na Previously updated : 05/03/2021 Last updated : 06/17/2021
Organizations that already use Microsoft Teams for communication and collaborati
An Azure Sentinel incident team always has the most updated and recent data from Azure Sentinel, ensuring that your teams have the most relevant data right at hand.
+## Required permissions
+
+In order to create teams from Azure Sentinel:
+
+- The user creating the team must have Incident write permissions in Azure Sentinel. For example, the [Azure Sentinel Responder](../role-based-access-control/built-in-roles.md#azure-sentinel-responder) role is an ideal, minimum role for this privilege.
+
+- The user creating the team must also have permissions to create teams in Microsoft Teams.
+
+- Any Azure Sentinel user, including users with the [Reader](../role-based-access-control/built-in-roles.md#azure-sentinel-reader), [Responder](../role-based-access-control/built-in-roles.md#azure-sentinel-responder), or [Contributor](../role-based-access-control/built-in-roles.md#azure-sentinel-contributor) roles, can gain access to the created team by requesting access.
+
## Use an incident team to investigate

Investigate together with an *incident team* by integrating Microsoft Teams directly from your incident.
Investigate together with an *incident team* by integrating Microsoft Teams dire
- **Team name**: Automatically defined as the name of your incident. Modify the name as needed so that it's easily identifiable to you. - **Description**: Enter a meaningful description for your incident team.
- - **Add groups**: Select one or more Azure AD groups to add to your incident team. Individual users aren't supported.
+ - **Add groups**: Select one or more Azure AD groups to add to your incident team. Individual users aren't supported in this page. If you need to add individual users, [do so in Microsoft Teams](#more-users) after you've created the team.
> [!TIP] > If you regularly work with the same teams, you may want to select the star :::image type="icon" source="media/collaborate-in-microsoft-teams/save-as-favorite.png" border="false"::: to save them as favorites.
Investigate together with an *incident team* by integrating Microsoft Teams dire
Continue the conversation about the investigation in Teams for as long as needed. You have the full incident details directly in teams. > [!TIP]
-> When you [close an incident](tutorial-investigate-cases.md#closing-an-incident), the related incident team you've created in Microsoft Teams is archived.
+> - <a name="more-users"></a>If you need to add individual users to your team, you can do so in Microsoft Teams using the **Add more people** button on the **Posts** tab.
>
-> If the incident is ever re-opened, the related incident team is also re-opened in Microsoft Teams so that you can continue your conversation, right where you left off.
+> - When you [close an incident](tutorial-investigate-cases.md#closing-an-incident), the related incident team you've created in Microsoft Teams is archived. If the incident is ever re-opened, the related incident team is also re-opened in Microsoft Teams so that you can continue your conversation, right where you left off.
> ## Next steps
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-security-content.md
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - FTP for non authorized servers** |Identifies an FTP connection for a non-authorized server. | Create a new FTP connection, such as by using the FTP_CONNECT Function Module. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Initial Access, Command and Control |
|**SAP - Medium - Sensitive Tables Direct Access By RFC Logon** |Identifies a generic table access by RFC sign in. <br><br> Maintain tables in the [SAP - Sensitive Tables](#tables) watchlist.<br><br> **Note**: Relevant for production systems only. | Open the table contents using SE11/SE16/SE16N.<br><br>**Data sources**: SAPcon - Audit Log | Collection, Exfiltration, Credential Access |
|**SAP - Medium - Sensitive Roles Changes** |Identifies changes in sensitive roles. <br><br> Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon - Audit Log | Impact, Privilege Escalation, Persistence |
-|**SAP - Medium - Sensitive Profile Changes** |Identifies changes in sensitive profiles. <br><br> Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Change a role using PFCG. <br><br>**Data sources**: SAPcon - Change Documents Log, SAPcon ΓÇô Audit Log | Privilege, Escalation, Persistence, Command and Control |
|**SAP - Medium - Execution of Obsolete/Insecure Program** |Identifies the execution of an obsolete or insecure ABAP program. <br><br> Maintain obsolete programs in the [SAP - Obsolete Programs](#programs) watchlist.<br><br> **Note**: Relevant for production systems only. | Run a program directly using SE38/SA38/SE80, or by using a background job. <br><br>**Data sources**: SAPcon - Audit Log | Discovery, Command and Control |
|**SAP - Medium - Execution of Obsolete or Insecure Function Module** |Identifies the execution of an obsolete or insecure ABAP function module. <br><br>Maintain obsolete functions in the [SAP - Obsolete Function Modules](#modules) watchlist. Make sure to activate table logging changes for the `EUFUNC` table in the backend. (SE13)<br><br> **Note**: Relevant for production systems only. | Run an obsolete or insecure function module directly using SE37. <br><br>**Data sources**: SAPcon - Table Data Log | Discovery, Command and Control |
| | | | |
service-fabric Service Fabric Tutorial Java Jenkins https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-tutorial-java-jenkins.md
In this tutorial series you learn how to:
You can set up Jenkins either inside or outside a Service Fabric cluster. The following instructions show how to set it up outside a cluster using a provided Docker image. However, a preconfigured Jenkins build environment can also be used. The following container image comes installed with the Service Fabric plugin and is ready for use with Service Fabric immediately.
-1. Pull the Service Fabric Jenkins container image: `docker pull docker pull rapatchi/jenkins:v10`. This image comes with Service Fabric Jenkins plugin pre-installed.
+
+1. Pull the Service Fabric Jenkins container image: `docker pull rapatchi/jenkins:v10`. This image comes with the Service Fabric Jenkins plugin pre-installed.
1. Run the container image with the location where your Azure certificates are stored on your mounted local machine.
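   For instance, a hedged sketch of the run command (the host certificate path and the container mount point are assumptions, not values taken from this article):

   ```
   # Map the Jenkins UI port and mount the folder that holds your Azure certificates.
   docker run -itd -p 8080:8080 \
     -v /path/to/azure-certificates:/tmp/myCerts \
     rapatchi/jenkins:v10
   ```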
virtual-machines Hana Setup Smt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-setup-smt.md
Title: How to set up SMT server for SAP HANA on Azure (Large Instances) | Microsoft Docs
-description: How to set up SMT server for SAP HANA on Azure (Large Instances).
+description: Learn how to set up SMT server for SAP HANA on Azure (Large Instances).
documentationcenter: -+ editor:
vm-linux Previously updated : 09/10/2018- Last updated : 06/25/2021+ # Set up SMT server for SUSE Linux
-Large Instances of SAP HANA don't have direct connectivity to the internet. It's not a straightforward process to register such a unit with the operating system provider, and to download and apply updates. A solution for SUSE Linux is to set up an SMT server in an Azure virtual machine. Host the virtual machine in an Azure virtual network, which is connected to the HANA Large Instance. With such an SMT server, the HANA Large Instance unit could register and download updates.
-For more documentation on SUSE, see their [Subscription Management Tool for SLES 12 SP2](https://www.suse.com/documentation/sles-12/pdfdoc/book_smt/book_smt.pdf).
+In this article, we'll walk through the steps of setting up an SMT server for SAP HANA on Azure Large Instances, otherwise known as BareMetal Infrastructure.
-Prerequisites for installing an SMT server that fulfills the task for HANA Large Instances are:
+Large Instances of SAP HANA don't have direct connectivity to the internet. It isn't straightforward to register such a unit with the operating system provider and to download and apply updates. A solution for SUSE Linux is to set up an SMT server in an Azure virtual machine (VM). You'll host the virtual machine in an Azure virtual network connected to the HANA Large Instance (HLI). With such an SMT server, the HANA Large Instance could register and download updates.
-- An Azure virtual network that is connected to the HANA Large Instance ExpressRoute circuit.-- A SUSE account that is associated with an organization. The organization should have a valid SUSE subscription.
+For more information on SUSE, see their [Subscription Management Tool for SLES 12 SP2](https://www.suse.com/documentation/sles-12/pdfdoc/book_smt/book_smt.pdf).
-## Install SMT server on an Azure virtual machine
+## Prerequisites
-First, sign in to the [SUSE Customer Center](https://scc.suse.com/).
+To install an SMT server for HANA Large Instances, you'll first need:
-Go to **Organization** > **Organization Credentials**. In that section, you should find the credentials that are necessary to set up the SMT server.
+- An Azure virtual network connected to the HANA Large Instance ExpressRoute circuit.
+- A SUSE account associated with an organization. The organization should have a valid SUSE subscription.
-Then, install a SUSE Linux VM in the Azure virtual network. To deploy the virtual machine, take a SLES 12 SP2 gallery image of Azure (select BYOS SUSE image). In the deployment process, don't define a DNS name, and don't use static IP addresses.
+## Install SMT server on an Azure virtual machine
-![Screenshot of virtual machine deployment for SMT server](./media/hana-installation/image3_vm_deployment.png)
+1. Sign in to the [SUSE Customer Center](https://scc.suse.com/). Go to **Organization** > **Organization Credentials**. In that section, you should find the credentials that are necessary to set up the SMT server.
-The deployed virtual machine is smaller, and got the internal IP address in the Azure virtual network of 10.34.1.4. The name of the virtual machine is *smtserver*. After the installation, the connectivity to the HANA Large Instance unit or units is checked. Depending on how you organized name resolution, you might need to configure resolution of the HANA Large Instance units in etc/hosts of the Azure virtual machine.
+2. Install a SUSE Linux VM in the Azure virtual network. To deploy the virtual machine, take a SLES 12 SP2 gallery image of Azure (select BYOS SUSE image). In the deployment process, don't define a DNS name, and don't use static IP addresses.
-Add a disk to the virtual machine. You use this disk to hold the updates, and the boot disk itself could be too small. Here, the disk got mounted to /srv/www/htdocs, as shown in the following screenshot. A 100-GB disk should suffice.
+ ![Screenshot of virtual machine deployment for SMT server.](./media/hana-installation/image3_vm_deployment.png)
-![Screenshot shows the added disk in the PuTTy window.](./media/hana-installation/image4_additional_disk_on_smtserver.PNG)
+ The deployed virtual machine has the internal IP address 10.34.1.4 in the Azure virtual network. The name of the virtual machine is *smtserver*. After the installation, check connectivity to the HANA Large Instances. Depending on how you organized name resolution, you might need to configure resolution of the HANA Large Instances in /etc/hosts of the Azure virtual machine.
-Sign in to the HANA Large Instance unit or units, maintain /etc/hosts, and check whether you can reach the Azure virtual machine that is supposed to run the SMT server over the network.
+3. Add a disk to the virtual machine. You'll use this disk to hold the updates; the boot disk itself could be too small. Here, the disk is mounted to /srv/www/htdocs, as shown in the following screenshot. A 100-GB disk should suffice.
-After this check, sign in to the Azure virtual machine that should run the SMT server. If you are using putty to sign in to the virtual machine, run this sequence of commands in your bash window:
+ ![Screenshot shows the added disk in the PuTTy window.](./media/hana-installation/image4_additional_disk_on_smtserver.PNG)
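   As a sketch, preparing and mounting the data disk might look like the following. The device name is an assumption and depends on how the disk was attached.

   ```
   # Assumes the new data disk shows up as /dev/sdc; verify with lsblk first.
   sudo mkfs.xfs /dev/sdc
   sudo mkdir -p /srv/www/htdocs
   sudo mount /dev/sdc /srv/www/htdocs
   # Optionally persist the mount across reboots.
   echo '/dev/sdc /srv/www/htdocs xfs defaults 0 2' | sudo tee -a /etc/fstab
   ```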
-```
-cd ~
-echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc
-```
+4. Sign in to the HANA Large Instances and maintain /etc/hosts, as sketched below. Check whether you can reach the Azure virtual machine that will run the SMT server over the network.
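   A minimal sketch of the name resolution entry and connectivity check, using the VM name and IP address from the deployment above:

   ```
   # Append the SMT server VM to /etc/hosts on the HANA Large Instance.
   echo '10.34.1.4   smtserver' | sudo tee -a /etc/hosts

   # Verify that the SMT server VM is reachable over the network.
   ping -c 3 smtserver
   ```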
-Restart your bash to activate the settings. Then start YAST.
+5. Sign in to the Azure virtual machine that will run the SMT server. If you're using putty to sign in to the virtual machine, run this sequence of commands in your bash window:
-Connect your VM (smtserver) to the SUSE site.
+ ```
+ cd ~
+ echo "export NCURSES_NO_UTF8_ACS=1" >> .bashrc
+ ```
-```
-smtserver:~ # SUSEConnect -r <registration code> -e s<email address> --url https://scc.suse.com
-Registered SLES_SAP 12.2 x86_64
-To server: https://scc.suse.com
-Using E-Mail: email address
-Successfully registered system.
-```
+6. Restart your bash to activate the settings. Then start YAST.
-After the virtual machine is connected to the SUSE site, install the smt packages. Use the following putty command to install the smt packages.
+7. Connect your VM (smtserver) to the SUSE site.
-```
-smtserver:~ # zypper in smt
-Refreshing service 'SUSE_Linux_Enterprise_Server_for_SAP_Applications_12_SP2_x86_64'.
-Loading repository data...
-Reading installed packages...
-Resolving package dependencies...
-```
+ ```
+ smtserver:~ # SUSEConnect -r <registration code> -e <email address> --url https://scc.suse.com
+ Registered SLES_SAP 12.2 x86_64
+ To server: https://scc.suse.com
+ Using E-Mail: email address
+ Successfully registered system.
+ ```
+
+8. After the virtual machine is connected to the SUSE site, install the smt packages. Use the following putty command to install the smt packages.
+ ```
+ smtserver:~ # zypper in smt
+ Refreshing service 'SUSE_Linux_Enterprise_Server_for_SAP_Applications_12_SP2_x86_64'.
+ Loading repository data...
+ Reading installed packages...
+ Resolving package dependencies...
+ ```
+
+ You can also use the YAST tool to install the smt packages. In YAST, go to **Software Maintenance**, and search for smt. Select **smt**, which switches automatically to yast2-smt.
-You can also use the YAST tool to install the smt packages. In YAST, go to **Software Maintenance**, and search for smt. Select **smt**, which switches automatically to yast2-smt.
+ ![Screenshot of SMT in YAST.](./media/hana-installation/image5_smt_in_yast.PNG)
-![Screenshot of SMT in YAST](./media/hana-installation/image5_smt_in_yast.PNG)
+ Accept the selection for installation on the smtserver.
-Accept the selection for installation on the smtserver. After the installation completes, go to the SMT server configuration. Enter the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter your Azure virtual machine hostname as the SMT Server URL. In this demonstration, it's https:\//smtserver.
+9. After the installation completes, go to the SMT server configuration. Enter the organizational credentials from the SUSE Customer Center you retrieved earlier. Also enter your Azure virtual machine hostname as the SMT Server URL. In this example, it's https:\//smtserver.
-![Screenshot of SMT server configuration](./media/hana-installation/image6_configuration_of_smtserver1.png)
+ ![Screenshot of SMT server configuration.](./media/hana-installation/image6_configuration_of_smtserver1.png)
-Now test whether the connection to the SUSE Customer Center works. As you see in the following screenshot, in this demonstration case, it did work.
+10. Now test whether the connection to the SUSE Customer Center works. As you see in the following screenshot, in this example, it did work.
-![Screenshot of testing connection to SUSE Customer Center](./media/hana-installation/image7_test_connect.png)
+ ![Screenshot of testing connection to SUSE Customer Center.](./media/hana-installation/image7_test_connect.png)
-After the SMT setup starts, provide a database password. Because it's a new installation, you should define that password as shown in the following screenshot.
+11. After the SMT setup starts, provide a database password. Because it's a new installation, you should define that password as shown in the following screenshot.
-![Screenshot of defining password for database](./media/hana-installation/image8_define_db_passwd.PNG)
+ ![Screenshot of defining password for database.](./media/hana-installation/image8_define_db_passwd.PNG)
-The next step is to create a certificate.
+12. Create a certificate.
-![Screenshot of creating a certificate for SMT server](./media/hana-installation/image9_certificate_creation.PNG)
+ ![Screenshot of creating a certificate for SMT server.](./media/hana-installation/image9_certificate_creation.PNG)
-At the end of the configuration, it might take a few minutes to run the synchronization check. After the installation and configuration of the SMT server, you should find the directory repo under the mount point /srv/www/htdocs/. There are also some subdirectories under repo.
+ At the end of the configuration, it might take a few minutes to run the synchronization check. After the installation and configuration of the SMT server, you should find the directory repo under the mount point /srv/www/htdocs/. There are also some subdirectories under repo.
-Restart the SMT server and its related services with these commands.
+13. Restart the SMT server and its related services with these commands.
-```
-rcsmt restart
-systemctl restart smt.service
-systemctl restart apache2
-```
+ ```
+ rcsmt restart
+ systemctl restart smt.service
+ systemctl restart apache2
+ ```
+
+## Download packages onto the SMT server
-## Download packages onto SMT server
+1. After all the services are restarted, select the appropriate packages in SMT Management by using YAST. The package selection depends on the operating system image of the HANA Large Instance server. The package selection doesn't depend on the SLES release or version of the virtual machine running the SMT server. The following screenshot shows an example of the selection screen.
-After all the services are restarted, select the appropriate packages in SMT Management by using YAST. The package selection depends on the operating system image of the HANA Large Instance server. The package selection doesn't depend on the SLES release or version of the virtual machine running the SMT server. The following screenshot shows an example of the selection screen.
+ ![Screenshot of selecting packages.](./media/hana-installation/image10_select_packages.PNG)
-![Screenshot of selecting packages](./media/hana-installation/image10_select_packages.PNG)
+2. Start the initial copy of the selected packages to the SMT server you set up. This copy is triggered in the shell by using the command `smt-mirror`.
-Next, start the initial copy of the select packages to the SMT server you set up. This copy is triggered in the shell by using the command smt-mirror.
+ ![Screenshot of downloading packages to SMT server.](./media/hana-installation/image11_download_packages.PNG)
-![Screenshot of downloading packages to SMT server](./media/hana-installation/image11_download_packages.PNG)
+ The packages should get copied into the directories created under the mount point /srv/www/htdocs. This process can take an hour or more, depending on how many packages you select. As this process finishes, move to the SMT client setup.
-The packages should get copied into the directories created under the mount point /srv/www/htdocs. This process can take an hour or more, depending on how many packages you select. As this process finishes, move to the SMT client setup.
+## Set up the SMT client on HANA Large Instances
-## Set up the SMT client on HANA Large Instance units
+The client or clients in this case are the HANA Large Instances. The SMT server setup copied the script clientSetup4SMT.sh into the Azure virtual machine.
-The client or clients in this case are the HANA Large Instance units. The SMT server setup copied the script clientSetup4SMT.sh into the Azure virtual machine. Copy that script over to the HANA Large Instance unit you want to connect to your SMT server. Start the script with the -h option, and give the name of your SMT server as a parameter. In this example, the name is *smtserver*.
+Copy that script over to the HANA Large Instance you want to connect to your SMT server. Start the script with the -h option, and give the name of your SMT server as a parameter. In this example, the name is *smtserver*.
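A sketch of that invocation, using the server name from this example:

```
# Run on the HANA Large Instance after copying the script from the SMT server VM.
chmod +x clientSetup4SMT.sh
./clientSetup4SMT.sh -h smtserver
```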
-![Screenshot of configuring the SMT client](./media/hana-installation/image12_configure_client.PNG)
+![Screenshot of configuring the SMT client.](./media/hana-installation/image12_configure_client.PNG)
-It's possible that the load of the certificate from the server by the client succeeds, but the registration fails, as shown in the following screenshot.
+It's possible that the client loads the certificate from the server successfully but the registration still fails. In this example, the registration fails, as shown in the following screenshot.
-![Screenshot of client registration failure](./media/hana-installation/image13_registration_failed.PNG)
+![Screenshot of client registration failure.](./media/hana-installation/image13_registration_failed.PNG)
If the registration fails, see [SUSE support document](https://www.suse.com/de-de/support/kb/doc/?id=7006024), and run the steps described there. > [!IMPORTANT] > For the server name, provide the name of the virtual machine (in this case, *smtserver*), without the fully qualified domain name. -
-After running these steps, run the following command on the HANA Large Instance unit:
-
+
+After running these steps, run the following command on the HANA Large Instance:
+
``` SUSEConnect --cleanup ```
SUSEConnect --cleanup
> [!Note] > Wait a few minutes after that step. If you run clientSetup4SMT.sh immediately, you might get an error.
-If you encounter a problem that you need to fix based on the steps of the SUSE article, restart clientSetup4SMT.sh on the HANA Large Instance unit. Now it should finish successfully.
+If you find a problem you need to fix based on the steps of the SUSE article, restart clientSetup4SMT.sh on the HANA Large Instance. Now it should finish successfully.
-![Screenshot of client registration success](./media/hana-installation/image14_finish_client_config.PNG)
+![Screenshot of client registration success.](./media/hana-installation/image14_finish_client_config.PNG)
-You configured the SMT client of the HANA Large Instance unit to connect to the SMT server you installed in the Azure virtual machine. You now can take 'zypper up' or 'zypper in' to install operating system updates to HANA Large Instances, or install additional packages. You can only get updates that you downloaded before on the SMT server.
+You configured the SMT client of the HLI to connect to the SMT server installed on the Azure VM. Now you can use `zypper up` or `zypper in` to install OS updates to HANA Large Instances, or to install other packages. You can only get updates that you previously downloaded on the SMT server.
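For example, once registration succeeds, typical update commands on the HANA Large Instance might look like this sketch (the package name is a placeholder):

```
# Refresh repository metadata from the SMT server, then apply available updates.
sudo zypper refresh
sudo zypper up

# Install an additional package that was mirrored on the SMT server.
sudo zypper in <package-name>
```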
## Next steps
-- [HANA Installation on HLI](hana-example-installation.md)
+Learn about migrating SAP HANA on Azure Large Instance to Azure Virtual Machines.
+> [!div class="nextstepaction"]
+> [SAP HANA on Azure Large Instance migration to Azure Virtual Machines](hana-large-instance-virtual-machine-migration.md)