Updates from: 11/15/2022 02:09:44
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Partner Driven Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/partner-driven-integrations.md
The Azure Active Directory Provisioning service allows you to provision users and groups into both [SaaS](user-provisioning.md) and [on-premises](on-premises-scim-provisioning.md) applications. There are four integration paths: **Option 1 - Azure AD Application Gallery:**
-Popular third party applications, such as Dropbox, Snowflake, and Workplace by Facebook, are made available for customers through the Azure AD application gallery. New applications can easily be onboarded to the gallery using the [application network portal](../azuread-dev/howto-app-gallery-listing.md).
+Popular third party applications, such as Dropbox, Snowflake, and Workplace by Facebook, are made available for customers through the Azure AD application gallery. New applications can easily be onboarded to the gallery using the [application network portal](../manage-apps/v2-howto-app-gallery-listing.md).
**Option 2 - Implement a SCIM compliant API for your application:** If your line-of-business application supports the [SCIM](https://aka.ms/scimoverview) standard, it can easily be integrated with the [Azure AD SCIM client](use-scim-to-provision-users-and-groups.md).
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
You've now registered the AppProxyNativeAppSample app in Azure Active Directory.
The last step is to configure the native app. The code snippet that's used in the following steps is based on [Add the Microsoft Authentication Library to your code (.NET C# sample)](application-proxy-configure-native-client-application.md#step-4-add-the-microsoft-authentication-library-to-your-code-net-c-sample). The code is customized for this example. The code must be added to the *Form1.cs* file in the NativeClient sample app where it will cause the [MSAL library](../develop/reference-v2-libraries.md) to acquire the token for requesting the API call and attach it as bearer to the header in the request. > [!NOTE]
-> The sample app uses [Azure Active Directory Authentication Library (ADAL)](../azuread-dev/active-directory-authentication-libraries.md). Read how to [add MSAL to your project](../develop/tutorial-v2-windows-desktop.md#add-msal-to-your-project). Remember to [add the reference to MSAL](../develop/tutorial-v2-windows-desktop.md#add-the-code-to-initialize-msal) to the class and remove the ADAL reference.
+> The sample app uses Azure Active Directory Authentication Library (ADAL). Read how to [add MSAL to your project](../develop/tutorial-v2-windows-desktop.md#add-msal-to-your-project). Remember to [add the reference to MSAL](../develop/tutorial-v2-windows-desktop.md#add-the-code-to-initialize-msal) to the class and remove the ADAL reference.
To configure the native app code:
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
Previously updated : 11/11/2022 Last updated : 11/14/2022
The settings option allows you to change the settings for the migration process:
:::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/settings.png" alt-text="Screenshot of settings."::: - Migrate – This setting allows you to specify which method(s) should be migrated for the selection of users-- User Match – Allows you to specify a different attribute for matching users instead of the default UPN-matching
+- User Match – Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName
- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval defined. The migration process can be an automatic process, or a manual process.
active-directory Overview Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/overview-authentication.md
By default, Azure AD blocks weak passwords such as *Password1*. A global banned
To increase security, you can define custom password protection policies. These policies can use filters to block any variation of a password containing a name such as *Contoso* or a location like *London*, for example.
-For hybrid security, you can integrate Azure AD password protection with an on-premises Active Directory environment. A component installed in the on-prem environment receives the global banned password list and custom password protection policies from Azure AD, and domain controllers use them to process password change events. This hybrid approach makes sure that no matter how or where a user changes their credentials, you enforce the use of strong passwords.
+For hybrid security, you can integrate Azure AD password protection with an on-premises Active Directory environment. A component installed in the on-premises environment receives the global banned password list and custom password protection policies from Azure AD, and domain controllers use them to process password change events. This hybrid approach makes sure that no matter how or where a user changes their credentials, you enforce the use of strong passwords.
## Passwordless authentication
The end-goal for many environments is to remove the use of passwords as part of
When you sign in with a passwordless method, credentials are provided by using methods like biometrics with Windows Hello for Business, or a FIDO2 security key. These authentication methods can't be easily duplicated by an attacker.
-Azure AD provides ways to natively authenticate using passwordless methods to simplify the sign-in experience for users and reduce the risk of attacks.
+Azure AD provides ways to natively authenticate using passwordless methods to simplify the sign-in experience for users and reduce the risk of attacks.
+
+## Web browser cookies
+
+When authenticating against Azure Active Directory through a web browser, multiple cookies are involved in the process. Some cookies are common to all requests, while others apply only to particular scenarios, such as specific authentication flows or specific client-side conditions.
+
+Persistent session tokens are stored as persistent cookies in the web browser's cookie jar. Non-persistent session tokens are stored as session cookies in the web browser and are destroyed when the browser session is closed.
+
+| Cookie Name | Type | Comments |
+|--|--|--|
+| ESTSAUTH | Common | Contains user's session information to facilitate SSO. Transient. |
+| ESTSAUTHPERSISTENT | Common | Contains user's session information to facilitate SSO. Persistent. |
+| ESTSAUTHLIGHT | Common | Contains Session GUID Information. Lite session state cookie used exclusively by client-side JavaScript in order to facilitate OIDC sign-out. Security feature. |
+| SignInStateCookie | Common | Contains list of services accessed to facilitate sign-out. No user information. Security feature. |
+| CCState | Common | Contains session information state to be used between Azure AD and the [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). |
+| buid | Common | Tracks browser related information. Used for service telemetry and protection mechanisms. |
+| fpc | Common | Tracks browser related information. Used for tracking requests and throttling. |
+| esctx | Common | Session context cookie information. For CSRF protection. Binds a request to a specific browser instance so the request can't be replayed outside the browser. No user information. |
+| ch | Common | ProofOfPossessionCookie. Stores the Proof of Possession cookie hash to the user agent. |
+| ESTSSC | Common | Legacy cookie containing session count information; no longer used. |
+| ESTSSSOTILES | Common | Tracks session sign-out. When present and not expired, with the value `ESTSSSOTILES=1`, it interrupts SSO for a specific SSO authentication model and presents tiles for user account selection. |
+| AADSSOTILES | Common | Tracks session sign-out. Similar to ESTSSSOTILES, but for another specific SSO authentication model. |
+| ESTSUSERLIST | Common | Tracks the browser SSO user list. |
+| SSOCOOKIEPULLED | Common | Prevents looping on specific scenarios. No user information. |
+| cltm | Common | For telemetry purposes. Tracks AppVersion, ClientFlight and Network type. |
+| brcap | Common | Client-side cookie (set by JavaScript) to validate client/web browser's touch capabilities. |
+| clrc | Common | Client-side cookie (set by JavaScript) to control local cached sessions on the client. |
+| CkTst | Common | Client-side cookie (set by JavaScript). No longer in active use. |
+| wlidperf | Common | Client-side cookie (set by JavaScript) that tracks local time for performance purposes. |
+| x-ms-gateway-slice | Common | Azure AD Gateway cookie used for tracking and load balance purposes. |
+| stsservicecookie | Common | Azure AD Gateway cookie also used for tracking purposes. |
+| x-ms-refreshtokencredential | Specific | Available when [Primary Refresh Token (PRT)](/azure/active-directory/devices/concept-primary-refresh-token) is in use. |
+| estsStateTransient | Specific | Applicable to new session information model only. Transient. |
+| estsStatePersistent | Specific | Same as estsStateTransient, but persistent. |
+| ESTSNCLOGIN | Specific | National Cloud Login related Cookie. |
+| UsGovTraffic | Specific | US Gov Cloud Traffic Cookie. |
+| ESTSWCTXFLOWTOKEN | Specific | Saves flowToken information when redirecting to ADFS. |
+| CcsNtv | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Native flows. |
+| CcsWeb | Specific | To control when Azure AD Gateway will send requests to [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults). Web flows. |
+| Ccs* | Specific | Cookies with the Ccs prefix have the same purpose as their counterparts without the prefix, but apply only when the [Azure AD Backup Authentication Service](/azure/active-directory/conditional-access/resilience-defaults) is in use. |
+| threxp | Specific | Used for throttling control. |
+| rrc | Specific | Cookie used to identify a recent B2B invitation redemption. |
+| debug | Specific | Cookie used to track if user's browser session is enabled for DebugMode. |
+| MSFPC | Specific | This cookie is not specific to any ESTS flow, but is sometimes present. It applies to all Microsoft Sites (when accepted by users). Identifies unique web browsers visiting Microsoft sites. It's used for advertising, site analytics, and other operational purposes. |
+
+> [!NOTE]
+> Cookies identified as client-side cookies are set locally on the client device by JavaScript and are therefore marked with HttpOnly=false.
+>
+> Cookie definitions and respective names are subject to change at any moment in time according to Azure AD service requirements.
## Next steps
active-directory Workload Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md
Sample JSON for location-based configuration using the Microsoft Graph beta endp
], "excludeServicePrincipals": [ "[Service principal Object ID]"
- ],
+ ]
}, "locations": { "includeLocations": [
Sample JSON for location-based configuration using the Microsoft Graph beta endp
- [Using the location condition in a Conditional Access policy](location-condition.md) - [Conditional Access: Programmatic access](howto-conditional-access-apis.md) - [What is Conditional Access report-only mode?](concept-conditional-access-report-only.md)
-
+
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-register-app.md
Last updated 10/31/2022 + #Customer intent: As developer, I want to know how to register my application with the Microsoft identity platform so that the security token service can issue ID and/or access tokens to client applications that request them. # Quickstart: Register an application with the Microsoft identity platform
-Get started with the Microsoft identity platform by registering an application in the Azure portal.
-
-The Microsoft identity platform performs identity and access management (IAM) only for registered applications. Whether it's a client application like a web or mobile app, or it's a web API that backs a client app, registering it establishes a trust relationship between your application and the identity provider, the Microsoft identity platform.
-
-> [!TIP]
-> To register an application for Azure AD B2C, follow the steps in [Tutorial: Register a web application in Azure AD B2C](../../active-directory-b2c/tutorial-register-applications.md).
-
-## Prerequisites
--- An Azure account that has an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- The Azure account must have permission to manage applications in Azure Active Directory (Azure AD). Any of the following Azure AD roles include the required permissions:
- - [Application administrator](../roles/permissions-reference.md#application-administrator)
- - [Application developer](../roles/permissions-reference.md#application-developer)
- - [Cloud application administrator](../roles/permissions-reference.md#cloud-application-administrator)
-- Completion of the [Set up a tenant](quickstart-create-new-tenant.md) quickstart.-
-## Register an application
-
-Registering your application establishes a trust relationship between your app and the Microsoft identity platform. The trust is unidirectional: your app trusts the Microsoft identity platform, and not the other way around.
-
-Follow these steps to create the app registration:
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a display **Name** for your application. Users of your application might see the display name when they use the app, for example during sign-in.
- You can change the display name at any time and multiple app registrations can share the same name. The app registration's automatically generated Application (client) ID, not its display name, uniquely identifies your app within the identity platform.
-1. Specify who can use the application, sometimes called its _sign-in audience_.
-
- | Supported account types | Description |
- | - | - |
- | **Accounts in this organizational directory only** | Select this option if you're building an application for use only by users (or guests) in _your_ tenant.<br><br>Often called a _line-of-business_ (LOB) application, this app is a _single-tenant_ application in the Microsoft identity platform. |
- | **Accounts in any organizational directory** | Select this option if you want users in _any_ Azure Active Directory (Azure AD) tenant to be able to use your application. This option is appropriate if, for example, you're building a software-as-a-service (SaaS) application that you intend to provide to multiple organizations.<br><br>This type of app is known as a _multitenant_ application in the Microsoft identity platform. |
- | **Accounts in any organizational directory and personal Microsoft accounts** | Select this option to target the widest set of customers.<br><br>By selecting this option, you're registering a _multitenant_ application that can also support users who have personal _Microsoft accounts_. |
- | **Personal Microsoft accounts** | Select this option if you're building an application only for users who have personal Microsoft accounts. Personal Microsoft accounts include Skype, Xbox, Live, and Hotmail accounts. |
-
-1. Don't enter anything for **Redirect URI (optional)**. You'll configure a redirect URI in the next section.
-1. Select **Register** to complete the initial app registration.
-
- :::image type="content" source="media/quickstart-register-app/portal-02-app-reg-01.png" alt-text="Screenshot of the Azure portal in a web browser, showing the Register an application pane.":::
-
-When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the _client ID_, this value uniquely identifies your application in the Microsoft identity platform.
-
-> [!IMPORTANT]
-> New app registrations are hidden to users by default. When you are ready for users to see the app on their [My Apps page](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) you can enable it. To enable the app, in the Azure portal navigate to **Azure Active Directory** > **Enterprise applications** and select the app. Then on the **Properties** page toggle **Visible to users?** to Yes.
-
-Your application's code, or more typically an authentication library used in your application, also uses the client ID. The ID is used as part of validating the security tokens it receives from the identity platform.
--
-## Add a redirect URI
-
-A _redirect URI_ is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.
-
-In a production web application, for example, the redirect URI is often a public endpoint where your app is running, like `https://contoso.com/auth-response`. During development, it's common to also add the endpoint where you run your app locally, like `https://127.0.0.1/auth-response` or `http://localhost/auth-response`.
-
-You add and modify redirect URIs for your registered applications by configuring their [platform settings](#configure-platform-settings).
-
-### Configure platform settings
-
-Settings for each application type, including redirect URIs, are configured in **Platform configurations** in the Azure portal. Some platforms, like **Web** and **Single-page applications**, require you to manually specify a redirect URI. For other platforms, like mobile and desktop, you can select from redirect URIs generated for you when you configure their other settings.
-
-To configure application settings based on the platform or device you're targeting, follow these steps:
-
-1. In the Azure portal, in **App registrations**, select your application.
-1. Under **Manage**, select **Authentication**.
-1. Under **Platform configurations**, select **Add a platform**.
-1. Under **Configure platforms**, select the tile for your application type (platform) to configure its settings.
-
- :::image type="content" source="media/quickstart-register-app/portal-04-app-reg-03-platform-config.png" alt-text="Screenshot of the platform configuration pane in the Azure portal." border="false":::
-
- | Platform | Configuration settings |
- | -- | -- |
- | **Web** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform for standard web applications that run on a server. |
- | **Single-page application** | Enter a **Redirect URI** for your app. This URI is the location where the Microsoft identity platform redirects a user's client and sends security tokens after authentication.<br/><br/>Select this platform if you're building a client-side web app by using JavaScript or a framework like Angular, Vue.js, React.js, or Blazor WebAssembly. |
- | **iOS / macOS** | Enter the app **Bundle ID**. Find it in **Build Settings** or in Xcode in _Info.plist_.<br/><br/>A redirect URI is generated for you when you specify a **Bundle ID**. |
- | **Android** | Enter the app **Package name**. Find it in the _AndroidManifest.xml_ file. Also generate and enter the **Signature hash**.<br/><br/>A redirect URI is generated for you when you specify these settings. |
- | **Mobile and desktop applications** | Select one of the **Suggested redirect URIs**. Or specify a **Custom redirect URI**.<br/><br/>For desktop applications using embedded browser, we recommend<br/>`https://login.microsoftonline.com/common/oauth2/nativeclient`<br/><br/>For desktop applications using system browser, we recommend<br/>`http://localhost`<br/><br/>Select this platform for mobile applications that aren't using the latest Microsoft Authentication Library (MSAL) or aren't using a broker. Also select this platform for desktop applications. |
-
-1. Select **Configure** to complete the platform configuration.
-
-### Redirect URI restrictions
-
-There are some restrictions on the format of the redirect URIs you add to an app registration. For details about these restrictions, see [Redirect URI (reply URL) restrictions and limitations](reply-url.md).
-
-## Add credentials
-
-Credentials are used by [confidential client applications](msal-client-applications.md) that access a web API. Examples of confidential clients are web apps, other web APIs, or service-type and daemon-type applications. Credentials allow your application to authenticate as itself, requiring no interaction from a user at runtime.
-
-You can add both certificates and client secrets (a string) as credentials to your confidential client app registration.
--
-### Add a certificate
-
-Sometimes called a _public key_, a certificate is the recommended credential type because they're considered more secure than client secrets. For more information about using a certificate as an authentication method in your application, see [Microsoft identity platform application authentication certificate credentials](active-directory-certificate-credentials.md).
-
-1. In the Azure portal, in **App registrations**, select your application.
-1. Select **Certificates & secrets** > **Certificates** > **Upload certificate**.
-1. Select the file you want to upload. It must be one of the following file types: _.cer_, _.pem_, _.crt_.
-1. Select **Add**.
-
-### Add a client secret
-
-Sometimes called an _application password_, a client secret is a string value your app can use in place of a certificate to identity itself.
-
-Client secrets are considered less secure than certificate credentials. Application developers sometimes use client secrets during local app development because of their ease of use. However, you should use certificate credentials for any of your applications that are running in production.
-
-1. In the Azure portal, in **App registrations**, select your application.
-1. Select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Add a description for your client secret.
-1. Select an expiration for the secret or specify a custom lifetime.
- - Client secret lifetime is limited to two years (24 months) or less. You can't specify a custom lifetime longer than 24 months.
- - Microsoft recommends that you set an expiration value of less than 12 months.
-1. Select **Add**.
-1. _Record the secret's value_ for use in your client application code. This secret value is _never displayed again_ after you leave this page.
-
-For application security recommendations, see [Microsoft identity platform best practices and recommendations](identity-platform-integration-checklist.md#security).
--
-### Add a federated credential
-
-Federated identity credentials are a type of credential that allows workloads, such as GitHub Actions, workloads running on Kubernetes, or workloads running in compute platforms outside of Azure access Azure AD protected resources without needing to manage secrets using [workload identity federation](workload-identity-federation.md).
-
-To add a federated credential, follow these steps:
-
-1. In the Azure portal, in **App registrations**, select your application.
-1. Select **Certificates & secrets** > **Federated credentials** > **Add a credential**.
-1. In the **Federated credential scenario** drop-down box, select one of the supported scenarios, and follow the corresponding guidance to complete the configuration.
-
- - **Customer managed keys** for encrypt data in your tenant using Azure Key Vault in another tenant.
- - **GitHub actions deploying Azure resources** to [configure a GitHub workflow](workload-identity-federation-create-trust.md#github-actions) to get tokens for your application and deploy assets to Azure.
- - **Kubernetes accessing Azure resources** to configure a [Kubernetes service account](workload-identity-federation-create-trust.md#kubernetes) to get tokens for your application and access Azure resources.
- - **Other issuer** to configure an identity managed by an external [OpenID Connect provider](workload-identity-federation-create-trust.md#other-identity-providers) to get tokens for your application and access Azure resources.
-
-
-For more information, how to get an access token with a federated credential, check out the [Microsoft identity platform and the OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) article.
## Next steps
active-directory Reference Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-breaking-changes.md
Today, `?e= "f"&g=h` is parsed identically as `?e=f&g=h` - so `e` == `f`. Wit
**Effective date**: July 26, 2019
-**Endpoints impacted**: Both [v1.0](../azuread-dev/v1-oauth2-client-creds-grant-flow.md) and [v2.0](./v2-oauth2-client-creds-grant-flow.md)
+**Endpoints impacted**: Both v1.0 and [v2.0](./v2-oauth2-client-creds-grant-flow.md)
-**Protocol impacted**: [Client Credentials (app-only tokens)](../azuread-dev/v1-oauth2-client-creds-grant-flow.md)
+**Protocol impacted**: Client Credentials (app-only tokens)
A security change took effect on July 26, 2019 changing the way app-only tokens (via the client credentials grant) are issued. Previously, applications were allowed to get tokens to call any other app, regardless of presence in the tenant or roles consented to for that application. This behavior has been updated so that for resources (sometimes called web APIs) set to be single-tenant (the default), the client application must exist within the resource tenant. Existing consent between the client and the API is still not required, and apps should still be doing their own authorization checks to ensure that a `roles` claim is present and contains the expected value for the API.
active-directory Scopes Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scopes-oidc.md
The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-p
In this article, you'll learn about scopes and permissions in the identity platform.
-The following list shows are some examples of Microsoft web-hosted resources:
+The following list shows some examples of Microsoft web-hosted resources:
- Microsoft Graph: `https://graph.microsoft.com` - Microsoft 365 Mail API: `https://outlook.office.com` - Azure Key Vault: `https://vault.azure.net`
-The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
+The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources can also define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
- Read a user's calendar - Write to a user's calendar
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power BI Pro | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
-| Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)</br>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
+| Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) | | Project for Office 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) |
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
Previously updated : 08/17/2022 Last updated : 11/14/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a new tenant for your organization After you sign in to the Azure portal, you can create a new tenant for your organization. Your new tenant represents your organization and helps you to manage a specific instance of Microsoft cloud services for your internal and external users.
->[!Important]
->If users with the business need to create tenants are unable to create them, review your user settings page to ensure that **Tenant Creation** is not switched off. If it is switched off, reach out to your Global Administrator to provide those who need it with access to the Tenant Creator role.
+ ### To create a new tenant 1. Sign in to your organization's [Azure portal](https://portal.azure.com/).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
We're removing the multifactor authentication (MFA) server IP address from the [
**Service category:** Authentications (Logins) **Product capability:** User Authentication
-On July 26, 2019, we changed how we provide app-only tokens through the [client credentials grant](../azuread-dev/v1-oauth2-client-creds-grant-flow.md). Previously, apps could get tokens to call other apps, regardless of whether the client app was in the tenant. We've updated this behavior so single-tenant resources, sometimes called Web APIs, can only be called by client apps that exist in the resource tenant.
+On July 26, 2019, we changed how we provide app-only tokens through the [client credentials grant](../develop/v2-oauth2-client-creds-grant-flow.md). Previously, apps could get tokens to call other apps, regardless of whether the client app was in the tenant. We've updated this behavior so single-tenant resources, sometimes called Web APIs, can only be called by client apps that exist in the resource tenant.
If your app isn't located in the resource tenant, you'll get an error message that says, `The service principal named <app_name> was not found in the tenant named <tenant_name>. This can happen if the application has not been installed by the administrator of the tenant.` To fix this problem, you must create the client app service principal in the tenant, using either the [admin consent endpoint](../develop/v2-permissions-and-consent.md#using-the-admin-consent-endpoint) or [through PowerShell](../develop/howto-authenticate-service-principal-powershell.md), which ensures your tenant has given the app permission to operate within the tenant.
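A hedged Azure CLI sketch of that remediation follows; the article itself points to the admin consent endpoint or PowerShell, so treat this CLI equivalent as an assumption, and the tenant and application IDs shown are placeholders.

```azurecli
# Sign in to the resource tenant, then create a service principal for the
# client application so that it exists in that tenant.
az login --tenant <resource-tenant-id>
az ad sp create --id <client-application-id>
```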
active-directory How To Connect Group Writeback Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-enable.md
Group writeback requires enabling both the original and new versions of the feat
``` PowerShell Set-ADSyncScheduler -SyncCycleEnabled $true ```
+6. Run a full sync cycle if group writeback was previously configured and will not be configured in the Azure AD Connect wizard:
+ ``` PowerShell
+ Start-ADSyncSyncCycle -PolicyType Initial
+ ```
+ ### Enable group writeback by using the Azure AD Connect wizard If the original version of group writeback was not previously enabled, continue with the following steps:
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
- ## Configure and test Azure AD SSO for ServiceNow Configure and test Azure AD SSO with ServiceNow by using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ServiceNow.
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
description: Learn how to use the Azure CLI to create and Azure Active Directory
Previously updated : 07/29/2021 Last updated : 11/11/2021
# Integrate Azure Active Directory with Azure Kubernetes Service using the Azure CLI (legacy) > [!WARNING]
-> **The feature described in this document, Azure AD Integration (legacy), will be deprecated on February 29th 2024.
+> **The feature described in this document, Azure AD Integration (legacy), will be deprecated on June 1st, 2023.
> > AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client application. If you want to migrate follow the instructions [here][managed-aad-migrate].
aks Managed Cluster Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-cluster-snapshot.md
- Title: Use cluster snapshots to save and apply Azure Kubernetes Service (AKS) cluster configuration (preview)
-description: Learn how to use cluster snapshots to save and apply Azure Kubernetes Service (AKS) cluster configuration
---- Previously updated : 10/03/2022---
-# Use cluster snapshots to save and apply Azure Kubernetes Service cluster configuration (preview)
-
-Cluster snapshots allow you to save configuration from an Azure Kubernetes Service (AKS) cluster, which can then be used to easily apply the configuration to other clusters. Currently, we snapshot the following properties:
-- `ManagedClusterSKU`-- `EnableRbac`-- `KubernetesVersion`-- `LoadBalancerSKU`-- `NetworkMode`-- `NetworkPolicy`-- `NetworkPlugin`--
-## Prerequisite
--- An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-- The latest version of the [Azure CLI](/cli/azure/install-azure-cli) installed.-- Your cluster must be running successfully.-- Your cluster must have been created with the `AddonManagerV2Preview` and `CSIControllersV2Preview` custom header feature values:
- ```azurecli
- az aks create -g $RESOURCE_GROUP -n $CLUSTER_NAME --aks-custom-headers AKSHTTPCustomFeatures=AddonManagerV2Preview,AKSHTTPCustomFeatures=CSIControllersV2Preview
- ```
-
-### Install the `aks-preview` Azure CLI extension
-
-Install the latest version of the `aks-preview` Azure CLI extension using the following command:
-
-```azurecli
-az extension add --upgrade --name aks-preview
-```
-
-### Register the `ManagedClusterSnapshotPreview` feature flag
-
-To use the KEDA, you must enable the `ManagedClusterSnapshotPreview` feature flag on your subscription.
-
-```azurecli
-az feature register --name ManagedClusterSnapshotPreview --namespace Microsoft.ContainerService
-```
-
-You can check on the registration status by using the `az feature list` command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/ManagedClusterSnapshotPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
-
-## Take a snapshot of your cluster
-
-To begin, get the `id` of the cluster you want to take a snapshot of using `az aks show`:
-
-```azurecli-interactive
-az aks show -g $RESOURCE_GROUP -n $CLUSTER_NAME
-```
-
-Using the `id` you just obtained, create a snapshot using `az aks snapshot create`:
-
-```azurecli-interactive
-az aks snapshot create -g $RESOURCE_GROUp -n snapshot1 --cluster-id $CLUSTER_ID
-```
-
-Your output will look similar to the following example:
-
-```json
-{
- "creationData": {
- "sourceResourceId": $CLUSTER_ID
- },
- "id": "/subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedclustersnapshots/snapshot1",
- "location": "eastus2",
- "managedClusterPropertiesReadOnly": {
- "enableRbac": true,
- "kubernetesVersion": "1.22.6",
- "networkProfile": {
- "loadBalancerSku": "Standard",
- "networkMode": null,
- "networkPlugin": "kubenet",
- "networkPolicy": null
- },
- "sku": {
- "name": "Basic",
- "tier": "Paid"
- }
- },
- "name": "snapshot1",
- "resourceGroup": $RESOURCE_GROUP,
- "snapshotType": "ManagedCluster",
- "systemData": {
- "createdAt": "2022-04-21T00:47:49.041399+00:00",
- "createdBy": "user@contoso.com",
- "createdByType": "User",
- "lastModifiedAt": "2022-04-21T00:47:49.041399+00:00",
- "lastModifiedBy": "user@contoso.com",
- "lastModifiedByType": "User"
- },
- "tags": null,
- "type": "Microsoft.ContainerService/ManagedClusterSnapshots"
-}
-```
-
-## View a snapshot
-
-To list all available snapshots, use the `az aks snapshot list` command:
-
-```azurecli-interactive
-az aks snapshot list -g $RESOURCE_GROUP
-```
-
-To view details for an individual snapshot, reference it by name in the `az aks snapshot show command`. For example, to view the snapshot `snapshot1` created in the steps above:
-
-```azurecli-interactive
-az aks snapshot show -g $RESOURCE_GROUp -n snapshot1 -o table
-```
-
-Your output will look similar to the following example:
-
-```bash
-Name Location ResourceGroup Sku EnableRbac KubernetesVersion NetworkPlugin LoadBalancerSku
- - -- - --
-snapshot1 eastus2 qizhe-rg Paid True 1.22.6 kubenet Standard
-```
-
-## Delete a snapshot
-
-Removing a snapshot can be done by referencing the snapshot's name in the `az aks snapshot delete` command. For example, to delete the snapshot `snapshot1` created in the above steps:
-
-```azurecli-interactive
-az aks snapshot delete -g $RESOURCE_GROUP -n snapshot1
-```
-
-## Create a cluster from a snapshot
-
-New AKS clusters can be created based on the configuration captured in a snapshot. To do so, first obtain the `id` of the desired snapshot. Next, use `az aks create`, using the snapshot's `id` with the `--cluster-snapshot-id` flag. Be sure to include the `addonManagerV2` and `CSIControllersV2Preview` feature flag custom header values. For example:
-
-```azurecli-interactive
-az aks create -g $RESOURCE_GROUP -n aks-from-snapshot --cluster-snapshot-id "/subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedclustersnapshots/snapshot1" --aks-custom-headers AKSHTTPCustomFeatures=AddonManagerV2Preview,AKSHTTPCustomFeatures=CSIControllersV2Preview
-```
-
-> [!NOTE]
-> The cluster can be created with other allowed parameters that are not captured in the snapshot, such as `vm-sku-size` or `--node-count`. However, no configuration arguments for parameters that are part of the snapshot should be included. If the values passed in these arguments differs from the snapshot's values, cluster creation will fail.
-
-## Update or upgrade a cluster using a snapshot
-
-Clusters can also be updated and upgraded while using a snapshot by using the snapshot's `id` with the `--cluster-snapshot-id` flag:
--
-```azurecli-interactive
-az aks update -g $RESOURCE_GROUP -n aks-from-snapshot --cluster-snapshot-id "/subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedclustersnapshots/snapshot1" --aks-custom-headers AKSHTTPCustomFeatures=AddonManagerV2Preview,AKSHTTPCustomFeatures=CSIControllersV2Preview
-```
--
-```azurecli-interactive
-az aks upgrade -g $RESOURCE_GROUP -n aks-from-snapshot --cluster-snapshot-id "/subscriptions/$SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedclustersnapshots/snapshot1" --aks-custom-headers AKSHTTPCustomFeatures=AddonManagerV2Preview,AKSHTTPCustomFeatures=CSIControllersV2Preview
-```
-
-## Next steps
-- Learn [how to use node pool snapshots](./node-pool-snapshot.md)
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 11/01/2022 Last updated : 11/09/2022 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. > [!WARNING]
-> KMS only supports Konnectivity and [API Server Vnet Integration][api-server-vnet-integration].
+> KMS supports Konnectivity or [API Server Vnet Integration][api-server-vnet-integration].
> You can use `kubectl get po -n kube-system` to verify the results show that a konnectivity-agent-xxx pod is running. If there is, it means the AKS cluster is using Konnectivity. When using VNet integration, you can run the command `az aks cluster show -g -n` to verify the setting `enableVnetIntegration` is set to **true**. ## Limitations
aks Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-managed-identity.md
Title: Use a managed identity in Azure Kubernetes Service description: Learn how to use a system-assigned or user-assigned managed identity in Azure Kubernetes Service (AKS) Previously updated : 09/27/2022 Last updated : 11/08/2022 # Use a managed identity in Azure Kubernetes Service
A custom control plane managed identity enables access to be granted to the exis
> [!NOTE] > USDOD Central, USDOD East, USGov Iowa regions in Azure US Government cloud aren't currently supported.
->
+>
> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity]. If you don't have a managed identity, you should create one by running the [az identity][az-identity-create] command.
The output should resemble the following:
} ```
+Before creating the cluster, you need to [add the role assignment for control plane identity][add role assignment for control plane identity].
+ Run the following command to create a cluster with your existing identity: ```azurecli-interactive
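# A minimal sketch only; the resource names and the reduced parameter set below
# are placeholders and assumptions, not the article's exact command.
az aks create \
    --resource-group myResourceGroup \
    --name myManagedCluster \
    --enable-managed-identity \
    --assign-identity <identity-resource-id>
```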
Use [Azure Resource Manager templates ][aks-arm-template] to create a managed id
[Bring your own control plane managed identity]: use-managed-identity.md#bring-your-own-control-plane-managed-identity [Use a pre-created kubelet managed identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity [workload-identity-overview]: workload-identity-overview.md
-[aad-pod-identity]: use-azure-ad-pod-identity.md
+[aad-pod-identity]: use-azure-ad-pod-identity.md
+[add role assignment for control plane identity]: use-managed-identity.md#add-role-assignment-for-control-plane-identity
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
az aks nodepool add \
Mariner is an open-source Linux distribution available as an AKS container host. It provides high reliability, security, and consistency. Mariner only includes the minimal set of packages needed for running container workloads, which improves boot times and overall performance.
-You can add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.
+You can add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku CBLMariner`.
```azurecli az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \
- --os-sku mariner
+ --os-sku CBLMariner
``` ### Migrate Ubuntu nodes to Mariner Use the following instructions to migrate your Ubuntu nodes to Mariner nodes.
-1. Add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku mariner`.
+1. Add a Mariner node pool into your existing cluster using the `az aks nodepool add` command and specifying `--os-sku CBLMariner`.
> [!NOTE] > When adding a new Mariner node pool, you need to add at least one as `--mode System`. Otherwise, AKS won't allow you to delete your existing Ubuntu node pool.
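As a hedged illustration of the note above (resource and node pool names are placeholders), adding the Mariner pool in `System` mode might look like this:

```azurecli
# Add a Mariner node pool in System mode so the existing Ubuntu system node
# pool can be deleted afterwards.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name marinerpool \
    --os-sku CBLMariner \
    --mode System
```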
aks Use Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-network-policies.md
Azure Network Policy Manager(NPM) doesn't support IPv6. Otherwise, Azure NPM ful
>[!NOTE] > * Azure NPM pod logs will record an error if an unsupported policy is created.
+## Scale:
+
+With the current limits set on Azure NPM for Linux, it can scale up to 500 nodes and 40,000 pods. You may see OOM (out-of-memory) kills beyond this scale. Please reach out to us on [aks-acn-github] if you'd like to increase your memory limit.
+ ## Create an AKS cluster and enable Network Policy To see network policies in action, let's create an AKS cluster that supports network policy and then work on adding policies.
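As a minimal sketch of that step (cluster and resource group names are placeholders, and this assumes the Azure CNI plugin with Azure Network Policy Manager), creation might look like this:

```azurecli
# Create an AKS cluster with Azure CNI and the Azure network policy engine.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-policy azure
```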
To learn more about policies, see [Kubernetes network policies][kubernetes-netwo
[calico-support]: https://www.tigera.io/tigera-products/calico/ [calico-logs]: https://docs.projectcalico.org/maintenance/troubleshoot/component-logs [calico-aks-cleanup]: https://github.com/Azure/aks-engine/blob/master/docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
+[aks-acn-github]: https://github.com/Azure/azure-container-networking/issues
<!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli
aks Use Pod Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md
Last updated 03/25/2021
# Preview - Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) > [!Important]
-> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
+> The feature described in this article, pod security policy (preview), will be deprecated starting with Kubernetes version 1.21, and it will be removed in version 1.25. AKS will mark the pod security policy as Deprecated with the AKS API on 06-01-2023 and remove it in version 1.25. You can migrate pod security policy to pod security admission controller before the deprecation deadline.
After pod security policy (preview) is deprecated, you must have already migrated to Pod Security Admission controller or disabled the feature on any existing clusters using the deprecated feature to perform future cluster upgrades and stay within Azure support.
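As a hedged sketch of the migration target, Pod Security Admission is typically enabled per namespace through the standard `pod-security.kubernetes.io` labels; the namespace name and enforcement level below are placeholders, not values from this article.

```bash
# Enforce the "baseline" Pod Security Standard on an example namespace.
kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=baseline
```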
analysis-services Analysis Services Refresh Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-azure-automation.md
Title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation. -+ Last updated 12/01/2020
analysis-services Analysis Services Refresh Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-refresh-logic-app.md
Title: Refresh with Logic Apps for Azure Analysis Services models | Microsoft Docs description: This article describes how to code asynchronous refresh for Azure Analysis Services by using Azure Logic Apps. -+ Last updated 10/30/2019
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
Depending on the DDoS Protection plan you use, enable DDoS protection on the vir
### Enable DDoS protection on the API Management public IP address
-If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#disable-ddos-ip-protection-for-an-existing-public-ip-address).
+If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection Preview for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#disable-ddos-ip-protection-preview-for-an-existing-public-ip-address).
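The linked article uses PowerShell; a loosely equivalent Azure CLI sketch is shown below, assuming the `--ddos-protection-mode` parameter available in recent CLI versions and a placeholder public IP resource.

```azurecli
# Enable DDoS IP Protection on the public IP address used by API Management.
az network public-ip update \
    --resource-group myResourceGroup \
    --name myApimPublicIp \
    --ddos-protection-mode Enabled
```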
## Next steps
app-service App Service Undelete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-undelete.md
Restore-AzDeletedWebApp -TargetResourceGroupName <my_rg> -Name <my_app> -TargetA
> [!NOTE] > Deployment slots are not restored as part of your app. If you need to restore a staging slot, use the `-Slot <slot-name>` flag.
+> The cmdlet restores the original slot to the target app's production slot.
> By default `Restore-AzDeletedWebApp` will restore both your app configuration as well as any content to the target app. If you want to restore only content, use the `-RestoreContentOnly` flag with this cmdlet. >Restore only site content to the target app
The inputs for command are:
- **Name**: Name for the app, should be globally unique. - **ResourceGroupName**: Original resource group for the deleted app - **Slot**: Slot for the deleted app -- **RestoreContentOnly**: y default `Restore-AzDeletedWebApp` will restore both your app configuration as well any content. If you want to only restore content, you use the `-RestoreContentOnly` flag with this commandlet.
+- **RestoreContentOnly**: By default `Restore-AzDeletedWebApp` will restore both your app configuration as well any content. If you want to only restore content, you can use the `-RestoreContentOnly` flag with this commandlet.
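Alongside the PowerShell cmdlet discussed above, the Azure CLI offers similar functionality; the sketch below assumes the `az webapp deleted` command group and placeholder names, and the exact flags may differ by CLI version.

```azurecli
# List deleted apps to find the ID of the app to restore.
az webapp deleted list --resource-group <original_rg>

# Restore only the content of the deleted app into an existing target app.
az webapp deleted restore \
    --deleted-id <deleted_site_id> \
    --resource-group <target_rg> \
    --name <target_app> \
    --restore-content-only
```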
> [!NOTE] > If the app was hosted on and then deleted from an App Service Environment, it can be restored only if the corresponding App Service Environment still exists.
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
For your custom Windows image, you must choose the right [parent image (base ima
It takes some time to download a parent image during app start-up. However, you can reduce start-up time by using one of the following parent images that are already cached in Azure App Service: -- [https://mcr.microsoft.com/windows/servercore:ltsc2022](https://mcr.microsoft.com/product/windows/servercore/about)-- [https://mcr.microsoft.com/windows/servercore:ltsc2019](https://mcr.microsoft.com/product/windows/servercore/about)-- [https://mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2022-- [https://mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2019-- [https://mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-ltsc2022-- [https://mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-1809-- [https://mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-ltsc2022-- [https://mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-1809
+- [mcr.microsoft.com/windows/servercore:ltsc2022](https://mcr.microsoft.com/product/windows/servercore/about)
+- [mcr.microsoft.com/windows/servercore:ltsc2019](https://mcr.microsoft.com/product/windows/servercore/about)
+- [mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2022
+- [mcr.microsoft.com/dotnet/framework/aspnet](https://mcr.microsoft.com/product/dotnet/framework/aspnet/tags):4.8-windowsservercore-ltsc2019
+- [mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-ltsc2022
+- [mcr.microsoft.com/dotnet/runtime](https://mcr.microsoft.com/product/dotnet/runtime/tags):6.0-nanoserver-1809
+- [mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-ltsc2022
+- [mcr.microsoft.com/dotnet/aspnet](https://mcr.microsoft.com/product/dotnet/aspnet/tags):6.0-nanoserver-1809
::: zone-end
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 10/26/2022 Last updated : 11/11/2022
App Service can now automate migration of your App Service Environment v1 and v2
At this time, App Service Environment migrations to v3 using the migration feature are supported in the following regions:
+### Azure Public:
+ - Australia East - Australia Central - Australia Southeast
At this time, App Service Environment migrations to v3 using the migration featu
- East US - East US 2 - France Central
+- Germany North
- Germany West Central - Japan East - Korea Central
+- Korea South
- North Central US - North Europe - Norway East
+- Norway West
- South Central US - Switzerland North
+- Switzerland West
- UAE North - UK South - UK West
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
Title: App Service Environment overview
description: This article discusses the Azure App Service Environment feature of Azure App Service. Previously updated : 10/28/2022 Last updated : 11/14/2022
App Service Environment v3 is available in the following regions:
| Region | Single zone support | Availability zone support | Single zone support |
| -- | :--: | :-: | :-: |
| | App Service Environment v3 | App Service Environment v3 | App Service Environment v1/v2 |
-| Australia Central | | | ✅ |
-| Australia Central 2 | | | ✅ |
+| Australia Central | | | ✅ |
+| Australia Central 2 | | | ✅ |
| Australia East | ✅ | ✅ | ✅ |
| Australia Southeast | ✅ | | ✅ |
| Brazil South | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| East US | ✅ | ✅ | ✅ |
| East US 2 | ✅ | ✅ | ✅ |
| France Central | ✅ | ✅ | ✅ |
-| France South | | | ✅ |
-| Germany North | | | ✅ |
+| France South | | | ✅ |
+| Germany North | | | ✅ |
| Germany West Central | ✅ | ✅ | ✅ |
| Japan East | ✅ | ✅ | ✅ |
| Japan West | | | ✅ |
| Jio India West | | | ✅ |
| Korea Central | ✅ | ✅ | ✅ |
-| Korea South | | | ✅ |
+| Korea South | | | ✅ |
| North Central US | ✅ | | ✅ |
| North Europe | ✅ | ✅ | ✅ |
| Norway East | ✅ | ✅ | ✅ |
| Norway West | | | ✅ |
+| Qatar Central | ✅ | ✅ | |
| South Africa North | ✅ | ✅ | ✅ |
| South Africa West | | | ✅ |
| South Central US | ✅ | ✅ | ✅ |
-| South India | | | ✅ |
+| South India | | | ✅ |
| Southeast Asia | ✅ | ✅ | ✅ |
| Sweden Central | ✅ | ✅ | |
| Switzerland North | ✅ | ✅ | ✅ |
App Service Environment v3 is available in the following regions:
| West US 2 | ✅ | ✅ | ✅ |
| West US 3 | ✅ | ✅ | ✅ |
+\* Limited availability and no support for dedicated host deployments
+### Azure Government:

| Region | Single zone support | Availability zone support | Single zone support |
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
To see the entirety of the command output, drop the `--query` in the command.
## 5 - Generate the Database Schema
-To generate our database schema, set up a firewall rule on the SQL database server. This rule lets your local computer connect to Azure. For this step, you'll need to know your local computer's IP address. For more information about how to find the IP address, [see here](https://whatismyipaddress.com/).
+To generate our database schema, set up a firewall rule on the SQL database server. This rule lets your local computer connect to Azure. For this step, you'll need to know your local computer's IP address. Azure will attempt to detect your IP automatically and presents the option to add it for you, as seen in the steps below. For more information about how to find your IP address manually, [see here](https://whatismyipaddress.com/).
### [Azure portal](#tab/azure-portal)
az sql server firewall-rule create --resource-group msdocs-core-sql --server <yo
-Next, update the *appsettings.json* file in the sample project with the [connection string Azure SQL Database](#4connect-the-app-to-the-database). The update allows us to run migrations locally against our database hosted in Azure. Replace the username and password placeholders with the values you chose when creating your database.
+Next, update the name of the connection string in the `appsettings.json` file to match the `AZURE_SQL_CONNECTION` name generated by the service connector. When the app is deployed to Azure, the `localdb` connection string value will be overridden by the connection string stored in Azure.
```json
-"AZURE_SQL_CONNECTIONSTRING": "Data Source=<your-server-name>.database.windows.net,1433;Initial Catalog=coreDb;User ID=<username>;Password=<password>"
+"ConnectionStrings": {
+ "AZURE_SQL_CONNECTION": "Server=(localdb)\\mssqllocaldb;Trusted_Connection=True;MultipleActiveResultSets=true"
+ }
```
-Next, update the *Startup.cs* file the sample project by updating the existing connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`:
+Next, update the `Startup.cs` file in the sample project by changing the existing connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`. This change configures the `DbContext` to use the correct connection string in Azure and locally from the `appsettings.json` file.
```csharp
services.AddDbContext<MyDatabaseContext>(options => options.UseSqlServer(Configuration.GetConnectionString("AZURE_SQL_CONNECTIONSTRING")));
```
-From a local terminal, run the following commands to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database:
+From a local terminal, run the following commands to install the necessary CLI tools for Entity Framework Core, create an initial database migration file, and apply those changes to update the database. Make sure to pass in the connection string value you copied from the Azure SQL database for the `connection` parameter. The `connection` parameter overrides the value of the connection string that is configured for the `DbContext` in the `Startup.cs` file.
```dotnetcli
cd <sample-root>\DotNetCoreSqlDb
dotnet tool install -g dotnet-ef
dotnet ef migrations add InitialCreate
-dotnet ef database update
+dotnet ef database update --connection "<your-azure-sql-connection-string>"
```

After the migration finishes, the correct schema is created.
If you receive the error `Client with IP address xxx.xxx.xxx.xxx is not allowed
## 6 - Deploy to the App Service
-That we're able to create the schema in the database means that our .NET app can connect to the Azure database successfully with the new connection string. Remember that the service connector already configured the `AZURE_SQL_CONNECTIONSTRING` connection string in our App Service app. We're now ready to deploy our .NET app to the App Service.
+Being able to create the schema in the database means that our .NET app can connect to the Azure database successfully with the new connection string. Remember that the service connector already configured the `AZURE_SQL_CONNECTIONSTRING` connection string in our App Service app. We're now ready to deploy our .NET app to the App Service. When the app is deployed, the `AZURE_SQL_CONNECTION` configuration applied to the App Service by the Service Connector will override the `localdb` connection string with the same name in the `appsettings.json` file.
### [Deploy using Visual Studio](#tab/visualstudio-deploy)
attestation Author Sign Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/author-sign-policy.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Azure Diagnostic Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/azure-diagnostic-monitoring.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/basic-concepts.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Claim Rule Grammar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-rule-grammar.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Claim Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/claim-sets.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-examples.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Policy Signer Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-signer-examples.md
Previously updated : 08/31/2020 Last updated : 11/14/2022
attestation Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/private-endpoint-powershell.md
Previously updated : 03/26/2021 Last updated : 11/14/2022
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
From the Azure portal, follow these steps:
| Parameter | Description | Example |
|--|--|--|
| Separator | The separator is the character parsed in your imported configuration file to separate key-values that will be added to your configuration store. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*. | *;* |
- | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. The entered prefix will be appended to the beginning of every key you import from this file. | *TestApp:* |
+ | Prefix | Optional. A key prefix is the beginning part of a key-value's "key" property. Prefixes can be used to manage groups of key-values in a configuration store. The entered prefix will be appended to the front of the "key" property of every key-value you import from this file. | *TestApp:* |
| Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* | | Content type | Optional. Indicate if you're importing a JSON file or Key Vault references. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* | 1. Select **Apply** to proceed with the import.
-You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp". The separator ":" is used and all the keys you've imported have content type set as "JSON".
+You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp". The separator ":" is used and all the key-values you've imported have content type set as "JSON".
#### [Azure CLI](#tab/azure-cli)
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| Parameter | Description | Example |
|--|--|--|
| `--separator` | Optional. The separator is the delimiter for flattening the key-values to Json/Yaml. It's required for exporting hierarchical structure and will be ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` |
- | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` |
+ | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of key-values in a configuration store. This prefix will be appended to the front of the "key" property of each imported key-value. | `TestApp:` |
| `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `prod` | | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. | `application/json` |
- Example: import all keys and feature flags from a JSON file, apply the label "prod", and append the prefix "TestApp". Add the "application/json" content type.
+ Example: import all key-values and feature flags from a JSON file, apply the label "prod", and append the prefix "TestApp". Add the "application/json" content type.
```azurecli
az appconfig kv import --name my-app-config-store --source file --path D:/abc.json --format json --separator ; --prefix TestApp: --label prod --content-type application/json
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
:::image type="content" source="./media/import-export/continue-import-file-prompt.png" alt-text="Screenshot of the CLI. Import from file confirmation prompt.":::
-You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp:". The separator ";" is used and all keys that you have imported have content type set as "JSON".
+You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp:". The separator ";" is used and all key-values that you have imported have content type set as "JSON".
For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
From the Azure portal, follow these steps:
| Parameter | Description | Example | |--|-||
- | From label | Select at least one label to import values with the corresponding labels. **Select all** will import keys with any label, and **(No label)** will restrict the import to keys with no label. | *prod* |
- | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+ | From label | Select at least one label to import values with the corresponding labels. **Select all** will import key-values with any label, and **(No label)** will restrict the import to key-values with no label. | *prod* |
+ | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the key-values in the selected configuration store. Format: "YYYY-MM-DDThh:mm:ssZ". This field defaults to the current point in time of the key-values when left empty. | *07/28/2022 12:00:00 AM* |
| Override default key-value labels | Optional. By default, imported items use their current label. Check the box and enter a label to override these defaults with a custom label. | *new* |
| Override default key-value content type | Optional. By default, imported items use their current content type. Check the box and select **Key Vault Reference** or **JSON (application/json)** under **Content type** to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. Default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | *JSON (application/json)* |

1. Select **Apply** to proceed with the import.
-You've imported keys and feature flags with the "prod" label from an App Configuration store on January 28, 2021 at 12 AM, and have assigned them the label "new". All keys that you have imported have content type set as "JSON".
+You've imported key-values and feature flags with the "prod" label from an App Configuration store on January 28, 2021 at 12 AM, and have assigned them the label "new". All key-values that you have imported have content type set as "JSON".
#### [Azure CLI](#tab/azure-cli)
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| `--name` | Enter the name of the App Configuration store you want to import data into | `my-app-config-store` | | `--source` | Enter `appconfig` to indicate that you're importing data from an App Configuration store. | `appconfig` | | `--src-name` | Enter the name of the App Configuration store you want to import data from. | `my-source-app-config` |
- | `--src-label`| Restrict your import to keys with a specific label. If you don't use this parameter, only keys with a null label will be imported. Supports star sign as filter: enter `*` for all labels; `abc*` for all labels with abc as prefix.| `prod` |
+ | `--src-label`| Restrict your import to key-values with a specific label. If you don't use this parameter, only key-values with a null label will be imported. Supports star sign as filter: enter `*` for all labels; `abc*` for all labels with abc as prefix.| `prod` |
1. Optionally add the following parameters:
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `new` |
| `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. Default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | `application/json` |
- Example: import keys-values and feature flags with the label "prod" from another App Configuration on January 28, 2021 at 1PM, and assign them the label "new". Add the "application/json" content type.
+ Example: import key-values and feature flags with the label "prod" from another App Configuration, and assign them the label "new". Add the "application/json" content type.
```azurecli
az appconfig kv import --name my-app-config-store --source appconfig --src-name my-source-app-config --src-label prod --label new --content-type application/json
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
:::image type="content" source="./media/import-export/continue-import-app-configuration-prompt.png" alt-text="Screenshot of the CLI. Import from App Configuration confirmation prompt.":::
-You've imported keys with the label "prod" from an App Configuration store and have assigned them the label "new". All keys that you have imported have content type set as "JSON".
+You've imported key-values with the label "prod" from an App Configuration store and have assigned them the label "new". All key-values that you have imported have content type set as "JSON".
For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
From the Azure portal:
| Resource | Select the App Service that contains the configuration you want to import. | *my-app-service* | > [!NOTE]
- > A message is displayed, indicating the number of keys that were successfully fetched from the source App Service resource.
+ > A message is displayed, indicating the number of key-values that were successfully fetched from the source App Service resource.
1. Fill out the next part of the form: | Parameter | Description | Example | |--|||
- | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | *TestApp:* |
+ | Prefix | Optional. A key prefix is the beginning part of a key-value's "key" property. Prefixes can be used to manage groups of key-values in a configuration store. This prefix will be appended to the front of the "key" property of each imported key-value. | *TestApp:* |
| Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* | | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* | 1. Select **Apply** to proceed with the import.
-You've imported all application settings from an App Service as key-values and assigned them the label "prod" and the prefix "TestApp". All keys that you have imported have content type set as "JSON".
+You've imported all application settings from an App Service as key-values and assigned them the label "prod" and the prefix "TestApp". All key-values that you have imported have content type set as "JSON".
#### [Azure CLI](#tab/azure-cli)
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| Parameter | Description | Example | ||||
- | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` |
+ | `--prefix` | Optional. A key prefix is the beginning part of a key-value's "key" property. Prefixes can be used to manage groups of key-values in a configuration store. This prefix will be appended to the front of the "key" property of each imported key-value. | `TestApp:` |
| `--label` | Optional. Enter a label that will be assigned to your imported key-values. If you don't specify a label, the null label will be assigned to your key-values. | `prod` | | `--content-type` | Optional. Enter appconfig/kvset or application/json to state that the imported content consists of a Key Vault reference or a JSON file. | `application/json` |
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
:::image type="content" source="./media/import-export/continue-import-app-service-prompt.png" alt-text="Screenshot of the CLI. Import from App Service confirmation prompt.":::
-You've imported all application settings from your App Service as key-values, have assigned them the label "prod", and have added a "TestApp:" prefix. All keys that you have imported have content type set as "JSON".
+You've imported all application settings from your App Service as key-values, have assigned them the label "prod", and have added a "TestApp:" prefix. All key-values that you have imported have content type set as "JSON".
For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
From the [Azure portal](https://portal.azure.com), follow these steps:
| Parameter | Description | Example | |--|--|--|
- | Prefix | Optional. This prefix will be trimmed from the keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | *TestApp:* |
+ | Prefix | Optional. This prefix will be trimmed from each key-value's "key" property. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of key-values in a configuration store. | *TestApp:* |
| From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, by default only key-values with the "No Label" label will be exported. See note below. | *prod* |
- | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+ | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the key-values in the selected configuration store. Format: "YYYY-MM-DDThh:mm:ssZ". This field defaults to the current point in time of the key-values when left empty. | *07/28/2022 12:00:00 AM* |
| File type | Select the type of file you're exporting between Yaml, Properties or Json. | *JSON* |
| Separator | The separator is the delimiter for flattening the key-values to Json/Yaml. It supports the configuration's hierarchical structure and doesn't apply to property files and feature flags. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*, or *(No separator)*. | *;* |

> [!IMPORTANT]
- > If you don't select a *From label*, only keys without labels will be exported. To export a key-value with a label, you must select its label. Note that you can only select one label per export in portal, in case you want to export the key-values with all labels specified please use CLI.
+ > If you don't select a *From label*, only key-values without labels will be exported. To export a key-value with a label, you must select its label. Note that you can only select one label per export in the portal; if you want to export key-values with more than one label, use the CLI.
1. Select **Export** to finish the export.
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| `--destination` | Enter `file` to indicate that you're exporting data to a file. | `file` | | `--path` | Enter the path where you want to save the file. | `C:/Users/john/Downloads/data.json` | | `--format` | Enter `yaml`, `properties` or `json` to indicate the format of the file you want to export. | `json` |
- | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels in account. | `prod` |
+ | `--label` | Enter a label to export key-values and feature flags with this label. If you don't specify a label, by default, you will only export key-values and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels in account. | `prod` |
> [!IMPORTANT]
- > If you don't select a label, only keys without labels will be exported. To export a key-value with a label, you must select its label.
+ > If you don't select a label, only key-values without labels will be exported. To export a key-value with a label, you must select its label.
1. Optionally also add the following parameters:

   | Parameter | Description | Example |
   |--|--|--|
   | `--separator` | Optional. The separator is the delimiter for flattening the key-values to Json/Yaml. It's required for exporting hierarchical structure and will be ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` |
- | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | `TestApp:` |
+ | `--prefix` | Optional. Prefix to be trimmed from each key-value's "key" property. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of key-values in a configuration store. Prefix will be ignored for feature flags. | `TestApp:` |
- Example: export all keys and feature flags with label "prod" to a JSON file.
+ Example: export all key-values and feature flags with label "prod" to a JSON file.
```azurecli
az appconfig kv export --name my-app-config-store --label prod --destination file --path D:/abc.json --format json --separator ; --prefix TestApp:
From the Azure portal, follow these steps:
1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another source App Configuration store. > [!NOTE]
- > A message is displayed on screen, indicating that the keys were fetched successfully.
+ > A message is displayed on screen, indicating that the key-values were fetched successfully.
1. Fill out the next part of the form: | Parameter | Description | Example | |--|-||
- | From label | Select at least one label to export values with the corresponding labels. **Select all** will export keys with any label, and **(No label)** will restrict the export to keys with no label. | *prod* |
- | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+ | From label | Select at least one label to export values with the corresponding labels. **Select all** will export key-values with any label, and **(No label)** will restrict the export to key-values with no label. | *prod* |
+ | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the key-values in the selected configuration store. Format: "YYYY-MM-DDThh:mm:ssZ". This field defaults to the current point in time of the key-values when left empty. | *07/28/2022 12:00:00 AM* |
| Override default key-value labels | Optional. By default, imported items use their current label. Check the box and enter a label to override these defaults with a custom label. | *new* | 1. Select **Apply** to proceed with the export.
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` | | `--destination` | Enter `appconfig` to indicate that you're exporting data to an App Configuration store. | `appconfig` | | `--dest-name` | Enter the name of the App Configuration store you want to export data to. | `my-other-app-config-store` |
- | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels in account. | `prod` |
+ | `--label` | Enter a label to export key-values and feature flags with this label. If you don't specify a label, by default, you will only export key-values and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels in account. | `prod` |
> [!IMPORTANT]
- > If the keys you want to export have labels, you must use the command `--label` and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma sign (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
+ > If the key-values you want to export have labels, you must use the command `--label` and enter the corresponding labels. If you don't select a label, only key-values without labels will be exported. Use a comma sign (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
1. Optionally also add the following parameter:
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
||--|--| | `--dest-label` | Optional. Enter a destination label, to assign this label to exported key-values. | `new` |
- Example: export keys and feature flags with the label "prod" to another App Configuration store and add the destination label "new".
+ Example: export key-values and feature flags with the label "prod" to another App Configuration store and add the destination label "new".
```azurecli
az appconfig kv export --name my-app-config-store --destination appconfig --dest-name my-other-app-config-store --dest-label new --label prod
From the Azure portal, follow these steps:
| Resource group | Select a resource group that contains the App Service with configuration to export. | *my-resource-group* | | Resource | Select the App Service that contains the configuration you want to export. | *my-app-service* |
-1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another source App Service.
+1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another target App Service resource.
1. Optionally fill out the next part of the form: | Parameter | Description | Example | |--|-||
- | Prefix | Optional. This prefix will be trimmed from the imported keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | *TestApp:* |
- | From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values with the "No label" label will be exported. See note below. | *prod* |
- | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
- | Content type | Optional. Check the box **Override default key-value content types** and select **Key Vault Reference** or **JSON** under **Content type** to state that the imported content consists of a Key Vault reference or a JSON file. | *JSON (application/json)* |
-
- > [!IMPORTANT]
- > If the keys you want to export have labels, you must use the command `--label` and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma sign (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
+ | Prefix | Optional. This prefix will be trimmed from each exported key-value's "key" property. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of key-values in a configuration store. Prefix will be ignored for feature flags. | *TestApp:* |
+ | Export as reference | Optional. Check to export key-values to App Service as App Configuration references. [Learn more](../app-service/app-service-configuration-references.md) |
+ | At a specific time | Optional. Fill out to export key-values from a specific point in time. This is the point in time of the key-values in the selected configuration store. Format: "YYYY-MM-DDThh:mm:ssZ". This field defaults to the current point in time of the key-values when left empty. | *07/28/2022 12:00:00 AM* |
+ | From label | Optional. Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values with the "No label" label will be exported. | *prod* |
1. Select **Apply** to proceed with the export.
-You've exported key-values that have the "prod" label from an App Service resource, at their state from 07/28/2021 12:00:00 AM, and have trimmed the prefix "TestApp". The keys have been exported with a content type in JSON format.
+You've exported key-values that have the "prod" label from an App Service resource, at their state from 07/28/2021 12:00:00 AM, and have trimmed the prefix "TestApp". The key-values have been exported with a content type in JSON format.
+
+If you checked the box to export key-values as references, the exported key-values will be indicated as App Configuration references in the "Source" column of your App Service resource configuration settings.
+ #### [Azure CLI](#tab/azure-cli)
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` | | `--destination` | Enter `appservice` to indicate that you're exporting data to App Service. | `appservice` | | `--appservice-account` | Enter the App Service's ARM ID or use the name of the App Service, if it's in the same subscription and resource group as the App Configuration. | `/subscriptions/123/resourceGroups/my-as-resource-group/providers/Microsoft.Web/sites/my-app-service` or `my-app-service` |
- | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels in account. | `prod` |
-
- > [!IMPORTANT]
- > If the keys you want to export have labels, you must use the command `--label` and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma sign (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
+ | `--label` | Optional. Enter a label to export key-values and feature flags with this label. If you don't specify a label, by default, you will only export key-values and feature flags with no label. | `prod` |
To get the value for `--appservice-account`, use the command `az webapp show --resource-group <resource-group> --name <resource-name>`.
From the Azure CLI, follow the steps below. If you don't have the Azure CLI inst
| Parameter | Description | Example | ||-|--|
- | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | `TestApp:` |
+ | `--prefix` | Optional. Prefix to be trimmed from an exported key-value's "key" property. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of key-values in a configuration store. | `TestApp:` |
Example: export all key-values with the label "prod" to an App Service application and trim the prefix "TestApp".

```azurecli
az appconfig kv export --name my-app-config-store --destination appservice --appservice-account /subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service/config/web --label prod --prefix TestApp:
```
-1. The command line displays a list of key-values getting exported to the file. Confirm the export by selecting `y`.
+ The command line displays a list of key-values getting exported to an App Service resource. Confirm the export by selecting `y`.
:::image type="content" source="./media/import-export/continue-export-app-service-prompt.png" alt-text="Screenshot of the CLI. Export to App Service confirmation prompt.":::
-You've exported all keys with the label "prod" to an Azure App Service resource and have trimmed the prefix "TestApp:".
+ You've exported all key-values with the label "prod" to an Azure App Service resource and have trimmed the prefix "TestApp:".
-For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
+
+1. Optionally specify a flag to export as an App Configuration Reference.
+
+ | Parameter | Description |
+ ||-|
+ | `--export-as-reference` `-r` | Optional. Specify whether key-values are exported to App Service as App Configuration references. [Learn more](../app-service/app-service-configuration-references.md). |
+
+ Example: export all key-values with the label "prod" as app configuration references to an App Service application.
+
+ ```azurecli
+ az appconfig kv export --name my-app-config-store --destination appservice --appservice-account "/subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service" --label prod --export-as-reference
+ ```
+ The command line displays a list of key-values getting exported as app configuration references to an App Service resource. Confirm the export by selecting `y`.
+
+ :::image type="content" source="./media/import-export/export-app-service-reference-cli-preview.png" alt-text="Screenshot of the CLI. Export App Configuration reference to App Service confirmation prompt.":::
+
+ You've exported all key-values with the label "prod" as app configuration references to an Azure App Service resource. In your App Service resource, the imported key-values will be indicated as App Configuration references in the "Source" column.
+
+ :::image type="content" source="./media/import-export/export-app-service-reference-value.png" alt-text="Screenshot of App Service configuration settings. Exported App Configuration reference in App Service.":::
+
+For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv#az-appconfig-kv-export).
## Error messages
-You may encounter the following error messages when importing or exporting App Configuration keys:
+You may encounter the following error messages when importing or exporting App Configuration key-values:
- **Selected file must be between 1 and 2097152 bytes.**: your file is too large. Select a smaller file.
-- **Public access is disabled for your store or you are accessing from a private endpoint that is not in the store's private endpoint configurations**. To import keys from an App Configuration store, you need to have access to that store. If necessary, enable public access for the source store or access it from an approved private endpoint. If you just enabled public access, wait up to 5 minutes for the cache to refresh.
+- **Public access is disabled for your store or you are accessing from a private endpoint that is not in the store's private endpoint configurations**. To import key-values from an App Configuration store, you need to have access to that store. If necessary, enable public access for the source store or access it from an approved private endpoint. If you just enabled public access, wait up to 5 minutes for the cache to refresh.
## Next steps
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
public class GetToDoItems {
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request, @SQLInput(
+ name = "toDoItems",
commandText = "SELECT * FROM dbo.ToDo", commandType = "Text", connectionStringSetting = "SqlConnectionString")
public class GetToDoItem {
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request, @SQLInput(
+ name = "toDoItems",
commandText = "SELECT * FROM dbo.ToDo", commandType = "Text", parameters = "@Id={Query.id}",
public class DeleteToDo {
authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request, @SQLInput(
+ name = "toDoItems",
commandText = "dbo.DeleteToDo", commandType = "StoredProcedure", parameters = "@Id={Query.id}",
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
| **commandText** | Required. The Transact-SQL query command or name of the stored procedure executed by the binding. | | **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database against which the query or stored procedure is being executed. This value isn't the actual connection string and must instead resolve to an environment variable name. | | **commandType** | Required. A [CommandType](/dotnet/api/system.data.commandtype) value, which is ["Text"](/dotnet/api/system.data.commandtype#fields) for a query and ["StoredProcedure"](/dotnet/api/system.data.commandtype#fields) for a stored procedure. |
+|**name** | Required. The unique name of the function binding. |
| **parameters** | Optional. Zero or more parameter values passed to the command during execution as a single string. Must follow the format `@param1=param1,@param2=param2`. Neither the parameter name nor the parameter value can contain a comma (`,`) or an equals sign (`=`). | ::: zone-end
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
More samples for the Azure SQL output binding are available in the [GitHub repos
This section contains the following examples: * [HTTP trigger, write a record to a table](#http-trigger-write-record-to-table-java)
-<!-- * [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-java) -->
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-java)
The examples refer to a `ToDoItem` class (in a separate file `ToDoItem.java`) and a corresponding database table:
public class PostToDo {
public HttpResponseMessage run( @HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request, @SQLOutput(
+ name = "toDoItem",
commandText = "dbo.ToDo", connectionStringSetting = "SqlConnectionString") OutputBinding<ToDoItem> output) throws JsonParseException, JsonMappingException, JsonProcessingException {
public class PostToDo {
} ```
-<!-- commented out until issue with java library resolved
- <a id="http-trigger-write-to-two-tables-java"></a> ### HTTP trigger, write to two tables
The second table, `dbo.RequestLog`, corresponds to the following definition:
```sql
CREATE TABLE dbo.RequestLog (
- Id int identity(1,1) primary key,
- RequestTimeStamp datetime2 not null,
- ItemCount int not null
+ Id INT IDENTITY(1,1) PRIMARY KEY,
+ RequestTimeStamp DATETIME2 NOT NULL DEFAULT(GETDATE()),
+ ItemCount INT NOT NULL
)
```
public class RequestLog {
```java
-module.exports = async function (context, req) {
- context.log('JavaScript HTTP trigger and SQL output binding function processed a request.');
- context.log(req.body);
+package com.function;
- const newLog = {
- RequestTimeStamp = Date.now(),
- ItemCount = 1
- }
+import java.util.*;
+import com.microsoft.azure.functions.annotation.*;
+import com.microsoft.azure.functions.*;
+import com.microsoft.azure.functions.sql.annotation.SQLOutput;
+import com.fasterxml.jackson.core.JsonParseException;
+import com.fasterxml.jackson.core.JsonProcessingException;
+import com.fasterxml.jackson.databind.JsonMappingException;
+import com.fasterxml.jackson.databind.ObjectMapper;
- if (req.body) {
- context.bindings.todoItems = req.body;
- context.bindings.requestLog = newLog;
- context.res = {
- body: req.body,
- mimetype: "application/json",
- status: 201
- }
- } else {
- context.res = {
- status: 400,
- body: "Error reading request body"
- }
+import java.util.Optional;
+
+public class PostToDoWithLog {
+ @FunctionName("PostToDoWithLog")
+ public HttpResponseMessage run(
+ @HttpTrigger(name = "req", methods = {HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
+ @SQLOutput(
+ name = "toDoItem",
+ commandText = "dbo.ToDo",
+ connectionStringSetting = "SqlConnectionString")
+ OutputBinding<ToDoItem> output,
+ @SQLOutput(
+ name = "requestLog",
+ commandText = "dbo.RequestLog",
+ connectionStringSetting = "SqlConnectionString")
+ OutputBinding<RequestLog> outputLog,
+ final ExecutionContext context) throws JsonParseException, JsonMappingException, JsonProcessingException {
+ context.getLogger().info("Java HTTP trigger processed a request.");
+
+ String json = request.getBody().get();
+ ObjectMapper mapper = new ObjectMapper();
+ ToDoItem newToDo = mapper.readValue(json, ToDoItem.class);
+ newToDo.Id = UUID.randomUUID();
+ output.setValue(newToDo);
+
+ RequestLog newLog = new RequestLog();
+ newLog.ItemCount = 1;
+ outputLog.setValue(newLog);
+
+ return request.createResponseBuilder(HttpStatus.CREATED).header("Content-Type", "application/json").body(output).build();
} }
-``` -->
-
+```
::: zone-end
In the [Java functions runtime library](/java/api/overview/azure/functions/runti
||| | **commandText** | Required. The name of the table being written to by the binding. | | **connectionStringSetting** | Required. The name of an app setting that contains the connection string for the database to which data is being written. This isn't the actual connection string and must instead resolve to an environment variable.|
+|**name** | Required. The unique name of the function binding. |
::: zone-end
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Add the Java library for SQL bindings to your functions project with an update t
<dependency>
    <groupId>com.microsoft.azure.functions</groupId>
    <artifactId>azure-functions-java-library-sql</artifactId>
- <version>0.1.0</version>
+ <version>0.1.1</version>
</dependency>
```
azure-functions Functions Bindings Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md
The following table indicates when and how you register bindings.
By default, extension bundles are used by Java, JavaScript, PowerShell, Python, C# script, and Custom Handler function apps to work with binding extensions. In cases where extension bundles can't be used, you can explicitly install binding extensions with your function app project. Extension bundles are supported for version 2.x and later version of the Functions runtime.
-Extension bundles are a way to add a pre-defined set of compatible set of binding extensions to your function app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
+Extension bundles are a way to add a pre-defined set of compatible binding extensions to your function app. Extension bundles are versioned. Each version contains a specific set of binding extensions that are verified to work together. Select a bundle version based on the extensions that you need in your app.
When you create a non-.NET Functions project from tooling or in the portal, extension bundles are already enabled in the app's *host.json* file.
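For reference, a typical *host.json* entry for an extension bundle looks something like the following sketch. The bundle ID is the standard one; the version range shown is only illustrative, so choose the range that provides the extension versions your app needs.

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  }
}
```

The `version` range pins the app to a major bundle line while still picking up compatible updates within that line.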
azure-functions Functions Bindings Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md
Title: Azure Blob storage trigger and bindings for Azure Functions
description: Learn to use the Azure Blob storage trigger and bindings in Azure Functions. Previously updated : 03/04/2022 Last updated : 11/11/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This extension version is available from the extension bundle v3 by adding the f
To learn more, see [Update your extensions]. - # [Functions 2.x and higher](#tab/functionsv2/csharp-script) You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
You can add this version of the extension from the extension bundle v3 by adding
To learn more, see [Update your extensions]. - # [Bundle v2.x](#tab/extensionv2) You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
azure-functions Functions Bindings Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue.md
Title: Azure Queue storage trigger and bindings for Azure Functions overview description: Understand how to use the Azure Queue storage trigger and output binding in Azure Functions. Previously updated : 03/04/2022 Last updated : 11/11/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers
This extension version is available from the extension bundle v3 by adding the f
To learn more, see [Update your extensions]. - # [Functions 2.x+](#tab/functionsv2/csharp-script) You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
You can add this version of the extension from the preview extension bundle v3 b
To learn more, see [Update your extensions]. - # [Bundle v2.x](#tab/extensionv2) You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
azure-functions Functions Bindings Storage Table Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-input.md
Title: Azure Tables input bindings for Azure Functions description: Understand how to use Azure Tables input bindings in Azure Functions. Previously updated : 03/04/2022 Last updated : 11/11/2022 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Tables input bindings for Azure Functions
-Use the Azure Tables input binding to read a table in an Azure Storage or Azure Cosmos DB account.
+Use the Azure Tables input binding to read a table in [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) or [Azure Table Storage](../storage/tables/table-storage-overview.md).
For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md).
C# script is used primarily when creating C# functions in the Azure portal.
Choose a version to see examples for the mode and version.
-# [Combined Azure Storage extension](#tab/storage-extension/in-process)
+# [Azure Tables extension](#tab/table-api/in-process)
The following example shows a [C# function](./functions-dotnet-class-library.md) that reads a single table row. For every message sent to the queue, the function will be triggered.
The row key value `{queueTrigger}` binds the row key to the message metadata, wh
```csharp public class TableStorage {
- public class MyPoco
+ public class MyPoco : Azure.Data.Tables.ITableEntity
{
+ public string Text { get; set; }
+ public string PartitionKey { get; set; } public string RowKey { get; set; }
- public string Text { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
} + [FunctionName("TableInput")] public static void TableInput( [QueueTrigger("table-items")] string input,
public class TableStorage
} ```
-Use a `CloudTable` method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:
+Use a `TableClient` method parameter to read the table by using the Azure SDK. Here's an example of a function that queries an Azure Functions log table:
```csharp using Microsoft.Azure.WebJobs;
-using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
-using Microsoft.Azure.Cosmos.Table;
+using Azure.Data.Tables;
using System; using System.Threading.Tasks;-
+using Azure;
namespace FunctionAppCloudTable2 {
- public class LogEntity : TableEntity
+ public class LogEntity : ITableEntity
{ public string OriginalName { get; set; }+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
} public static class CloudTableDemo { [FunctionName("CloudTableDemo")] public static async Task Run(
- [TimerTrigger("0 */1 * * * *")] TimerInfo myTimer,
- [Table("AzureWebJobsHostLogscommon")] CloudTable cloudTable,
+ [TimerTrigger("0 */1 * * * *")] TimerInfo myTimer,
+ [Table("AzureWebJobsHostLogscommon")] TableClient tableClient,
ILogger log) { log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");-
- TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
- TableQuery.CombineFilters(
- TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
- "FD2"),
- TableOperators.And,
- TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan,
- "t")));
-
- // Execute the query and loop through the results
- foreach (LogEntity entity in
- await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
+ AsyncPageable<LogEntity> queryResults = tableClient.QueryAsync<LogEntity>(filter: $"PartitionKey eq 'FD2' and RowKey gt 't'");
+ await foreach (LogEntity entity in queryResults)
{
- log.LogInformation(
- $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
+ log.LogInformation($"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
} } } } ```
+For more information about how to use `TableClient`, see the [Azure.Data.Tables API Reference](/dotnet/api/azure.data.tables.tableclient).
-For more information about how to use CloudTable, see [Get started with Azure Table storage](../cosmos-db/tutorial-develop-table-dotnet.md).
-
-If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-
-# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
+# [Combined Azure Storage extension](#tab/storage-extension/in-process)
The following example shows a [C# function](./functions-dotnet-class-library.md) that reads a single table row. For every message sent to the queue, the function will be triggered.
The row key value `{queueTrigger}` binds the row key to the message metadata, wh
```csharp public class TableStorage {
- public class MyPoco : ITableEntity
+ public class MyPoco
{
- public string Text { get; set; }
- public string PartitionKey { get; set; } public string RowKey { get; set; }
- public DateTimeOffset? Timestamp { get; set; }
- public ETag ETag { get; set; }
+ public string Text { get; set; }
} - [FunctionName("TableInput")] public static void TableInput( [QueueTrigger("table-items")] string input,
public class TableStorage
} ```
-Use a `TableClient` method parameter to read the table by using the Azure SDK. Here's an example of a function that queries an Azure Functions log table:
+Use a `CloudTable` method parameter to read the table by using the Azure Storage SDK. Here's an example of a function that queries an Azure Functions log table:
```csharp using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
-using Azure.Data.Tables;
+using Microsoft.Azure.Cosmos.Table;
using System; using System.Threading.Tasks;
-using Azure;
+ namespace FunctionAppCloudTable2 {
- public class LogEntity : ITableEntity
+ public class LogEntity : TableEntity
{ public string OriginalName { get; set; }-
- public string PartitionKey { get; set; }
- public string RowKey { get; set; }
- public DateTimeOffset? Timestamp { get; set; }
- public ETag ETag { get; set; }
} public static class CloudTableDemo { [FunctionName("CloudTableDemo")] public static async Task Run(
- [TimerTrigger("0 */1 * * * *")] TimerInfo myTimer,
- [Table("AzureWebJobsHostLogscommon")] TableClient tableClient,
+ [TimerTrigger("0 */1 * * * *")] TimerInfo myTimer,
+ [Table("AzureWebJobsHostLogscommon")] CloudTable cloudTable,
ILogger log) { log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
- AsyncPageable<LogEntity> queryResults = tableClient.QueryAsync<LogEntity>(filter: $"PartitionKey eq 'FD2' and RowKey gt 't'");
- await foreach (LogEntity entity in queryResults)
+
+ TableQuery<LogEntity> rangeQuery = new TableQuery<LogEntity>().Where(
+ TableQuery.CombineFilters(
+ TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal,
+ "FD2"),
+ TableOperators.And,
+ TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThan,
+ "t")));
+
+ // Execute the query and loop through the results
+ foreach (LogEntity entity in
+ await cloudTable.ExecuteQuerySegmentedAsync(rangeQuery, null))
{
- log.LogInformation($"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
+ log.LogInformation(
+ $"{entity.PartitionKey}\t{entity.RowKey}\t{entity.Timestamp}\t{entity.OriginalName}");
} } } } ```
-For more information about how to use `TableClient`, see the [Azure.Data.Tables API Reference](/dotnet/api/azure.data.tables.tableclient).
+
+For more information about how to use CloudTable, see [Get started with Azure Table storage](../cosmos-db/tutorial-develop-table-dotnet.md).
+
+If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
# [Functions 1.x](#tab/functionsv1/in-process)
public class TableStorage
} ```
+# [Azure Tables extension](#tab/table-api/isolated-process)
+
+The following `MyTableData` class represents a row of data in the table:
+
+```csharp
+public class MyTableData : Azure.Data.Tables.ITableEntity
+{
+ public string Text { get; set; }
+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
+}
+```
+
+The following function, which is started by a Queue Storage trigger, reads a row key from the queue, which is used to get the row from the input table. The expression `{queueTrigger}` binds the row key to the message metadata, which is the message string.
+
+```csharp
+[Function("TableFunction")]
+[TableOutput("OutputTable", Connection = "AzureWebJobsStorage")]
+public static MyTableData Run(
+ [QueueTrigger("table-items")] string input,
+ [TableInput("MyTable", "<PartitionKey>", "{queueTrigger}")] MyTableData tableInput,
+ FunctionContext context)
+{
+ var logger = context.GetLogger("TableFunction");
+
+ logger.LogInformation($"PK={tableInput.PartitionKey}, RK={tableInput.RowKey}, Text={tableInput.Text}");
+
+ return new MyTableData()
+ {
+ PartitionKey = "queue",
+ RowKey = Guid.NewGuid().ToString(),
+ Text = $"Output record with rowkey {input} created at {DateTime.Now}"
+ };
+}
+```
+
+The following Queue-triggered function returns the first 5 entities as an `IEnumerable<T>`, with the partition key value set as the queue message.
+
+```csharp
+[Function("TestFunction")]
+public static void Run([QueueTrigger("myqueue", Connection = "AzureWebJobsStorage")] string partition,
+ [TableInput("inTable", "{queueTrigger}", Take = 5, Filter = "Text eq 'test'",
+ Connection = "AzureWebJobsStorage")] IEnumerable<MyTableData> tableInputs,
+ FunctionContext context)
+{
+ var logger = context.GetLogger("TestFunction");
+ logger.LogInformation(partition);
+ foreach (MyTableData tableInput in tableInputs)
+ {
+ logger.LogInformation($"PK={tableInput.PartitionKey}, RK={tableInput.RowKey}, Text={tableInput.Text}");
+ }
+}
+```
+The `Filter` and `Take` properties are used to limit the number of entities returned.
+ # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) The following `MyTableData` class represents a row of data in the table:
public static void Run([QueueTrigger("myqueue", Connection = "AzureWebJobsStorag
} } ```
-The `Filter` and `Take` properties are used to limit the number of entities returned.
-
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
+The `Filter` and `Take` properties are used to limit the number of entities returned.
# [Functions 1.x](#tab/functionsv1/isolated-process) Functions version 1.x doesn't support isolated worker process.
+# [Azure Tables extension](#tab/table-api/csharp-script)
+
+The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
+
+The *function.json* file specifies a `partitionKey` and a `rowKey`. The `rowKey` value `{queueTrigger}` indicates that the row key comes from the queue message string.
+
+```json
+{
+ "bindings": [
+ {
+ "queueName": "myqueue-items",
+ "connection": "MyStorageConnectionAppSetting",
+ "name": "myQueueItem",
+ "type": "queueTrigger",
+ "direction": "in"
+ },
+ {
+ "name": "personEntity",
+ "type": "table",
+ "tableName": "Person",
+ "partitionKey": "Test",
+ "rowKey": "{queueTrigger}",
+ "connection": "MyStorageConnectionAppSetting",
+ "direction": "in"
+ }
+ ],
+ "disabled": false
+}
+```
+
+The [configuration](#configuration) section explains these properties.
+
+Here's the C# script code:
+
+```csharp
+#r "Microsoft.WindowsAzure.Storage"
+using Microsoft.Extensions.Logging;
+using Azure.Data.Tables;
+
+public static void Run(string myQueueItem, Person personEntity, ILogger log)
+{
+ log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
+ log.LogInformation($"Name in Person entity: {personEntity.Name}");
+}
+
+public class Person : ITableEntity
+{
+ public string Name { get; set; }
+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
+}
+```
+ # [Combined Azure Storage extension](#tab/storage-extension/csharp-script) The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
For more information about how to use CloudTable, see [Get started with Azure Ta
If you try to bind to `CloudTable` and get an error message, make sure that you have a reference to [the correct Storage SDK version](./functions-bindings-storage-table.md#azure-storage-sdk-version-in-functions-1x).
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-
-Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
- # [Functions 1.x](#tab/functionsv1/csharp-script) The following example shows a table input binding in a *function.json* file and [C# script](./functions-reference-csharp.md) code that uses the binding. The function uses a queue trigger to read a single table row.
C# script is used primarily when creating C# functions in the Azure portal.
Choose a version to see usage details for the mode and version.
-# [Combined Azure Storage extension](#tab/storage-extension/in-process)
+# [Azure Tables extension](#tab/table-api/in-process)
To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
+To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
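For a minimal sketch of both patterns (assuming the `MyPoco` entity class shown earlier, plus a placeholder table named `MyTable` and partition `MyPartition`), the bound `TableClient` can read one entity by key and then run a filtered query:

```csharp
using System.Threading.Tasks;
using Azure.Data.Tables;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TableClientUsageSample
{
    [FunctionName("TableClientUsageSample")]
    public static async Task Run(
        [QueueTrigger("table-items")] string rowKey,
        [Table("MyTable")] TableClient tableClient,   // bound by the Azure Tables extension
        ILogger log)
    {
        // Read a single entity by partition key and row key.
        MyPoco single = await tableClient.GetEntityAsync<MyPoco>("MyPartition", rowKey);
        log.LogInformation($"Single entity text: {single.Text}");

        // Run an OData query that can return multiple entities.
        await foreach (MyPoco poco in tableClient.QueryAsync<MyPoco>(filter: "PartitionKey eq 'MyPartition'"))
        {
            log.LogInformation($"{poco.RowKey}: {poco.Text}");
        }
    }
}
```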
-# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
+# [Combined Azure Storage extension](#tab/storage-extension/in-process)
To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
+To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
+ # [Functions 1.x](#tab/functionsv1/in-process)
To return a specific entity by key, use a binding parameter that derives from [T
To execute queries that return multiple entities, bind to an [`IQueryable<T>`] of a type that inherits from [TableEntity].
+# [Azure Tables extension](#tab/table-api/isolated-process)
+
+To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
+
+To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
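If a bound `TableClient` parameter isn't available in your worker extension version, one hedged alternative is to construct the client directly from the storage connection string. This sketch assumes the `MyTableData` class shown earlier, a table named `MyTable`, and the `AzureWebJobsStorage` app setting:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public static class QueryTableFunction
{
    [Function("QueryTableFunction")]
    public static async Task Run(
        [QueueTrigger("table-items")] string partitionKey,
        FunctionContext context)
    {
        var logger = context.GetLogger("QueryTableFunction");

        // Build a TableClient from the storage connection string (assumed app setting).
        var tableClient = new TableClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "MyTable");

        // Query all entities in the partition named by the queue message.
        await foreach (MyTableData row in tableClient.QueryAsync<MyTableData>(
            filter: $"PartitionKey eq '{partitionKey}'"))
        {
            logger.LogInformation($"RK={row.RowKey}, Text={row.Text}");
        }
    }
}
```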
+ # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) To return a specific entity by key, use a plain-old CLR object (POCO). The specific `TableName`, `PartitionKey`, and `RowKey` are used to try and get a specific entity from the table. When returning multiple entities as an [`IEnumerable<T>`], you can instead use `Take` and `Filter` properties to restrict the result set.
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-
-The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
- # [Functions 1.x](#tab/functionsv1/isolated-process) Functions version 1.x doesn't support isolated worker process.
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
+# [Azure Tables extension](#tab/table-api/csharp-script)
To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
+To execute queries that return multiple entities, bind to a [TableClient] object. You can then use this object to create and execute queries against the bound table. Note that [TableClient] and related APIs belong to the [Azure.Data.Tables](/dotnet/api/azure.data.tables) namespace.
+
+# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
+To return a specific entity by key, use a binding parameter that derives from [TableEntity](/dotnet/api/azure.data.tables.tableentity).
-Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
+To execute queries that return multiple entities, bind to a [CloudTable] object. You can then use this object to create and execute queries against the bound table. Note that [CloudTable] and related APIs belong to the [Microsoft.Azure.Cosmos.Table](/dotnet/api/microsoft.azure.cosmos.table) namespace.
# [Functions 1.x](#tab/functionsv1/csharp-script)
azure-functions Functions Bindings Storage Table Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table-output.md
Title: Azure Tables output bindings for Azure Functions description: Understand how to use Azure Tables output bindings in Azure Functions. Previously updated : 03/04/2022 Last updated : 11/11/2022 ms.devlang: csharp, java, javascript, powershell, python zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Tables output bindings for Azure Functions
-Use an Azure Tables output binding to write entities to a table in an Azure Storage or Azure Cosmos DB account.
+Use an Azure Tables output binding to write entities to a table in [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) or [Azure Table Storage](../storage/tables/table-storage-overview.md).
For information on setup and configuration details, see the [overview](./functions-bindings-storage-table.md)
public class TableStorage
The following `MyTableData` class represents a row of data in the table:
+```csharp
+public class MyTableData : Azure.Data.Tables.ITableEntity
+{
+ public string Text { get; set; }
+
+ public string PartitionKey { get; set; }
+ public string RowKey { get; set; }
+ public DateTimeOffset? Timestamp { get; set; }
+ public ETag ETag { get; set; }
+}
+```
The following function, which is started by a Queue Storage trigger, writes a new `MyDataTable` entity to a table named **OutputTable**.
+```csharp
+[Function("TableFunction")]
+[TableOutput("OutputTable", Connection = "AzureWebJobsStorage")]
+public static MyTableData Run(
+ [QueueTrigger("table-items")] string input,
+ [TableInput("MyTable", "<PartitionKey>", "{queueTrigger}")] MyTableData tableInput,
+ FunctionContext context)
+{
+ var logger = context.GetLogger("TableFunction");
+
+ logger.LogInformation($"PK={tableInput.PartitionKey}, RK={tableInput.RowKey}, Text={tableInput.Text}");
+
+ return new MyTableData()
+ {
+ PartitionKey = "queue",
+ RowKey = Guid.NewGuid().ToString(),
+ Text = $"Output record with rowkey {input} created at {DateTime.Now}"
+ };
+}
+```
# [C# Script](#tab/csharp-script)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
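As a hedged sketch of that pattern for the in-process model (the table name `MyTable` and the `OutputRow` entity type are placeholders), an entity can be inserted through the bound `CloudTable` with a `TableOperation`:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos.Table;
using Microsoft.Azure.WebJobs;

public class OutputRow : TableEntity
{
    public string Text { get; set; }
}

public static class CloudTableOutputSample
{
    [FunctionName("CloudTableOutputSample")]
    public static async Task Run(
        [QueueTrigger("table-items")] string input,
        [Table("MyTable")] CloudTable cloudTable)
    {
        var row = new OutputRow
        {
            PartitionKey = "queue",
            RowKey = Guid.NewGuid().ToString(),
            Text = input
        };

        // Insert the new row by executing a table operation against the bound table.
        await cloudTable.ExecuteAsync(TableOperation.Insert(row));
    }
}
```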
-# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
+# [Azure Tables extension](#tab/table-api/in-process)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.a
Return a plain-old CLR object (POCO) with properties that can be mapped to the table entity.
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
+# [Azure Tables extension](#tab/table-api/isolated-process)
+
+The following types are supported for `out` parameters and return types:
+
+- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can provide these properties by implementing `ITableEntity`.
+- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can provide these properties by implementing `ITableEntity`.
+
+You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table.
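As a hedged sketch of writing with `TableClient` (reusing the `MyTableData` class from the input examples; constructing the client from the `AzureWebJobsStorage` app setting here is an assumption rather than the only option):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Data.Tables;
using Microsoft.Azure.Functions.Worker;

public static class WriteTableFunction
{
    [Function("WriteTableFunction")]
    public static async Task Run(
        [QueueTrigger("table-items")] string input,
        FunctionContext context)
    {
        var tableClient = new TableClient(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"), "OutputTable");

        // Create the table if it doesn't exist, then add one entity.
        await tableClient.CreateIfNotExistsAsync();
        await tableClient.AddEntityAsync(new MyTableData
        {
            PartitionKey = "queue",
            RowKey = Guid.NewGuid().ToString(),
            Text = $"Output record for {input}"
        });
    }
}
```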
-The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the combined Azure Storage extension.
# [Functions 1.x](#tab/functionsv1/isolated-process)
The following types are supported for `out` parameters and return types:
You can also bind to `CloudTable` [from the Storage SDK](/dotnet/api/microsoft.azure.cosmos.table.cloudtable) as a method parameter. You can then use that object to write to the table.
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
+# [Azure Tables extension](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the combined Azure Storage extension.
+The following types are supported for `out` parameters and return types:
+
+- A plain-old CLR object (POCO) that includes the `PartitionKey` and `RowKey` properties. You can provide these properties by implementing `ITableEntity`.
+- `ICollector<T>` or `IAsyncCollector<T>` where `T` includes the `PartitionKey` and `RowKey` properties. You can provide these properties by implementing `ITableEntity`.
+
+You can also bind to `TableClient` [from the Azure SDK](/dotnet/api/azure.data.tables.tableclient). You can then use that object to write to the table.
# [Functions 1.x](#tab/functionsv1/csharp-script)
azure-functions Functions Bindings Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md
Title: Azure Tables bindings for Azure Functions description: Understand how to use Azure Tables bindings in Azure Functions. Previously updated : 03/04/2022 Last updated : 11/11/2022 zone_pivot_groups: programming-languages-set-functions-lang-workers # Azure Tables bindings for Azure Functions
-Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using the Tables API for [Azure Storage](../storage/index.yml) and [Azure Cosmos DB](../cosmos-db/introduction.md).
-
-> [!NOTE]
-> The Table bindings have historically only supported Azure Storage. Support for Azure Cosmos DB is currently in preview. See [Azure Cosmos DB for Table extension (preview)](#table-api-extension).
+Azure Functions integrates with [Azure Tables](../cosmos-db/table/introduction.md) via [triggers and bindings](./functions-triggers-bindings.md). Integrating with Azure Tables allows you to build functions that read and write data using [Azure Cosmos DB for Table](../cosmos-db/table/introduction.md) and [Azure Table Storage](../storage/tables/table-storage-overview.md).
| Action | Type | |||
The process for installing the extension varies depending on the extension versi
<a name="storage-extension"></a> <a name="table-api-extension"></a>
-# [Azure Cosmos DB for Table extension](#tab/table-api/in-process)
+# [Azure Tables extension](#tab/table-api/in-process)
[!INCLUDE [functions-bindings-supports-identity-connections-note](../../includes/functions-bindings-supports-identity-connections-note.md)]
This extension is available by installing the [Microsoft.Azure.WebJobs.Extension
Using the .NET CLI: ```dotnetcli
-# Install the Tables API extension
+# Install the Azure Tables extension
dotnet add package Microsoft.Azure.WebJobs.Extensions.Tables --version 1.0.0
-# Update the combined Azure Storage extension (to a version which no longer includes Tables)
+# Update the combined Azure Storage extension (to a version which no longer includes Azure Tables)
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0 ```
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0
Working with the bindings requires that you reference the appropriate NuGet package. Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.WebJobs.Extensions.Storage NuGet package][storage-4.x], version 3.x or 4.x. > [!NOTE]
-> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Azure Cosmos DB for Table extension](#table-api-extension) when using version 5.x.
+> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Azure Tables extension](#table-api-extension) when using version 5.x.
# [Functions 1.x](#tab/functionsv1/in-process)
Functions 1.x apps automatically have a reference the [Microsoft.Azure.WebJobs](
[!INCLUDE [functions-storage-sdk-version](../../includes/functions-storage-sdk-version.md)]
+# [Azure Tables extension](#tab/table-api/isolated-process)
++
+This version allows you to bind to types from [`Azure.Data.Tables`](/dotnet/api/azure.data.tables). It also introduces the ability to use Azure Cosmos DB for Table.
+
+This extension is available by installing the [Microsoft.Azure.Functions.Worker.Extensions.Tables NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Tables) into a project using version 5.x or higher of the extensions for [blobs](./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5) and [queues](./functions-bindings-storage-queue.md?tabs=isolated-process%2Cextensionv5).
+
+Using the .NET CLI:
+
+```dotnetcli
+# Install the Azure Tables extension
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Tables --version 1.0.0
+
+# Update the combined Azure Storage extension (to a version which no longer includes Azure Tables)
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version 5.0.0
+```
++ # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage/4.0.4), version 4.x. > [!NOTE]
-> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x.
-
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/isolated-process)
-
-The Azure Cosmos DB for Table extension does not currently support isolated worker process. You will instead need to use the [Storage extension](#storage-extension).
+> Tables have been moved out of this package starting in its 5.x version. You need to instead use version 4.x of the extension NuGet package or additionally include the [Azure Tables extension](#table-api-extension) when using version 5.x.
# [Functions 1.x](#tab/functionsv1/isolated-process) Functions version 1.x doesn't support isolated worker process.
-# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
+# [Azure Tables extension (preview)](#tab/table-api/csharp-script)
-You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
-> [!NOTE]
-> Version 3.x of the extension bundle doesn't include the Table Storage bindings. You need to instead use version 2.x for now.
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
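As a hedged example, a typical `host.json` extension bundle entry for bundle v3 looks like the following sketch; the exact version range shown is an assumption (elsewhere in these docs, bundle 3.3.0 or later is referenced for Azure Tables support):

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.3.0, 4.0.0)"
  }
}
```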
-# [Azure Cosmos DB for Table extension (preview)](#tab/table-api/csharp-script)
-Version 3.x of the extension bundle doesn't currently include the Azure Cosmos DB for Table bindings. For now, you need to instead use version 2.x of the extension bundle, which uses the [Storage extension](#storage-extension).
+# [Combined Azure Storage extension](#tab/storage-extension/csharp-script)
+
+You can install this version of the extension in your function app by registering the [extension bundle], version 2.x.
# [Functions 1.x](#tab/functionsv1/csharp-script)
The Azure Tables bindings are part of an [extension bundle], which is specified
# [Bundle v3.x](#tab/extensionv3)
-Version 3.x of the extension bundle doesn't currently include the Azure Tables bindings. You need to instead use version 2.x of the extension bundle.
+
+You can add this version of the extension from the extension bundle v3 by adding or replacing the following code in your `host.json` file:
+ # [Bundle v2.x](#tab/extensionv2)
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Title: Guidance for developing Azure Functions
description: Learn the Azure Functions concepts and techniques that you need to develop functions in Azure, across all programming languages and bindings. ms.assetid: d8efe41a-bef8-4167-ba97-f3e016fcd39e Previously updated : 9/02/2021 Last updated : 11/11/2022 ms.devlang: csharp
Identity-based connections are supported by the following components:
| Connection source | Plans supported | Learn more | ||--|--|
-| Azure Blob triggers and bindings | All | [Extension version 5.0.0 or later][blobv5]<br/>[Extension bundle 3.3.0 or later][blobv5] |
-| Azure Queue triggers and bindings | All | [Extension version 5.0.0 or later][queuev5]<br/>[Extension bundle 3.3.0 or later][queuev5] |
-| Azure Event Hubs triggers and bindings | All | [Extension version 5.0.0 or later][eventhubv5]<br/>[Extension bundle 3.3.0 or later][eventhubv5] |
-| Azure Service Bus triggers and bindings | All | [Extension version 5.0.0 or later][servicebusv5]<br/>[Extension bundle 3.3.0 or later][servicebusv5] |
-| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Extension version 4.0.0-preview1 or later][cosmosv4]<br/> [Preview extension bundle 4.0.0 or later][cosmosv4]|
-| Azure Tables (when using Azure Storage) - Preview | All | [Azure Cosmos DB for Table extension](./functions-bindings-storage-table.md#table-api-extension)<br/>[Extension bundle 3.3.0 or later][tablesv1] |
-| Durable Functions storage provider (Azure Storage) - Preview | All | [Extension version 2.7.0 or later](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.7.0) |
+| Azure Blobs triggers and bindings | All | [Azure Blobs extension version 5.0.0 or later][blobv5],<br/>[Extension bundle 3.3.0 or later][blobv5] |
+| Azure Queues triggers and bindings | All | [Azure Queues extension version 5.0.0 or later][queuev5],<br/>[Extension bundle 3.3.0 or later][queuev5] |
+| Azure Tables (when using Azure Storage) | All | [Azure Tables extension version 1.0.0 or later](./functions-bindings-storage-table.md#table-api-extension),<br/>[Extension bundle 3.3.0 or later][tablesv1] |
+| Azure Event Hubs triggers and bindings | All | [Azure Event Hubs extension version 5.0.0 or later][eventhubv5],<br/>[Extension bundle 3.3.0 or later][eventhubv5] |
+| Azure Service Bus triggers and bindings | All | [Azure Service Bus extension version 5.0.0 or later][servicebusv5],<br/>[Extension bundle 3.3.0 or later][servicebusv5] |
+| Azure Cosmos DB triggers and bindings - Preview | Elastic Premium | [Azure Cosmos DB extension version 4.0.0-preview1 or later][cosmosv4],<br/> [Preview extension bundle 4.0.0 or later][cosmosv4]|
+| Durable Functions storage provider (Azure Storage) - Preview | All | [Durable Functions extension version 2.7.0 or later][durable-identity],<br/>[Extension bundle 3.3.0 or later][durable-identity] |
| Host-required storage ("AzureWebJobsStorage") - Preview | All | [Connecting to host storage with an identity](#connecting-to-host-storage-with-an-identity-preview) | [blobv5]: ./functions-bindings-storage-blob.md#install-extension
Identity-based connections are supported by the following components:
[servicebusv5]: ./functions-bindings-service-bus.md [cosmosv4]: ./functions-bindings-cosmosdb-v2.md?tabs=extensionv4 [tablesv1]: ./functions-bindings-storage-table.md#table-api-extension
+[durable-identity]: ./durable/durable-functions-storage-providers.md#identity-based-connections-preview
[!INCLUDE [functions-identity-based-connections-configuration](../../includes/functions-identity-based-connections-configuration.md)]
Choose a tab below to learn about permissions for each component:
[!INCLUDE [functions-queue-permissions](../../includes/functions-queue-permissions.md)]
+# [Azure Tables extension](#tab/table)
++ # [Event Hubs extension](#tab/eventhubs) [!INCLUDE [functions-event-hubs-permissions](../../includes/functions-event-hubs-permissions.md)]
Choose a tab below to learn about permissions for each component:
[!INCLUDE [functions-cosmos-permissions](../../includes/functions-cosmos-permissions.md)]
-# [Azure Tables API extension (preview)](#tab/table)
- # [Durable Functions storage provider (preview)](#tab/durable)
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
[Subscription key]: quick-demo-map-app.md#get-the-primary-key-for-your-account [authentication]: azure-maps-authentication.md
+[Host daemon]: /azure/azure-maps/how-to-secure-daemon-app#host-a-daemon-on-non-azure-resources
[.NET standard]: /dotnet/standard/net-standard?tabs=net-standard-2-0 [Rest API]: /rest/api/maps/ [.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
It is worth mentioning that the collection endpoint pre-aggregates events before
| Java | Not Supported | Not Supported | [Supported](java-in-process-agent.md#metrics) | | Node.js | Not Supported | Not Supported | Not Supported |
-1. ASP.NET codeless attach on App Service only emits metrics in "full" monitoring mode. ASP.NET codeless attach on App Service, VM/VMSS, and On-Premises emits standard metrics without dimensions. SDK is required for all dimensions.
+1. ASP.NET codeless attach on VM/VMSS and on-premises emits standard metrics without dimensions. The same is true for App Service, but the collection level must be set to recommended. The SDK is required for all dimensions.
2. ASP.NET Core codeless attach on App Service emits standard metrics without dimensions. SDK is required for all dimensions. ## Using pre-aggregation with Application Insights custom metrics
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis.md
Currently the following dependencies are supported in **Web App Diagnose and sol
- **App Services file changes**: File changes take up to 30 minutes to display. - **App Services configuration changes**: Due to the snapshot approach to configuration changes, timestamps of configuration changes could take up to 6 hours to display from when the change actually happened. - **Web app deployment and configuration changes**: Since these changes are collected by a site extension and stored on disk space owned by your application, data collection and storage is subject to your application's behavior. Check to see if a misbehaving application is affecting the results.
+- **Snapshot retention for all changes**: Azure Resource Graph (ARG) tracks the Change Analysis data for resources. ARG keeps a snapshot history of tracked resources for only 14 days.
+ ## Next steps
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
A single Azure Monitor workspace can collect data from multiple sources, but the
There are several reasons that you may consider creating additional workspaces including the following. -- Azure tenants. If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant.-- Azure regions. Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations.-- Data ownership. You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies.-- Workspace limits. See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for current capacity limits related to Azure Monitor workspaces.-- Multiple environments. You may have Azure Monitor workspaces supporting different environments such as test, pre-production, and production.
+| Criteria | Description |
+|:|:|
+| Azure tenants | If you have multiple Azure tenants, you'll usually create a workspace in each because several data sources can only send monitoring data to a workspace in the same Azure tenant. |
+| Azure regions | Each workspace resides in a particular Azure region, and you may have regulatory or compliance requirements to store data in particular locations. |
+| Data ownership | You may choose to create separate workspaces to define data ownership, for example by subsidiaries or affiliated companies. |
+| Multiple environments | You may have Azure Monitor workspaces supporting different environments such as test, pre-production, and production. |
+| Logical boundaries | You may choose to separate your data based on logical boundaries such as application team or company division. |
+| Workspace limits | See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for current capacity limits related to Azure Monitor workspaces. If your capacity reaches 80%, you should consider creating multiple workspaces according to logical boundaries that make sense for your organization. |
+ > [!NOTE] > You cannot currently query across multiple Azure Monitor workspaces.
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/authentication-authorization.md
You can also use this flow to request a token to `https://api.loganalytics.io`.
&client_secret=YOUR_CLIENT_SECRET ```
+##### Microsoft identity platform v2.0
+
+```
+ POST /YOUR_AAD_TENANT/oauth2/v2.0/token HTTP/1.1
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ grant_type=client_credentials
+ &client_id=YOUR_CLIENT_ID
+ &scope=https://management.azure.com/.default
+ &client_secret=YOUR_CLIENT_SECRET
+```
+ A successful request receives an access token: ```
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
When the gallery opens, select a saved workbook or a template. You can also sear
To save a workbook, save the report with a specific title, subscription, resource group, and location.
-The workbook is auto-filled with the same settings as the LA workspace, with the same subscription and resource group. You can change the report settings if you want. Workbooks are saved to 'My Reports' by default, and are only accessible by the individual user, but they can be saved directly to shared reports or shared later on. Workbooks are shared resources and they require write access to the parent resource group to be saved.
+By default, the workbook is auto-filled with the same settings as the Log Analytics workspace, with the same subscription and resource group. Workbooks are saved to 'My Reports' by default and are only accessible by the individual user, but they can be saved directly to shared reports or shared later on. Workbooks are shared resources, and they require write access to the parent resource group to be saved.
-## Share a workbook template
+## Share a workbook
-After you start creating your own workbook template, you might want to share it with the wider community. To learn more, and to explore other templates that aren't part of the default Azure Monitor gallery, see the [GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/README.md). To browse existing workbooks, see the [Workbook library](https://github.com/microsoft/Application-Insights-Workbooks/tree/master/Workbooks) on GitHub.
+When you share a workbook or template, the person you share it with must have permissions to access the workbook. They must have an Azure account and **Azure Sentinel Workbook Reader** permissions.
+To share a workbook or workbook template:
+
+1. In the Azure portal, select the workbook or template you want to share.
+1. Select the **Share** icon from the top toolbar.
+1. The **Share workbook** or **Share template** window opens with a URL to use for sharing the workbook.
+1. Copy the link to share the workbook, or select **Share link via email** to open your default mail app.
+ ## Pin a visualization
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
If the Dependency agent fails to start, check the logs for detailed error inform
Since the Dependency agent works at the kernel level, support is also dependent on the kernel version. As of Dependency agent version 9.10.* the agent supports * kernels. The following table lists the major and minor Linux OS release and supported kernel versions for the Dependency agent. >[!NOTE]
-> Dependency agent is not supported for Azure Virtual Machines with Ampere Altra ARM-based processors.
+> With Dependency agent 9.10.15 and above, installation is not blocked for unsupported kernel versions, but the agent will run in degraded mode. In this mode, connection and port data stored in VMConnection and VMBoundport tables is not collected. The VMProcess table may have some data, but it will be minimal.
| Distribution | OS version | Kernel version | |:|:|:|
Since the Dependency agent works at the kernel level, support is also dependent
| | 15 | 4.12.14-150.\*-default | | Debian | 9 | 4.9 |
+>[!NOTE]
+> Dependency agent is not supported for Azure Virtual Machines with Ampere Altra ARM-based processors.
+ ## Next steps If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
na Previously updated : 10/24/2022 Last updated : 11/14/2022 # Manage availability zone volume placement for Azure NetApp Files
Azure NetApp Files lets you deploy new volumes in the logical availability zone
* VMs and Azure NetApp Files volumes are to be deployed separately, within the same logical availability zone to create zone alignment between VMs and Azure NetApp Files. The availability zone volume placement feature does not create zonal VMs upon volume creation, or vice versa.
+> [!IMPORTANT]
+> Once the volume is created using the availability zone volume placement feature, the volume has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, if there is an issue with backup and restore on the volume, it will be supported because the problem is not with the availability zone volume placement feature itself.
+ ## Register the feature The feature of availability zone volume placement is currently in preview. If you are using this feature for the first time, you need to register the feature first.
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
[ ![Screenshot that shows the Availability Zone volume overview.](../media/azure-netapp-files/availability-zone-volume-overview.png) ](../media/azure-netapp-files/availability-zone-volume-overview.png#lightbox)
-> [!IMPORTANT]
-> Once the volume is created using the availability zone volume placement feature, the volume has the same level of support as other volumes deployed in the subscription without this feature enabled. For example, if there is an issue with backup and restore on the volume, it will be supported because the problem is not with the availability zone volume placement feature itself.
- ## Next steps * [Use availability zones for high availability](use-availability-zones.md)
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep description: Use loops to iterate over collections in Bicep Previously updated : 11/09/2022 Last updated : 11/14/2022 # Iterative loops in Bicep
Loops can be declared by:
Using loops in Bicep has these limitations:
+- Bicep loops only work with values that can be determined at the start of deployment.
- Loop iterations can't be a negative number or exceed 800 iterations. - Can't loop a resource with nested child resources. Change the child resources to top-level resources. See [Iteration for a child resource](#iteration-for-a-child-resource).-- Can't loop on multiple levels of properties.
+- To loop on multiple levels of properties, use the [map lambda function](./bicep-functions-lambda.md#map).
## Integer index
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
description: Create parameter file for passing in values during deployment of a
Previously updated : 07/18/2022 Last updated : 11/14/2022 # Create Bicep parameter file
A parameter file uses the following format:
} ```
-Notice that the parameter file stores parameter values as plain text. This approach works for values that aren't sensitive, such as a resource SKU. Plain text doesn't work for sensitive values, such as passwords. If you need to pass a parameter that contains a sensitive value, store the value in a key vault. Instead of adding the sensitive value to your parameter file, retrieve it with the [getSecret function](bicep-functions-resource.md#getsecret). For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
+The parameter file stores parameter values as plain text. For security reasons, this approach isn't recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Instead of adding the sensitive value to your parameter file, use the [getSecret function](bicep-functions-resource.md#getsecret) to retrieve it. For more information, see [Use Azure Key Vault to pass secure parameter value during Bicep deployment](key-vault-parameter.md).
## Define parameter values
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/parameter-files.md
Title: Create parameter file description: Create parameter file for passing in values during deployment of an Azure Resource Manager template Previously updated : 05/11/2021 Last updated : 11/14/2022
A parameter file uses the following format:
} ```
-Notice that the parameter file stores parameter values as plain text. This approach works for values that aren't sensitive, such as a resource SKU. Plain text doesn't work for sensitive values, such as passwords. If you need to pass a parameter that contains a sensitive value, store the value in a key vault. Then reference the key vault in your parameter file. The sensitive value is securely retrieved during deployment.
+The parameter file stores parameter values as plain text. For security reasons, this approach isn't recommended for sensitive values such as passwords. If you must pass a parameter with a sensitive value, keep the value in a key vault. Then, in your parameter file, include a reference to the key vault. The sensitive value is securely retrieved during deployment. For more information, see [Use Azure Key Vault to pass secure parameter value during deployment](./key-vault-parameter.md).
The following parameter file includes a plain text value and a sensitive value that's stored in a key vault.
azure-signalr Signalr Quickstart Azure Functions Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-functions-javascript.md
Make sure you have Azure Functions Core Tools installed.
body: data } context.done()
- } catch (error) {
+ } catch (err) {
context.log.error(err); context.done(err); }
azure-web-pubsub Reference Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-odata-filter.md
+
+ Title: OData filter syntax in Azure Web PubSub service
+description: OData language reference and full syntax used for creating filter expressions in Azure Web PubSub service queries.
++++ Last updated : 11/11/2022++
+# OData filter syntax in Azure Web PubSub service
+
+In Azure Web PubSub service, the **filter** parameter specifies inclusion or exclusion criteria for the connections to send messages to. This article describes the OData syntax of **filter** and provides examples.
+
+The complete syntax is described in the [formal grammar](#formal-grammar).
+
+There is also a browsable [syntax diagram](https://aka.ms/awps/filter-syntax-diagram) that allows you to interactively explore the grammar and the relationships between its rules.
+
+## Syntax
+
+A filter in the OData language is a Boolean expression, which in turn can be one of several types of expression, as shown by the following EBNF ([Extended Backus–Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)):
+
+```
+/* Identifiers */
+string_identifier ::= 'connectionId' | 'userId'
+collection_identifier ::= 'groups'
+
+/* Rules for $filter */
+
+boolean_expression ::= logical_expression
+ | comparison_expression
+ | in_expression
+ | boolean_literal
+ | boolean_function_call
+ | '(' boolean_expression ')'
+```
+
+An interactive syntax diagram is also available:
+
+> [!div class="nextstepaction"]
+> [OData syntax diagram for Azure Web PubSub service](https://aka.ms/awps/filter-syntax-diagram)
+
+> [!NOTE]
+> See [formal grammar section](#formal-grammar) for the complete EBNF.
+
+### Identifiers
+
+The filter syntax is used to select the connections that match the filter expression, so that messages are sent only to those connections.
+
+Azure Web PubSub supports the following identifiers:
+
+| Identifier | Description | Note | Examples
+| ---------- | ----------- | ---- | --------
+| `userId` | The userId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `userId eq 'user1'`
+| `connectionId` | The connectionId of the connection. | Case insensitive. It can be used in [string operations](#supported-operations). | `connectionId ne '123'`
+| `groups` | The collection of groups the connection is currently in. | Case insensitive. It can be used in [collection operations](#supported-operations). | `'group1' in groups`
+
+Identifiers refer to property values of a connection. Azure Web PubSub supports three identifiers that match the property names of the connection model: `userId` and `connectionId` can be used in [string operations](#supported-operations), and `groups` can be used in [collection operations](#supported-operations). For example, to select connections with userId `user1`, specify the filter `userId eq 'user1'`. Read through the following sections for more samples using filters.
+
+### Boolean expressions
+
+A filter expression is a Boolean expression. When sending messages, Azure Web PubSub delivers them only to the connections whose filter expression evaluates to `true`.
+
+The types of Boolean expressions include:
+
+- Logical expressions that combine other Boolean expressions using the operators `and`, `or`, and `not`.
+- Comparison expressions, which compare fields or range variables to constant values using the operators `eq`, `ne`, `gt`, `lt`, `ge`, and `le`.
+- The Boolean literals `true` and `false`. These constants can be useful sometimes when programmatically generating filters, but otherwise don't tend to be used in practice.
+- Boolean expressions in parentheses. Using parentheses can help to explicitly determine the order of operations in a filter. For more information on the default precedence of the OData operators, see [operator precedence section](#operator-precedence).
+
+### Supported operations
+| Operator | Description | Example
+| -------- | ----------- | -------
+| **Logical Operators**
+| `and` | Logical and | `length(userId) le 10 and length(userId) gt 3`
+| `or` | Logical or | `length(userId) gt 10 or length(userId) le 3`
+| `not` | Logical negation | `not endswith(userId, 'milk')`
+| **Comparison Operators**
+| `eq` | Equal | `userId eq 'user1'`, </br> `userId eq null`
+| `ne` | Not equal | `userId ne 'user1'`, </br> `userId ne null`
+| `gt` | Greater than | `length(userId) gt 10`
+| `ge` | Greater than or equal | `length(userId) ge 10`
+| `lt` | Less than | `length(userId) lt 3`
+| `le` | Less than or equal | `length(userId) le 10`
+| **In Operator**
+| `in` | The right operand MUST be either a comma-separated list of primitive values, enclosed in parentheses, or a single expression that resolves to a collection. | `'group1' in groups`, </br> `userId in ('user1','user2')`
+| **Grouping Operator**
+| `()` | Controls the evaluation order of an expression | `userId eq 'user1' or (not (startswith(userId,'user2')))`
+| **String Functions**
+| `string tolower(string p)` | Get the lower case for the string value | `tolower(userId) eq 'user1'` can match connections for user `USER1`
+| `string toupper(string p)` | Get the upper case for the string value | `toupper(userId) eq 'USER1'` can match connections for user `user1`
+| `string trim(string p)` | Trim the string value | `trim(userId) eq 'user1'` can match connections for user ` user1 `
+| `string substring(string p, int startIndex)`,</br>`string substring(string p, int startIndex, int length)` | Substring of the string | `substring(userId,5,2) eq 'ab'` can match connections for user `user-ab-de`
+| `bool endswith(string p0, string p1)` | Check if `p0` ends with `p1` | `endswith(userId,'de')` can match connections for user `user-ab-de`
+| `bool startswith(string p0, string p1)` | Check if `p0` starts with `p1` | `startswith(userId,'user')` can match connections for user `user-ab-de`
+| `int indexof(string p0, string p1)` | Get the index of `p1` in `p0`. Returns `-1` if `p0` does not contain `p1`. | `indexof(userId,'-ab-') ge 0` can match connections for user `user-ab-de`
+| `int length(string p)` | Get the length of the input string | `length(userId) gt 1` can match connections for user `user-ab-de`
+| **Collection Functions**
+| `int length(collection p)` | Get the length of the collection | `length(groups) gt 1` can match connections in 2 groups
+
+### Operator precedence
+
+If you write a filter expression with no parentheses around its sub-expressions, Azure Web PubSub service will evaluate it according to a set of operator precedence rules. These rules are based on which operators are used to combine sub-expressions. The following table lists groups of operators in order from highest to lowest precedence:
+
+| Group | Operator(s) |
+| ----- | ----------- |
+| Logical operators | `not` |
+| Comparison operators | `eq`, `ne`, `gt`, `lt`, `ge`, `le` |
+| Logical operators | `and` |
+| Logical operators | `or` |
+
+An operator that is higher in the above table will "bind more tightly" to its operands than other operators. For example, `and` is of higher precedence than `or`, and comparison operators are of higher precedence than either of them, so the following two expressions are equivalent:
+
+```odata-filter-expr
+length(userId) gt 0 and length(userId) lt 3 or length(userId) gt 7 and length(userId) lt 10
+((length(userId) gt 0) and (length(userId) lt 3)) or ((length(userId) gt 7) and (length(userId) lt 10))
+```
+
+The `not` operator has the highest precedence of all -- even higher than the comparison operators. That's why if you try to write a filter like this:
+
+```odata-filter-expr
+not length(userId) gt 5
+```
+
+You'll get this error message:
+
+```text
+Invalid syntax for 'not length(userId)': Type 'null', expect 'bool'. (Parameter 'filter')
+```
+
+This error happens because the `not` operator binds to just the `length(userId)` expression, which isn't a Boolean value (it evaluates to `null` when `userId` is `null`), rather than to the entire comparison expression. The fix is to put the operand of `not` in parentheses:
+
+```odata-filter-expr
+not (length(userId) gt 5)
+```
+
+### Filter size limitations
+
+There are limits to the size and complexity of filter expressions that you can send to Azure Web PubSub service. The limits are based roughly on the number of clauses in your filter expression. A good guideline is that if you have over 100 clauses, you are at risk of exceeding the limit. We recommend designing your application in such a way that it doesn't generate filters of unbounded size.
+
+## Examples
+
+1. Send to multiple groups
+
+ ```odata-filter-expr
+ filter='group1' in groups or 'group2' in groups or 'group3' in groups
+ ```
+2. Send to multiple users in some specific group
+ ```odata-filter-expr
+ filter=userId in ('user1', 'user2', 'user3') and 'group1' in groups
+ ```
+3. Send to some user but not some specific connectionId
+ ```odata-filter-expr
+ filter=userId eq 'user1' and connectionId ne '123'
+ ```
+4. Send to some user not in some specific group
+ ```odata-filter-expr
+ filter=userId eq 'user1' and (not ('group1' in groups))
+ ```
+5. Escape `'` when userId contains `'`
+ ```odata-filter-expr
+ filter=userId eq 'user''1'
+ ```
+
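As a hedged C# sketch of how a filter reaches the service, the following example URL-encodes a filter expression and passes it as the `filter` query parameter on a send-to-all REST call. The endpoint, hub name, and API version are placeholders or assumptions, and the required `Authorization` header is omitted; in practice a server SDK that exposes the same parameter is the usual choice.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FilterSendSketch
{
    static async Task Main()
    {
        var endpoint = "https://<your-resource>.webpubsub.azure.com"; // placeholder
        var hub = "chat";                                             // placeholder
        var filter = "userId eq 'user1' and 'group1' in groups";

        // The filter expression travels as a query parameter on the Send* REST APIs.
        var url = $"{endpoint}/api/hubs/{hub}/:send?api-version=2022-11-01" +
                  $"&filter={Uri.EscapeDataString(filter)}";

        using var http = new HttpClient();
        // An Authorization header (access key signature or Azure AD token) is required in practice.
        var response = await http.PostAsync(
            url, new StringContent("Hello, filtered connections!", Encoding.UTF8, "text/plain"));

        Console.WriteLine(response.StatusCode);
    }
}
```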
+## Formal grammar
+
+We can describe the subset of the OData language supported by Azure Web PubSub service using an EBNF ([Extended Backus–Naur Form](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) grammar. Rules are listed "top-down", starting with the most complex expressions and breaking them down into more primitive expressions. At the top is the grammar rule for `$filter`, which corresponds to the `filter` parameter of the Azure Web PubSub service `Send*` REST APIs:
++
+```
+/* Top-level rule */
+
+filter_expression ::= boolean_expression
+
+/* Identifiers */
+string_identifier ::= 'connectionId' | 'userId'
+collection_identifier ::= 'groups'
+
+/* Rules for $filter */
+
+boolean_expression ::= logical_expression
+ | comparison_expression
+ | in_expression
+ | boolean_literal
+ | boolean_function_call
+ | '(' boolean_expression ')'
+
+logical_expression ::= boolean_expression ('and' | 'or') boolean_expression
+ | 'not' boolean_expression
+
+comparison_expression ::= primary_expression comparison_operator primary_expression
+
+in_expression ::= primary_expression 'in' ( '(' primary_expression (',' primary_expression)* ')' ) | collection_expression
+
+collection_expression ::= collection_variable
+ | '(' collection_expression ')'
+
+primary_expression ::= primary_variable
+ | function_call
+ | constant
+ | '(' primary_expression ')'
+
+string_expression ::= string_literal
+ | 'null'
+ | string_identifier
+ | string_function_call
+ | '(' string_expression ')'
+
+primary_variable ::= string_identifier
+collection_variable ::= collection_identifier
+
+comparison_operator ::= 'gt' | 'lt' | 'ge' | 'le' | 'eq' | 'ne'
+
+/* Rules for constants and literals */
+constant ::= string_literal
+ | integer_literal
+ | boolean_literal
+ | 'null'
+
+boolean_literal ::= 'true' | 'false'
+
+string_literal ::= "'"([^'] | "''")*"'"
+
+digit ::= [0-9]
+sign ::= '+' | '-'
+integer_literal ::= sign? digit+
+
+/* Rules for functions */
+
+function_call ::= indexof_function_call
+ | length_function_call
+ | string_function_call
+ | boolean_function_call
+
+boolean_function_call ::= endsWith_function_call
+ | startsWith_function_call
+ | contains_function_call
+string_function_call ::= tolower_function_call
+ | toupper_function_call
+ | trim_function_call
+ | substring_function_call
+ | concat_function_call
+
+/* Rules for string functions */
+indexof_function_call ::= "indexof" '(' string_expression ',' string_expression ')'
+concat_function_call ::= "concat" '(' string_expression ',' string_expression ')'
+contains_function_call ::= "contains" '(' string_expression ',' string_expression ')'
+endsWith_function_call ::= "endswith" '(' string_expression ',' string_expression ')'
+startsWith_function_call ::= "startswith" '(' string_expression ',' string_expression ')'
+substring_function_call ::= "substring" '(' string_expression ',' integer_literal (',' integer_literal)? ')'
+tolower_function_call ::= "tolower" '(' string_expression ')'
+toupper_function_call ::= "toupper" '(' string_expression ')'
+trim_function_call ::= "trim" '(' string_expression ')'
+
+/* Rules for string and collection functions */
+length_function_call ::= "length" '(' (string_expression | collection_expression) ')'
+```
+
+## Next steps
+
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 11/04/2022 Last updated : 11/14/2022
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). </li></ul>
-
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where Trusted Launch VM is available. <br><br> - Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. <br><br> - Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm).
+[Confidential VM](../confidential-computing/confidential-vm-overview.md) | The backup support is in Limited Preview. <br><br> Backup is supported only for those Confidential VMs with no confidential disk encryption and for Confidential VMs with confidential OS disk encryption using Platform Managed Key (PMK). <br><br> Backup is currently not supported for Confidential VMs with confidential OS disk encryption using Customer Managed Key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where Confidential VM is available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported using [Enhanced Policy](backup-azure-vms-enhanced-policy.md) only. You can configure backup through [Create VM blade](backup-azure-arm-vms-prepare.md), [VM Manage blade](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Recovery Services vault](backup-azure-arm-vms-prepare.md). <br><br> - [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and File Recovery (Item level Restore) for Confidential VM are currently not supported.
## VM storage support
backup Sap Hana Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-backup-support-matrix.md
Title: SAP HANA Backup support matrix description: In this article, learn about the supported scenarios and limitations when you use Azure Backup to back up SAP HANA databases on Azure VMs. Previously updated : 08/03/2022 Last updated : 11/14/2022
Azure Backup supports the backup of SAP HANA databases to Azure. This article su
| -- | | | | **Topology** | SAP HANA running in Azure Linux VMs only | HANA Large Instances (HLI) | | **Regions** | **Americas** ΓÇô Central US, East US 2, East US, North Central US, South Central US, West US 2, West US 3, West Central US, West US, Canada Central, Canada East, Brazil South <br> **Asia Pacific** ΓÇô Australia Central, Australia Central 2, Australia East, Australia Southeast, Japan East, Japan West, Korea Central, Korea South, East Asia, Southeast Asia, Central India, South India, West India, China East, China East 2, China East 3, China North, China North 2, China North 3 <br> **Europe** ΓÇô West Europe, North Europe, France Central, UK South, UK West, Germany North, Germany West Central, Switzerland North, Switzerland West, Central Switzerland North, Norway East, Norway West <br> **Africa / ME** - South Africa North, South Africa West, UAE North, UAE Central <BR> **Azure Government regions** | France South, Germany Central, Germany Northeast, US Gov IOWA |
-| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2 and SP3 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, and 8.6 | |
+| **OS versions** | SLES 12 with SP2, SP3, SP4 and SP5; SLES 15 with SP0, SP1, SP2, SP3, and SP4 <br><br> RHEL 7.4, 7.6, 7.7, 7.9, 8.1, 8.2, 8.4, and 8.6 | |
| **HANA versions** | SDC on HANA 1.x, MDC on HANA 2.x SPS04, SPS05 Rev <= 56, SPS 06 (validated for encryption enabled scenarios as well) | | | **Encryption** | SSLEnforce, HANA data encryption | | | **HANA deployments** | SAP HANA on a single Azure VM - Scale up only. <br><br> For high availability deployments, both the nodes on the two different machines are treated as individual nodes with separate data chains. | Scale-out <br><br> In high availability deployments, backup doesnΓÇÖt failover to the secondary node automatically. Configuring backup should be done separately for each node. |
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
The following table presents component options for each available SKU.
|vCPUs|72|72| |RAM|576 GB|768 GB| |Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2x1.6TB NVMe)|19.95 TB (2x375G Optane, 6x3.2TB NVMe)|
-|Network|100 Gbps (four links * 25 Gbps)|100 Gbps (four links * 25 Gbps)|
+|Network (available bandwidth between nodes)|25 Gbps|25 Gbps|
## Next steps
batch Batch Aad Auth Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md
# Authenticate Batch Management solutions with Active Directory
-Applications that call the Azure Batch Management service authenticate with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD). Azure AD is Microsoft's multi-tenant cloud based directory and identity management service. Azure itself uses Azure AD for the authentication of its customers, service administrators, and organizational users.
+Applications that call the Azure Batch Management service authenticate with Azure Active Directory (Azure AD) by using the [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (MSAL). Azure AD is Microsoft's multi-tenant, cloud-based directory and identity management service. Azure itself uses Azure AD for the authentication of its customers, service administrators, and organizational users.
The Batch Management .NET library exposes types for working with Batch accounts, account keys, applications, and application packages. The Batch Management .NET library is an Azure resource provider client, and is used together with [Azure Resource Manager](../azure-resource-manager/management/overview.md) to manage these resources programmatically. Azure AD is required to authenticate requests made through any Azure resource provider client, including the Batch Management .NET library, and through Azure Resource Manager.
To learn more about using the Batch Management .NET library and the AccountManag
## Register your application with Azure AD
-The [Azure Active Directory Authentication Library](../active-directory/azuread-dev/active-directory-authentication-libraries.md) (ADAL) provides a programmatic interface to Azure AD for use within your applications. To call ADAL from your application, you must register your application in an Azure AD tenant. When you register your application, you supply Azure AD with information about your application, including a name for it within the Azure AD tenant. Azure AD then provides an application ID that you use to associate your application with Azure AD at runtime. To learn more about the application ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
+The [Microsoft Authentication Library](../active-directory/develop/msal-authentication-flows.md) (MSAL) provides a programmatic interface to Azure AD for use within your applications. To call MSAL from your application, you must register your application in an Azure AD tenant. When you register your application, you supply Azure AD with information about your application, including a name for it within the Azure AD tenant. Azure AD then provides an application ID that you use to associate your application with Azure AD at runtime. To learn more about the application ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
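
For context, the following sketch shows how a registered application ID is typically used with MSAL.NET at runtime to obtain a delegated token for Azure Resource Manager. The client ID, tenant ID, redirect URI, and scope shown are placeholders, not values from the sample, so substitute your own registration values:

```csharp
using System;
using Microsoft.Identity.Client;

// Placeholders from your own app registration.
const string clientId = "<application-client-id>";
const string tenantId = "<tenant-id>";
const string redirectUri = "http://myaccountmanagementsample";

IPublicClientApplication app = PublicClientApplicationBuilder
    .Create(clientId)
    .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
    .WithRedirectUri(redirectUri)
    .Build();

// Delegated Azure Resource Manager scope; confirm the right scope for your environment.
AuthenticationResult result = await app
    .AcquireTokenInteractive(new[] { "https://management.azure.com/user_impersonation" })
    .ExecuteAsync();

Console.WriteLine($"Token acquired; expires {result.ExpiresOn}.");
```
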
To register the AccountManagement sample application, follow the steps in the [Adding an Application](../active-directory/develop/quickstart-register-app.md) section in [Integrating applications with Azure Active Directory](../active-directory/develop/quickstart-register-app.md). Specify **Native Client Application** for the type of application. The industry standard OAuth 2.0 URI for the **Redirect URI** is `urn:ietf:wg:oauth:2.0:oob`. However, you can specify any valid URI (such as `http://myaccountmanagementsample`) for the **Redirect URI**, as it does not need to be a real endpoint.
Follow these steps in the Azure portal:
6. In step 2, select the check box next to **Access Azure classic deployment model as organization users**, and click the **Select** button. 7. Click the **Done** button.
-The **Required Permissions** blade now shows that permissions to your application are granted to both the ADAL and Resource Manager APIs. Permissions are granted to ADAL by default when you first register your app with Azure AD.
+The **Required Permissions** blade now shows that permissions to your application are granted to both Azure AD and the Azure Resource Manager API. Permissions to Azure AD are granted by default when you first register your app with Azure AD.
![Delegate permissions to the Azure Resource Manager API](./media/batch-aad-auth-management/required-permissions-management-plane.png)
private const string RedirectUri = "http://myaccountmanagementsample";
## Acquire an Azure AD authentication token
-After you register the AccountManagement sample in the Azure AD tenant and update the sample source code with your values, the sample is ready to authenticate using Azure AD. When you run the sample, the ADAL attempts to acquire an authentication token. At this step, it prompts you for your Microsoft credentials:
+After you register the AccountManagement sample in the Azure AD tenant and update the sample source code with your values, the sample is ready to authenticate using Azure AD. When you run the sample, MSAL attempts to acquire an authentication token. At this step, it prompts you for your Microsoft credentials:
```csharp // Obtain an access token using the "common" AAD resource. This allows the application
After you provide your credentials, the sample application can proceed to issue
- For more information on running the [AccountManagement sample application](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/AccountManagement), see [Manage Batch accounts and quotas with the Batch Management client library for .NET](batch-management-dotnet.md). - To learn more about Azure AD, see the [Azure Active Directory Documentation](../active-directory/index.yml).-- In-depth examples showing how to use ADAL are available in the [Azure Code Samples](https://azure.microsoft.com/resources/samples/?service=active-directory) library.
+- In-depth examples showing how to use MSAL are available in the [Azure Code Samples](https://azure.microsoft.com/resources/samples/?service=active-directory) library.
- To authenticate Batch service applications using Azure AD, see [Authenticate Batch service solutions with Active Directory](batch-aad-auth.md).
batch Batch Aad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth.md
Use the **Azure Batch resource endpoint** to acquire a token for authenticating
## Register your application with a tenant
-The first step in using Azure AD to authenticate is registering your application in an Azure AD tenant. Registering your application enables you to call the Azure [Active Directory Authentication Library](../active-directory/azuread-dev/active-directory-authentication-libraries.md) (ADAL) from your code. The ADAL provides an API for authenticating with Azure AD from your application. Registering your application is required whether you plan to use integrated authentication or a service principal.
+The first step in using Azure AD to authenticate is registering your application in an Azure AD tenant. Registering your application enables you to call the [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (MSAL) from your code. MSAL provides an API for authenticating with Azure AD from your application. Registering your application is required whether you plan to use integrated authentication or a service principal.
When you register your application, you supply information about your application to Azure AD. Azure AD then provides an application ID (also called a *client ID*) that you use to associate your application with Azure AD at runtime. To learn more about the application ID, see [Application and service principal objects in Azure Active Directory](../active-directory/develop/app-objects-and-service-principals.md).
The code examples in this section show how to authenticate with Azure AD using i
### Code example: Using Azure AD integrated authentication with Batch .NET
-To authenticate with integrated authentication from Batch .NET, reference the [Azure Batch .NET](https://www.nuget.org/packages/Microsoft.Azure.Batch/) package and the [ADAL](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory/) package.
+To authenticate with integrated authentication from Batch .NET, reference the [Azure Batch .NET](https://www.nuget.org/packages/Microsoft.Azure.Batch/) package and the [MSAL](https://www.nuget.org/packages/Microsoft.Identity.Client/) package.
Include the following `using` statements in your code: ```csharp using Microsoft.Azure.Batch; using Microsoft.Azure.Batch.Auth;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using Microsoft.Identity.Client;
``` Reference the Azure AD endpoint in your code, including the tenant ID. To retrieve the tenant ID, follow the steps outlined in [Get the tenant ID for your Azure Active Directory](#get-the-tenant-id-for-your-active-directory):
Also copy the redirect URI that you specified, if you registered your applicatio
private const string RedirectUri = "http://mybatchdatasample"; ```
-Write a callback method to acquire the authentication token from Azure AD. The **GetAuthenticationTokenAsync** callback method shown here calls ADAL to authenticate a user who is interacting with the application. The **AcquireTokenAsync** method provided by ADAL prompts the user for their credentials, and the application proceeds once the user provides them (unless it has already cached credentials):
+Write a callback method to acquire the authentication token from Azure AD. The **GetAuthenticationTokenAsync** callback method shown here calls MSAL to authenticate a user who is interacting with the application. The interactive token acquisition prompts the user for their credentials, and the application proceeds once the user provides them (unless a cached token is already available):
```csharp public static async Task<string> GetAuthenticationTokenAsync()
public static void PerformBatchOperations()
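
For reference, a hedged sketch of an equivalent MSAL.NET callback follows. It assumes `app` is an `IPublicClientApplication` built with your client ID and redirect URI, and the Batch scope string is an assumption to confirm for your cloud environment:

```csharp
// Sketch only: prefer a cached token, and fall back to an interactive prompt.
// Requires `using System.Linq;` and `using Microsoft.Identity.Client;`.
private static readonly string[] BatchScopes = { "https://batch.core.windows.net/.default" };

public static async Task<string> GetAuthenticationTokenAsync()
{
    var accounts = await app.GetAccountsAsync();
    try
    {
        AuthenticationResult result = await app
            .AcquireTokenSilent(BatchScopes, accounts.FirstOrDefault())
            .ExecuteAsync();
        return result.AccessToken;
    }
    catch (MsalUiRequiredException)
    {
        AuthenticationResult result = await app
            .AcquireTokenInteractive(BatchScopes)
            .ExecuteAsync();
        return result.AccessToken;
    }
}
```
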
### Code example: Using an Azure AD service principal with Batch .NET
-To authenticate with a service principal from Batch .NET, reference the [Azure Batch .NET](https://www.nuget.org/packages/Azure.Batch/) package and the [ADAL](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory/) package.
+To authenticate with a service principal from Batch .NET, reference the [Azure Batch .NET](https://www.nuget.org/packages/Azure.Batch/) package and the [MSAL](https://www.nuget.org/packages/Microsoft.Identity.Client/) package.
Include the following `using` statements in your code: ```csharp using Microsoft.Azure.Batch; using Microsoft.Azure.Batch.Auth;
-using Microsoft.IdentityModel.Clients.ActiveDirectory;
+using Microsoft.Identity.Client;
``` Reference the Azure AD endpoint in your code, including the tenant ID. When using a service principal, you must provide a tenant-specific endpoint. To retrieve the tenant ID, follow the steps outlined in [Get the tenant ID for your Azure Active Directory](#get-the-tenant-id-for-your-active-directory):
Specify the secret key that you copied from the Azure portal:
private const string ClientKey = "<secret-key>"; ```
-Write a callback method to acquire the authentication token from Azure AD. The **GetAuthenticationTokenAsync** callback method shown here calls ADAL for unattended authentication:
+Write a callback method to acquire the authentication token from Azure AD. The **GetAuthenticationTokenAsync** callback method shown here calls MSAL for unattended authentication:
```csharp public static async Task<string> GetAuthenticationTokenAsync()
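    // Hedged sketch, not the sample's exact code: with MSAL.NET, a service principal
    // acquires a token through a confidential client application. The ClientId,
    // ClientKey, and AuthorityUri names and the Batch scope below are assumptions
    // to adapt to the constants defined earlier in this article.
    IConfidentialClientApplication app = ConfidentialClientApplicationBuilder
        .Create(ClientId)
        .WithClientSecret(ClientKey)
        .WithAuthority(AuthorityUri)
        .Build();

    AuthenticationResult result = await app
        .AcquireTokenForClient(new[] { "https://batch.core.windows.net/.default" })
        .ExecuteAsync();

    return result.AccessToken;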
batch Batch Management Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-management-dotnet.md
The Batch Management .NET library is an Azure resource provider client, and is u
To see Batch Management .NET in action, check out the [AccountManagement](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/AccountManagement) sample project on GitHub. The AccountManagement sample application demonstrates the following operations:
-1. Acquire a security token from Azure AD by using [ADAL](../active-directory/azuread-dev/active-directory-authentication-libraries.md). If the user is not already signed in, they are prompted for their Azure credentials.
+1. Acquire a security token from Azure AD by using the [Microsoft Authentication Library (MSAL)](../active-directory/develop/msal-net-acquire-token-silently.md). If the user isn't already signed in, they're prompted for their Azure credentials.
2. With the security token obtained from Azure AD, create a [SubscriptionClient](/dotnet/api/microsoft.azure.management.resourcemanager.subscriptionclient) to query Azure for a list of subscriptions associated with the account. The user can select a subscription from the list if it contains more than one subscription. 3. Get credentials associated with the selected subscription. 4. Create a [ResourceManagementClient](/dotnet/api/microsoft.azure.management.resourcemanager.resourcemanagementclient) object by using the credentials, as sketched below.
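
A hedged sketch of steps 2 through 4 follows; it assumes a token (`accessToken`) acquired as in step 1 and the `Microsoft.Azure.Management.ResourceManager` client types that the sample links to, so treat the exact calls as illustrative rather than the sample's code:

```csharp
using System.Linq;
using Microsoft.Azure.Management.ResourceManager;
using Microsoft.Rest;

// Wrap the Azure AD access token for use with the management clients.
var credentials = new TokenCredentials(accessToken);

// Step 2: list the subscriptions the signed-in user can access.
using var subscriptionClient = new SubscriptionClient(credentials);
var subscriptions = await subscriptionClient.Subscriptions.ListAsync();
var subscriptionId = subscriptions.First().SubscriptionId;

// Steps 3-4: create a Resource Manager client scoped to the chosen subscription.
using var resourceClient = new ResourceManagementClient(credentials)
{
    SubscriptionId = subscriptionId
};
```
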
batch Batch Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-virtual-network.md
Title: Provision a pool in a virtual network description: How to create a Batch pool in an Azure virtual network so that compute nodes can communicate securely with other VMs in the network, such as a file server. Previously updated : 10/26/2022 Last updated : 11/14/2022
In order to provide the necessary communication between compute nodes and the Ba
configured such that: * Inbound TCP traffic on ports 29876 and 29877 from Batch service IP addresses that correspond to the
-`BatchNodeManagement` service tag. This rule is only created in `classic` pool communication mode.
+BatchNodeManagement.*region* service tag. This rule is only created in `classic` pool communication mode.
* Inbound TCP traffic on port 22 (Linux nodes) or port 3389 (Windows nodes) to permit remote access. For certain types of multi-instance tasks on Linux (such as MPI), you'll need to also allow SSH port 22 traffic for IPs in the subnet containing the Batch compute nodes. This traffic may be blocked per subnet-level NSG rules (see below).
-* Outbound any traffic on port 443 to Batch service IP addresses that correspond to the `BatchNodeManagement` service tag.
+* Outbound traffic on port 443 (any protocol) to Batch service IP addresses that correspond to the BatchNodeManagement.*region* service tag.
* Outbound traffic on any port to the virtual network. This rule may be amended per subnet-level NSG rules (see below). * Outbound traffic on any port to the Internet. This rule may be amended per subnet-level NSG rules (see below).
If you have an NSG associated with the subnet for Batch compute nodes, you must
NSG with at least the inbound and outbound security rules that are shown in the following tables. > [!WARNING]
-> Batch service IP addresses can change over time. Therefore, we highly recommend that you use the `BatchNodeManagement` service tag (or a regional variant) for the NSG rules indicated in the following tables. Avoid populating NSG rules with specific Batch service IP addresses.
+> Batch service IP addresses can change over time. Therefore, we highly recommend that you use the
+> BatchNodeManagement.*region* service tag (or a regional variant) for the NSG rules indicated in the
+> following tables. Avoid populating NSG rules with specific Batch service IP addresses.
#### Inbound security rules | Source Service Tag or IP Addresses | Destination Ports | Protocol | Pool Communication Mode | Required | |-|-|-|-|-|
-| `BatchNodeManagement.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 29876-29877 | TCP | Classic | Yes |
+| BatchNodeManagement.*region* [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 29876-29877 | TCP | Classic | Yes |
| Source IP addresses for remotely accessing compute nodes | 3389 (Windows), 22 (Linux) | TCP | Classic or Simplified | No | Configure inbound traffic on port 3389 (Windows) or 22 (Linux) only if you need to permit remote access
through configuring [pool endpoints](pool-endpoint-configuration.md).
| Destination Service Tag | Destination Ports | Protocol | Pool Communication Mode | Required | |-|-|-|-|-|
-| `BatchNodeManagement.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | * | Simplified | Yes |
-| `Storage.<region>` [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | TCP | Classic | Yes |
+| BatchNodeManagement.*region* [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | * | Simplified | Yes |
+| Storage.*region* [service tag](../../articles/virtual-network/network-security-groups-overview.md#service-tags) | 443 | TCP | Classic | Yes |
-Outbound to `BatchNodeManagement.<region>` service tag is required in `classic` pool communication mode
+Outbound to BatchNodeManagement.*region* service tag is required in `classic` pool communication mode
if using Job Manager tasks or if your tasks must communicate back to the Batch service. For outbound to
-`BatchNodeManagement.<region>` in `simplified` pool communication mode, the Batch service currently only
+BatchNodeManagement.*region* in `simplified` pool communication mode, the Batch service currently only
uses TCP protocol, but UDP may be required for future compatibility. For [pools without public IP addresses](simplified-node-communication-pool-no-public-ip.md) using `simplified` communication mode and with a node management private endpoint, an NSG isn't needed.
-For more information about outbound security rules for the `BatchNodeManagement` service tag, see
+For more information about outbound security rules for the BatchNodeManagement.*region* service tag, see
[Use simplified compute node communication](simplified-compute-node-communication.md). ## Pools in the Cloud Services Configuration
To ensure that the nodes in your pool work in a VNet that has forced tunneling e
For classic communication mode pools: -- The Batch service needs to communicate with nodes for scheduling tasks. To enable this communication, add a UDR corresponding to the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) in the region where your Batch account exists. Set the **Next hop type** to **Internet**.
+- The Batch service needs to communicate with nodes for scheduling tasks. To enable this communication, add a UDR corresponding to the BatchNodeManagement.*region* [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) in the region where your Batch account exists. Set the **Next hop type** to **Internet**.
- Ensure that outbound TCP traffic to Azure Storage on destination port 443 (specifically, URLs of the form `*.table.core.windows.net`, `*.queue.core.windows.net`, and `*.blob.core.windows.net`) isn't blocked by your on-premises network. For [simplified communication mode](simplified-compute-node-communication.md) pools without using node management private endpoint: -- Ensure that outbound TCP/UDP traffic to the Azure Batch `BatchNodeManagement.<region>` service tag on destination port 443 isn't blocked by your on-premises network. Currently only TCP protocol is used, but UDP may be required for future compatibility.
+- Ensure that outbound TCP/UDP traffic to the Azure Batch BatchNodeManagement.*region* service tag on destination port 443 isn't blocked by your on-premises network. Currently only TCP protocol is used, but UDP may be required for future compatibility.
For all pools: - If you use virtual file mounts, review the [networking requirements](virtual-file-mount.md#networking-requirements), and ensure that no required traffic is blocked. > [!WARNING]
-> Batch service IP addresses can change over time. To prevent outages due to Batch service IP address changes, do not directly specify IP addresses. Instead use the `BatchNodeManagement.<region>` [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes).
+> Batch service IP addresses can change over time. To prevent outages due to Batch service IP address changes, do not directly specify IP addresses. Instead use the BatchNodeManagement.*region* [service tag](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes).
## Next steps
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 10/31/2022 Last updated : 11/14/2022
Review the following guidance related to connectivity in your Batch solutions.
### Network Security Groups (NSGs) and User Defined Routes (UDRs)
-When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the `BatchNodeManagement.<region>` service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
+When provisioning [Batch pools in a virtual network](batch-virtual-network.md), ensure that you're closely following the guidelines regarding the use of the BatchNodeManagement.*region* service tag, ports, protocols and direction of the rule. Use of the service tag is highly recommended; don't use underlying Batch service IP addresses as they can change over time. Using Batch service IP addresses directly can cause instability, interruptions, or outages for your Batch pools.
-For User Defined Routes (UDRs), it's recommended to use `BatchNodeManagement.<region>` [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as they can change over time.
+For User Defined Routes (UDRs), it's recommended to use BatchNodeManagement.*region* [service tags](../virtual-network/virtual-networks-udr-overview.md#service-tags-for-user-defined-routes) instead of Batch service IP addresses as they can change over time.
### Honoring DNS
batch Simplified Node Communication Pool No Public Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/simplified-node-communication-pool-no-public-ip.md
Title: Create a simplified node communication pool without public IP addresses (preview) description: Learn how to create an Azure Batch simplified node communication pool without public IP addresses. Previously updated : 11/08/2022 Last updated : 11/14/2022
The example below shows how to use the [Batch Service REST API](/rest/api/batchs
### REST API URI ```http
-POST {batchURL}/pools?api-version=2020-03-01.11.0
+POST {batchURL}/pools?api-version=2022-10-01.16.0
client-request-id: 00000000-0000-0000-0000-000000000000 ```
client-request-id: 00000000-0000-0000-0000-000000000000
```json "pool": {
- "id": "pool2",
- "vmSize": "standard_a1",
+ "id": "pool-npip",
+ "vmSize": "standard_d2s_v3",
"virtualMachineConfiguration": { "imageReference": { "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "18.04-lts"
+ "offer": "0001-com-ubuntu-server-jammy",
+ "sku": "22_04-lts"
},
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ "nodeAgentSKUId": "batch.node.ubuntu 22.04"
} "networkConfiguration": { "subnetId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Network/virtualNetworks/<your_vnet_name>/subnets/<your_subnet_name>",
client-request-id: 00000000-0000-0000-0000-000000000000
} }, "resizeTimeout": "PT15M",
- "targetDedicatedNodes": 5,
+ "targetDedicatedNodes": 2,
"targetLowPriorityNodes": 0,
- "taskSlotsPerNode": 3,
+ "taskSlotsPerNode": 1,
"taskSchedulingPolicy": { "nodeFillType": "spread" }, "enableAutoScale": false,
- "enableInterNodeCommunication": true,
- "metadata": [
- {
- "name": "myproperty",
- "value": "myvalue"
- }
- ]
+ "enableInterNodeCommunication": false,
+ "targetNodeCommunicationMode": "simplified"
} ```
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
VNet is the fundamental building block for your private network in Azure. VNet enables many Azure resources to securely communicate with each other, the internet, and on-premises networks. VNet is like a traditional network you would operate in your own data center. However, VNet also has the benefits of Azure infrastructure, scale, availability, and isolation. ## How VNet Injection works in Chaos Studio
-VNet injection allows Chaos resource provider to inject containerized workloads into your VNet. This means that resources without public internet access can be accessed via a private IP address on the VNet. Below are the steps you can follow for vnet injection:
+VNet injection allows the Chaos resource provider to inject containerized workloads into your VNet. This means that resources without public endpoints can be accessed via a private IP address on the VNet. Below are the steps you can follow for VNet injection:
1. Register the Microsoft.ContainerInstance resource provider with your subscription (if applicable). 2. Re-register the Microsoft.Chaos resource provider with your subscription. 3. Create a subnet named ChaosStudioSubnet in the VNet you want to inject into.
cloud-services-extended-support In Place Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-overview.md
For more information, see [Overview of Platform-supported migration of IaaS reso
## Supported resources and features available for migration associated with Cloud Services (classic) - Storage Accounts-- Virtual Networks
+- Virtual Networks (Azure Batch not supported)
- Network Security Groups - Reserved Public IP addresses - Endpoint Access Control Lists
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
These are top scenarios involving combinations of resources, features and Cloud
| Affinity Groups | Not supported. Remove any affinity groups before migration. | | Virtual networks using [virtual network peering](../virtual-network/virtual-network-peering-overview.md)| Before migrating a virtual network that is peered to another virtual network, delete the peering, migrate the virtual network to Resource Manager and re-create peering. This can cause downtime depending on the architecture. | | Virtual networks that contain App Service environments | Not supported |
+| Virtual networks with Azure Batch Deployments | Not supported |
| Virtual networks that contain HDInsight services | Not supported. | Virtual networks that contain Azure API Management deployments | Not supported. <br><br> To migrate the virtual network, change the virtual network of the API Management deployment. This is a no downtime operation. | | Classic Express Route circuits | Not supported. <br><br>These circuits need to be migrated to Azure Resource Manager before beginning PaaS migration. To learn more, see [Moving ExpressRoute circuits from the classic to the Resource Manager deployment model](../expressroute/expressroute-howto-move-arm.md). |
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
[4038779]: https://support.microsoft.com/kb/4038779 [4038786]: https://support.microsoft.com/kb/4038786 [4038793]: https://support.microsoft.com/kb/4038793
-[4040966]: https://support.microsoft.com/kb/4040966
+[4040966]: https://support.microsoft.com/topic/description-of-the-security-only-update-for-the-net-framework-3-5-1-for-windows-7-sp1-and-windows-server-2008-r2-sp1-september-12-2017-bd775b30-8761-a576-7c43-19ce534267f0
[4040960]: https://support.microsoft.com/kb/4040960 [4040965]: https://support.microsoft.com/kb/4040965 [4040959]: https://support.microsoft.com/kb/4040959
cloud-services Cloud Services Nodejs Develop Deploy Express App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-develop-deploy-express-app.md
Node.js includes a minimal set of functionality in the core runtime. Developers often use 3rd party modules to provide additional functionality when developing a Node.js application. In this tutorial
-you will create a new application using the [Express](https://github.com/expressjs/express) module, which provides an MVC framework for creating Node.js web applications.
+you'll create a new application using the [Express](https://github.com/expressjs/express) module, which provides an MVC framework for creating Node.js web applications.
A screenshot of the completed application is below:
A screenshot of the completed application is below:
## Create a Cloud Service Project [!INCLUDE [install-dev-tools](../../includes/install-dev-tools.md)]
-Perform the following steps to create a new cloud service project named 'expressapp':
+Perform the following steps to create a new cloud service project named `expressapp`:
1. From the **Start Menu** or **Start Screen**, search for **Windows PowerShell**. Finally, right-click **Windows PowerShell** and select **Run As Administrator**. ![Azure PowerShell icon](./media/cloud-services-nodejs-develop-deploy-express-app/azure-powershell-start.png)
-2. Change directories to the **c:\\node** directory and then enter the following commands to create a new solution named **expressapp** and a web role named **WebRole1**:
+2. Change directories to the **c:\\node** directory and then enter the following commands to create a new solution named `expressapp` and a web role named **WebRole1**:
```powershell PS C:\node> New-AzureServiceProject expressapp
Perform the following steps to create a new cloud service project named 'express
PS C:\node\expressapp\WebRole1> express ```
- You will be prompted to overwrite your earlier application. Enter **y** or **yes** to continue. Express will generate the app.js file and a folder structure for building your application.
+ You'll be prompted to overwrite your earlier application. Enter **y** or **yes** to continue. Express will generate the app.js file and a folder structure for building your application.
![The output of the express command](./media/cloud-services-nodejs-develop-deploy-express-app/node23.png) 3. To install additional dependencies defined in the package.json file,
Perform the following steps to create a new cloud service project named 'express
var app = require('./app'); ```
- This change is required since we moved the file (formerly **bin/www**,) to the same directory as the app file being required. After making this change, save the **server.js** file.
+ This change is required since we moved the file (formerly `bin/www`) to the same directory as the app file being required. After making this change, save the **server.js** file.
6. Use the following command to run the application in the Azure emulator: ```powershell
Azure".
![The contents of the index.jade file.](./media/cloud-services-nodejs-develop-deploy-express-app/getting-started-19.png)
- Jade is the default view engine used by Express applications. For more
- information on the Jade view engine, see [http://jade-lang.com][http://jade-lang.com].
+ Jade is the default view engine used by Express applications.
+
2. Modify the last line of text by appending **in Azure**. ![The index.jade file, the last line reads: p Welcome to \#{title} in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node31.png) 3. Save the file and exit Notepad.
-4. Refresh your browser and you will see your changes.
+4. Refresh your browser and you'll see your changes.
![A browser window, the page contains Welcome to Express in Azure](./media/cloud-services-nodejs-develop-deploy-express-app/node32.png)
For more information, see the [Node.js Developer Center](/azure/developer/javasc
[Node.js Web Application]: https://www.windowsazure.com/develop/nodejs/tutorials/getting-started/ [Express]: https://expressjs.com/
-[http://jade-lang.com]: http://jade-lang.com
+[http://jade-lang.com]: http://jade-lang.com
cloud-services Cloud Services Python Ptvs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-ptvs.md
$is_python2 = $env:PYTHON2 -eq "on"
$nl = [Environment]::NewLine if (-not $is_emulated){
- Write-Output "Checking if python is installed...$nl"
+ Write-Output "Checking if Python is installed...$nl"
if ($is_python2) { & "${env:SystemDrive}\Python27\python.exe" -V | Out-Null }
cognitive-services Concept Generating Thumbnails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-generating-thumbnails.md
Previously updated : 03/11/2018 Last updated : 11/09/2022
The Computer Vision smart-cropping utility takes one or more aspect ratios in th
> [!IMPORTANT] > This feature uses face detection to help determine important regions in the image. The detection does not involve distinguishing one face from another face, predicting or classifying facial attributes, or creating a facial template (a unique set of numbers generated from an image that represents the distinctive features of a face).
+## Examples
+
+The generated bounding box can vary widely depending on what you specify for aspect ratio, as shown in the following images.
+
+| Aspect ratio | Bounding box |
+|-|--|
+| original | :::image type="content" source="Images/cropped-original.png" alt-text="Photo of a man with a dog at a table."::: |
+| 0.75 | :::image type="content" source="Images/cropped-075-bb.png" alt-text="Photo of a man with a dog at a table. A 0.75 ratio bounding box is drawn."::: |
+| 1.00 | :::image type="content" source="Images/cropped-1-0-bb.png" alt-text="Photo of a man with a dog at a table. A 1.00 ratio bounding box is drawn."::: |
+| 1.50 | :::image type="content" source="Images/cropped-150-bb.png" alt-text="Photo of a man with a dog at a table. A 1.50 ratio bounding box is drawn."::: |
++ ## Use the API
cognitive-services Add Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/add-faces.md
Title: "Example: Add faces to a PersonGroup - Face"
description: This guide demonstrates how to add a large number of persons and faces to a PersonGroup object with the Azure Cognitive Services Face service. -+ Last updated 04/10/2019-+ ms.devlang: csharp
cognitive-services Use Large Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/use-large-scale.md
Title: "Example: Use the Large-Scale feature - Face"
description: This guide is an article on how to scale up from existing PersonGroup and FaceList objects to LargePersonGroup and LargeFaceList objects. -+ Last updated 05/01/2019-+ ms.devlang: csharp
cognitive-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/identity-api-reference.md
Title: API Reference - Face
description: API reference provides information about the Person, LargePersonGroup/PersonGroup, LargeFaceList/FaceList, and Face Algorithms APIs. -+ Last updated 02/17/2021-+ # Face API reference list
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
# What is Custom Neural Voice?
-Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice by providing your audio samples as training data.
+Custom Neural Voice (CNV) is a text-to-speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. With Custom Neural Voice, you can build a highly natural-sounding voice for your brand or characters by providing human speech samples as training data.
> [!IMPORTANT] > Custom Neural Voice access is [limited](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext) based on eligibility and usage criteria. Request access on the [intake form](https://aka.ms/customneural).
-Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=stt-tts). The prebuilt neural voices work very well in most text-to-speech scenarios.
+Out of the box, [text-to-speech](text-to-speech.md) can be used with prebuilt neural voices for each [supported language](language-support.md?tabs=stt-tts). The prebuilt neural voices work very well in most text-to-speech scenarios if a unique voice isn't required.
Custom Neural Voice is based on the neural text-to-speech technology and the multilingual, multi-speaker, universal model. You can create synthetic voices that are rich in speaking styles, or adaptable cross languages. The realistic and natural sounding voice of Custom Neural Voice can represent brands, personify machines, and allow users to interact with applications conversationally. See the [supported languages](language-support.md?tabs=stt-tts) for Custom Neural Voice.
Before you get started in Speech Studio, here are some considerations:
- [Design a persona](record-custom-voice-samples.md#choose-your-voice-talent) of the voice that represents your brand by using a persona brief document. This document defines elements such as the features of the voice, and the character behind the voice. This helps to guide the process of creating a custom neural voice model, including defining the scripts, selecting your voice talent, training, and voice tuning. - [Select the recording script](record-custom-voice-samples.md#script-selection-criteria) to represent the user scenarios for your voice. For example, you can use the phrases from bot conversations as your recording script if you're creating a customer service bot. Include different sentence types in your scripts, including statements, questions, and exclamations.
-Here's an overview of the steps to create a Custom Neural Voice in Speech Studio:
+Here's an overview of the steps to create a custom neural voice in Speech Studio:
-1. [Create a project](how-to-custom-voice.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country and language.
+1. [Create a project](how-to-custom-voice.md) to contain your data, voice models, tests, and endpoints. Each project is specific to a country and language. If you are going to create multiple voices, it's recommended that you create a project for each voice.
1. [Set up voice talent](how-to-custom-voice.md). Before you can train a neural voice, you must submit a recording of the voice talent's consent statement. The voice talent statement is a recording of the voice talent reading a statement that they consent to the usage of their speech data to train a custom voice model. 1. [Prepare training data](how-to-custom-voice-prepare-data.md) in the right [format](how-to-custom-voice-training-data.md). It's a good idea to capture the audio recordings in a professional quality recording studio to achieve a high signal-to-noise ratio. The quality of the voice model depends heavily on your training data. Consistent volume, speaking rate, pitch, and consistency in expressive mannerisms of speech are required. 1. [Train your voice model](how-to-custom-voice-create-voice.md). Select at least 300 utterances to create a custom neural voice. A series of data quality checks are automatically performed when you upload them. To build high-quality voice models, you should fix any errors and submit again.
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
After you validate your data files, you can use them to build your Custom Neural
- [Neural](?tabs=neural#train-your-custom-neural-voice-model): Create a voice in the same language of your training data, select **Neural** method. -- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=stt-tts) for cross lingual training. You don't need to prepare training data in the target language, but your test script must be in the target language.
+- [Neural - cross lingual](?tabs=crosslingual#train-your-custom-neural-voice-model) (Preview): Create a secondary language for your voice model to speak a different language from your training data. For example, with the `zh-CN` training data, you can create a voice that speaks `en-US`. The language of the training data and the target language must both be one of the [languages that are supported](language-support.md?tabs=stt-tts) for cross lingual voice training. You don't need to prepare training data in the target language, but your test script must be in the target language.
-- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles/emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobook and content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples as additional training data for the same voice.
+- [Neural - multi style](?tabs=multistyle#train-your-custom-neural-voice-model) (Preview): Create a custom neural voice that speaks in multiple styles and emotions, without adding new training data. Multi-style voices are particularly useful for video game characters, conversational chatbots, audiobooks, content readers, and more. To create a multi-style voice, you just need to prepare a set of general training data (at least 300 utterances), and select one or more of the preset target speaking styles. You can also create up to 10 custom styles by providing style samples as additional training data for the same voice.
+
+The language of the training data must be one of the [languages that are supported](language-support.md?tabs=stt-tts) for Custom Neural Voice training, whether you use the neural, cross-lingual, or multi-style training method.
## Train your Custom Neural Voice model
To create a custom neural voice in Speech Studio, follow these steps for one of
1. Select **Next**. 1. Optionally, you can add up to 10 custom speaking styles: 1. Select **Add a custom style** and thoughtfully enter a custom style name of your choice. This name will be used by your application within the `style` element of [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md#adjust-speaking-styles). You can also use the custom style name as SSML via the [Audio Content Creation](how-to-audio-content-creation.md) tool in [Speech Studio](https://speech.microsoft.com/portal/audiocontentcreation).
- 1. Select style samples as training data.
+ 1. Select style samples as training data. It's recommended that the style samples are all from the same voice talent profile.
1. Select **Next**. 1. Select a speaker file with the voice talent statement that corresponds to the speaker in your training data. 1. Select **Next**.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Previously updated : 03/22/2022 Last updated : 11/14/2022
-# Speech Service release notes
+# Speech service release notes
-See below for information about changes to Speech services and resources.
+See below for information about new features and other changes to the Speech service.
## What's new?
-* Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022. See details below.
-* Custom speech-to-text container v3.1.0 released in March 2022, with support to get display models.
-* TTS Service August 2022, five new voices in public preview were released.
+* Speech SDK 1.24.0 and Speech CLI 1.24.0 were released in October 2022.
+* Speech-to-text and text-to-speech container versions were updated in October 2022.
+* TTS Service November 2022, several voices for the `es-MX`, `it-IT`, and `pt-BR` locales were made generally available.
* TTS Service September 2022, all the prebuilt neural voices have been upgraded to high-fidelity voices with 48kHz sample rate. ## Release notes
communication-services Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights.md
The **Quality** tab under **Voice and video** allows users to inspect the qualit
- The proportion of **Impacted calls**, where an impacted call is defined as a call that has at least one poor quality stream -- **Participant end reasons**, which keep track of the reason why a participant left a call. End reasons are [SIP codes](https://en.wikipedia.org/wiki/List_of_SIP_response_codes), which are numeric codes that describe the specific status of a signaling request. SIP codes can be grouped into six categories: *Success*, *Client Failure*, *Server Failure*, *Global Failure*, *Redirection*, and *Provisional*. The distribution of SIP code categories are shown in the pie chart on the left hand side, while a list of the specific SIP codes for participant end reasons is provided on the right hand side
+- **Participant end reasons**, which keep track of the reason why a participant left a call. End reasons are [SIP codes](https://en.wikipedia.org/wiki/List_of_SIP_response_codes), which are numeric codes that describe the specific status of a signaling request. SIP codes can be grouped into six categories: *Success*, *Client Failure*, *Server Failure*, *Global Failure*, *Redirection*, and *Provisional*. The distribution of SIP code categories is shown in the pie chart on the left hand side, while a list of the specific SIP codes for participant end reasons is provided on the right hand side
:::image type="content" source="media\workbooks\voice-and-video-quality.png" alt-text="Voice and video quality":::
The **SMS** tab displays the operations and results for SMS usage through an Azu
The **Email** tab displays delivery status, email size, and email count: :::image type="content" source="media\workbooks\azure-communication-services-insights-email.png" alt-text="Screenshot displays email count, size and email delivery status level that illustrate email insights":::
+The **Recording** tab displays data relevant to total recordings, recording format, recording channel types, and the number of recordings per call:
++++ ## Editing dashboards The **Insights** dashboards provided with your **Communication Service** resource can be customized by clicking on the **Edit** button on the top navigation bar:
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
For each area, we have external pages to track and review our SDKs. You can cons
| - | | - | | | | -- | - | | Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) | | Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
+| Call Automation | - | [NuGet](https://www.nuget.org/packages/Azure.Communication.CallAutomation) | - | [Maven](https://search.maven.org/search?q=a:azure-communication-callautomation) | - | - | - |
| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - | | Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - | | Email | [npm](https://www.npmjs.com/package/@azure/communication-email) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Email) | [PyPi](https://pypi.org/project/azure-communication-email/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-email) | - | - | - |
confidential-computing Guest Attestation Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/guest-attestation-example.md
Previously updated : 09/29/2022 Last updated : 11/14/2022
To use a sample application in C++ for use with the guest attestation APIs, foll
1. Sign in to your VM.
-1. Clone the sample Linux application.
+1. Clone the [sample Linux application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-platform-checker-exe/Linux).
1. Install the `build-essential` package. This package installs everything required for compiling the sample application.
To use a sample application in C++ for use with the guest attestation APIs, foll
#### [Windows](#tab/windows) 1. Install Visual Studio with the [**Desktop development with C++** workload](/cpp/build/vscpp-step-0-installation).
-1. Clone the sample Windows application.
+1. Clone the [sample Windows application](https://github.com/Azure/confidential-computing-cvm-guest-attestation/tree/main/cvm-platform-checker-exe/Windows).
1. Build your project. From the **Build** menu, select **Build Solution**. 1. After the build succeeds, go to the `Release` build folder. 1. Run the application by running the `AttestationClientApp.exe`.
confidential-ledger Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/architecture.md
Previously updated : 04/15/2021 Last updated : 11/14/2022
This image provides an architectural overview of Azure confidential ledger, and
- [Overview of Microsoft Azure confidential ledger](overview.md) - [Authenticating Azure confidential ledger nodes](authenticate-ledger-nodes.md)
+- [Azure Confidential Ledger write transaction receipts](write-transaction-receipts.md)
confidential-ledger Authenticate Ledger Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/authenticate-ledger-nodes.md
Previously updated : 04/15/2021 Last updated : 11/14/2022
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
Previously updated : 04/15/2021 Last updated : 11/14/2022
confidential-ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/overview.md
Previously updated : 04/15/2021 Last updated : 11/14/2022
confidential-ledger Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-portal.md
Title: Quickstart ΓÇô Microsoft Azure confidential ledger with the Azure portal
description: Learn to use the Microsoft Azure confidential ledger through the Azure portal Previously updated : 10/18/2021 Last updated : 11/14/2022
confidential-ledger Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md
Title: Quickstart ΓÇô Microsoft Azure confidential ledger Python client library
description: Learn to use the Microsoft Azure confidential ledger client library for Python Previously updated : 04/27/2021 Last updated : 11/14/2022
We'll finish setup by setting some variables for use in your application: the re
> Each ledger must have a globally unique name. Replace \<your-unique-ledger-name\> with the name of your ledger in the following example. ```python
-resource_group = "myResourceGroup"
+resource_group = "<azure-resource-group>"
ledger_name = "<your-unique-ledger-name>" subscription_id = "<azure-subscription-id>"
network_identity = identity_client.get_ledger_identity(
ledger_tls_cert_file_name = "networkcert.pem" with open(ledger_tls_cert_file_name, "w") as cert_file:
- cert_file.write(network_identity.ledger_tls_certificate)
+ cert_file.write(network_identity['ledgerTlsCertificate'])
``` Now we can use the network certificate, along with the ledger URL and our credentials, to create a confidential ledger client.
ledger_client = ConfidentialLedgerClient(
) ```
-We are prepared to write to the ledger. We will do so using the `create_ledger_entry` function.
+We are prepared to write to the ledger. We will do so using the `create_ledger_entry` function.
```python
-append_result = ledger_client.create_ledger_entry(entry_contents="Hello world!")
-print(append_result.transaction_id)
+sample_entry = {"contents": "Hello world!"}
+append_result = ledger_client.create_ledger_entry(entry=sample_entry)
+print(append_result['transactionId'])
``` The print function will return the transaction ID of your write to the ledger, which can be used to retrieve the message you wrote to the ledger. ```python
-latest_entry = ledger_client.get_current_ledger_entry(transaction_id=append_result.transaction_id)
+entry = ledger_client.get_ledger_entry(transaction_id=append_result['transactionId'])['entry']
+print(f"Entry (transaction id = {entry['transactionId']}) in collection {entry['collectionId']}: {entry['contents']}")
+```
+
+If you just want the latest transaction that was committed to the ledger, you can use the `get_current_ledger_entry` function.
++
+```python
+latest_entry = ledger_client.get_current_ledger_entry()
print(f"Current entry (transaction id = {latest_entry['transactionId']}) in collection {latest_entry['collectionId']}: {latest_entry['contents']}") ```
-The print function will return "Hello world!", as that is the message in the ledger that that corresponds to the transaction ID.
+The print function will return "Hello world!", as that's the message in the ledger that corresponds to the transaction ID and is the latest transaction.
## Full sample code
from azure.mgmt.confidentialledger.models import ConfidentialLedger
from azure.confidentialledger import ConfidentialLedgerClient from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
-from azure.confidentialledger import TransactionState
# Set variables
-rg = "myResourceGroup"
-ledger_name = "<unique-ledger-name>"
+resource_group = "<azure-resource-group>"
+ledger_name = "<your-unique-ledger-name>"
subscription_id = "<azure-subscription-id>" identity_url = "https://identity.confidential-ledger.core.azure.com"
ledger_properties = ConfidentialLedger(**properties)
# Create a ledger
-confidential_ledger_mgmt.ledger.begin_create(rg, ledger_name, ledger_properties)
+confidential_ledger_mgmt.ledger.begin_create(resource_group, ledger_name, ledger_properties)
# Get the details of the ledger you just created
-print(f"{rg} / {ledger_name}")
+print(f"{resource_group} / {ledger_name}")
print("Here are the details of your newly created ledger:")
-myledger = confidential_ledger_mgmt.ledger.get(rg, ledger_name)
+myledger = confidential_ledger_mgmt.ledger.get(resource_group, ledger_name)
print (f"- Name: {myledger.name}") print (f"- Location: {myledger.location}")
network_identity = identity_client.get_ledger_identity(
ledger_tls_cert_file_name = "networkcert.pem" with open(ledger_tls_cert_file_name, "w") as cert_file:
- cert_file.write(network_identity.ledger_tls_certificate)
+ cert_file.write(network_identity['ledgerTlsCertificate'])
ledger_client = ConfidentialLedgerClient(
ledger_client = ConfidentialLedgerClient(
) # Write to the ledger
-append_result = ledger_client.create_ledger_entry(entry_contents="Hello world!")
-print(append_result.transaction_id)
+sample_entry = {"contents": "Hello world!"}
+ledger_client.create_ledger_entry(entry=sample_entry)
# Read from the ledger
-entry = ledger_client.get_current_ledger_entry(transaction_id=append_result.transaction_id)
+latest_entry = ledger_client.get_current_ledger_entry()
print(f"Current entry (transaction id = {latest_entry['transactionId']}) in collection {latest_entry['collectionId']}: {latest_entry['contents']}") ```
+## Pollers
+
+If you'd like to wait for your write transaction to be committed to your ledger, you can use the `begin_create_ledger_entry` function. This will return a poller to wait until the entry is durably committed.
+
+```python
+sample_entry = {"contents": "Hello world!"}
+ledger_entry_poller = ledger_client.begin_create_ledger_entry(
+ entry=sample_entry
+)
+ledger_entry_result = ledger_entry_poller.result()
+```
+
+Querying an older ledger entry requires the ledger to read the entry from disk and validate it. You can use the `begin_get_ledger_entry` function to create a poller that will wait until the queried entry is in a ready state to view.
+
+```python
+get_entry_poller = ledger_client.begin_get_ledger_entry(
+ transaction_id=ledger_entry_result['transactionId']
+)
+entry = get_entry_poller.result()
+```
+ ## Clean up resources Other Azure confidential ledger articles can build upon this quickstart. If you plan to continue on to work with subsequent quickstarts and tutorials, you may wish to leave these resources in place.
az group delete --resource-group myResourceGroup
## Next steps - [Overview of Microsoft Azure confidential ledger](overview.md)
+- [Verify Azure Confidential Ledger write transaction receipts](verify-write-transaction-receipts.md)
confidential-ledger Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-template.md
Previously updated : 04/15/2021 Last updated : 11/14/2022 # Quickstart: Create an Microsoft Azure confidential ledger with an ARM template
confidential-ledger Register Ledger Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/register-ledger-resource-provider.md
Previously updated : 04/15/2021 Last updated : 11/14/2022
confidential-ledger Verify Write Transaction Receipts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-write-transaction-receipts.md
+
+ Title: Verify Azure Confidential Ledger write transaction receipts
+description: Verify Azure Confidential Ledger write transaction receipts
++ Last updated : 11/07/2022+++++
+# Verify Azure Confidential Ledger write transaction receipts
+
+An Azure Confidential Ledger write transaction receipt represents a cryptographic Merkle proof that the corresponding write transaction has been globally committed by the CCF network. Azure Confidential Ledger users can get a receipt over a committed write transaction at any point in time to verify that the corresponding write operation was successfully recorded into the immutable ledger.
+
+For more information about Azure Confidential Ledger write transaction receipts, see the [dedicated article](write-transaction-receipts.md).
+
+## Receipt verification steps
+
+A write transaction receipt can be verified by following the specific set of steps outlined in the following subsections. The same steps are outlined in the [CCF Documentation](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#receipt-verification).
+
+### Leaf node computation
+
+The first step is to compute the SHA-256 hash of the leaf node in the Merkle Tree corresponding to the committed transaction. A leaf node is composed of the ordered concatenation of the following fields that can be found in an Azure Confidential Ledger receipt, under `leaf_components`:
+
+1. `write_set_digest`
+2. SHA-256 digest of `commit_evidence`
+3. `claims_digest`
+
+These values need to be concatenated as arrays of bytes: both `write_set_digest` and `claims_digest` would need to be converted from strings of hexadecimal digits to arrays of bytes; on the other hand, the hash of `commit_evidence` (as an array of bytes) can be obtained by applying the SHA-256 hash function over the UTF-8 encoded `commit_evidence` string.
+
+The leaf node hash digest can then be computed by applying the SHA-256 hash function over the concatenation of the resulting byte arrays.
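+
+For illustration, here's a minimal sketch of this computation in Python; the `write_set_digest`, `commit_evidence`, and `claims_digest` values below are placeholders standing in for the `leaf_components` of a real receipt.
+
+```python
+from hashlib import sha256
+
+# Placeholder leaf_components values taken from a receipt
+write_set_digest = "8452624d10bdd79c408c0f062a1917aa96711ea062c508c745469636ae1460be"
+commit_evidence = "ce:2.40:f36ffe2930ec95d50ebaaec26e2bec56835abd051019eb270f538ab0744712a4"
+claims_digest = "0000000000000000000000000000000000000000000000000000000000000000"
+
+# SHA-256 digest of the UTF-8 encoded commit evidence string
+commit_evidence_digest = sha256(commit_evidence.encode()).digest()
+
+# Hash the ordered concatenation of the three components as bytes
+leaf_hash = sha256(
+    bytes.fromhex(write_set_digest)
+    + commit_evidence_digest
+    + bytes.fromhex(claims_digest)
+).hexdigest()
+```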
+
+### Root node computation
+
+The second step is to compute the SHA-256 hash of the root of the Merkle Tree at the time the transaction was committed. The computation is done by iteratively concatenating and hashing the result of the previous iteration (starting from the leaf node hash computed in the previous step) with the ordered nodes' hashes provided in the `proof` field of a receipt. The `proof` list is provided as an ordered list and its elements need to be iterated in the given order.
+
+The concatenation needs to be done on the bytes representation with respect to the relative order indicated in the objects provided in the `proof` field (either `left` or `right`).
+
+* If the key of the current element in `proof` is `left`, then the result of the previous iteration should be appended to the current element value.
+* If the key of the current element in `proof` is `right`, then the result of the previous iteration should be prepended to the current element value.
+
+After each concatenation, the SHA-256 function needs to be applied in order to obtain the input for the next iteration. This process follows the standard steps to compute the root node of a [Merkle Tree](https://en.wikipedia.org/wiki/Merkle_tree) data structure given the required nodes for the computation.
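+
+As a minimal sketch, assuming `leaf_hash` holds the hexadecimal leaf digest from the previous step and `proof` is the list taken from a receipt (placeholder values shown), the root can be recomputed as follows.
+
+```python
+from hashlib import sha256
+
+# Placeholder proof entries taken from a receipt
+proof = [
+    {"left": "b78230f9abb27b9b803a9cae4e4cec647a3be1000fc2241038867792d59d4bc1"},
+    {"left": "a2835d4505b8b6b25a0c06a9c8e96a5204533ceac1edf2b3e0e4dece78fbaf35"},
+]
+
+current = bytes.fromhex(leaf_hash)
+for node in proof:
+    if "left" in node:
+        # Append the running digest to the left node value
+        current = sha256(bytes.fromhex(node["left"]) + current).digest()
+    else:
+        # Prepend the running digest to the right node value
+        current = sha256(current + bytes.fromhex(node["right"])).digest()
+
+root_hash = current.hex()
+```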
+
+### Verify signature over root node
+
+The third step is to verify that the cryptographic signature produced over the root node hash is valid using the signing node certificate in the receipt. The verification process follows the standard steps for digital signature verification for messages signed using the [Elliptic Curve Digital Signature Algorithm (ECDSA)](https://wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm). More specifically, the steps are:
+
+1. Decode the base64 string `signature` into an array of bytes.
+2. Extract the ECDSA public key from the signing node certificate `cert`.
+3. Verify that the signature over the root of the Merkle Tree (computed using the instructions in the previous subsection) is authentic using the extracted public key from the previous step. This step effectively corresponds to a standard [digital signature](https://wikipedia.org/wiki/Digital_signature) verification process using ECDSA. There are many libraries in the most popular programming languages that allow verifying an ECDSA signature using a public key certificate over some data (for example, [ecdsa](https://pypi.org/project/ecdsa/) for Python).
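+
+Here's a minimal sketch of this check using the [Python cryptography library](https://cryptography.io/en/latest/), assuming `root_hash` is the hexadecimal root digest recomputed in the previous step and `receipt` is the parsed receipt dictionary; the signature is verified against the pre-hashed root.
+
+```python
+import base64
+from cryptography.x509 import load_pem_x509_certificate
+from cryptography.hazmat.primitives import hashes
+from cryptography.hazmat.primitives.asymmetric import ec, utils
+
+# Load the signing node certificate and decode the base64 signature from the receipt
+node_cert = load_pem_x509_certificate(receipt["cert"].encode())
+signature = base64.b64decode(receipt["signature"])
+
+# Verify the ECDSA signature over the recomputed Merkle Tree root;
+# raises InvalidSignature if the check fails
+node_cert.public_key().verify(
+    signature,
+    bytes.fromhex(root_hash),
+    ec.ECDSA(utils.Prehashed(hashes.SHA256())),
+)
+```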
+
+### Verify signing node certificate endorsement
+
+In addition to the above, it's also required to verify that the signing node certificate is endorsed (that is, signed) by the current ledger certificate. This step doesn't depend on the three previous steps and can be carried out independently.
+
+It's possible that the current service identity that issued the receipt is different from the one that endorsed the signing node (for example, due to a certificate renewal). In this case, it's required to verify the certificate chain of trust from the signing node certificate (that is, the `cert` field in the receipt) up to the trusted root Certificate Authority (CA) (that is, the current service identity certificate) through other previous service identities (that is, the `service_endorsements` list field in the receipt). The `service_endorsements` list is provided as an ordered list from the oldest to the latest service identity.
+
+Certificate endorsements need to be verified for the entire chain, following the exact same digital signature verification process outlined in the previous subsection. Popular open-source cryptographic libraries (for example, [OpenSSL](https://www.openssl.org/)) can typically be used to carry out a certificate endorsement step.
+
+### More resources
+
+For more information about the content of an Azure Confidential Ledger write transaction receipt and explanation of each field, see the [dedicated article](write-transaction-receipts.md#write-transaction-receipt-content). The [CCF documentation](https://microsoft.github.io/CCF) also contains more information about receipt verification and other related resources at the following links:
+
+* [Receipt Verification](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#receipt-verification)
+* [CCF Glossary](https://microsoft.github.io/CCF/main/overview/glossary.html)
+* [Merkle Tree](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html)
+* [Cryptography](https://microsoft.github.io/CCF/main/architecture/cryptography.html)
+* [Certificates](https://microsoft.github.io/CCF/main/operations/certificates.html)
+
+## Verify write transaction receipts
+
+### Setup and pre-requisites
+
+For reference purposes, we provide sample code in Python to fully verify Azure Confidential Ledger write transaction receipts following the steps outlined above.
+
+To run the full verification algorithm, the current service network certificate and a write transaction receipt from a running Confidential Ledger resource are required. Refer to [this article](write-transaction-receipts.md#get-write-transaction-receipts) for details on how to fetch a write transaction receipt and the service certificate from a Confidential Ledger instance.
+
+### Code walkthrough
+
+The following code can be used to initialize the required objects and run the receipt verification algorithm. A separate utility (`verify_receipt`) runs the full verification; it accepts as input the content of the `receipt` field in a `GET_RECEIPT` response (as a dictionary) and the service certificate (as a simple string). The function throws an exception if the receipt isn't valid or if any error is encountered during processing.
+
+It's assumed that both the receipt and the service certificate can be loaded from files. Make sure to update both the `service_certificate_file_name` and `receipt_file_name` constants with the respective file names of the service certificate and the receipt you would like to verify.
+
+```python
+import json
+
+# Constants
+service_certificate_file_name = "<your-service-certificate-file>"
+receipt_file_name = "<your-receipt-file>"
+
+# Use the receipt and the service identity to verify the receipt content
+with open(service_certificate_file_name, "r") as service_certificate_file, open(
+    receipt_file_name, "r"
+) as receipt_file:
+
+    # Load relevant files content
+    receipt = json.loads(receipt_file.read())["receipt"]
+    service_certificate_cert = service_certificate_file.read()
+
+    try:
+        verify_receipt(receipt, service_certificate_cert)
+        print("Receipt verification succeeded")
+
+    except Exception as e:
+        print("Receipt verification failed")
+
+        # Raise caught exception to look at the error stack
+        raise e
+```
+
+As the verification process requires some cryptographic and hashing primitives, the following libraries are used to facilitate the computation.
+
+* The [CCF Python library](https://microsoft.github.io/CCF/main/audit/python_library.html): the module provides a set of tools for receipt verification.
+* The [Python cryptography library](https://cryptography.io/en/latest/): a widely used library that includes various cryptographic algorithms and primitives.
+* The [hashlib module](https://docs.python.org/3/library/hashlib.html), part of the Python standard library: a module that provides a common interface for popular hashing algorithms.
+
+```python
+from ccf.receipt import verify, check_endorsements, root
+from cryptography.x509 import load_pem_x509_certificate, Certificate
+from hashlib import sha256
+from typing import Dict, List, Any
+```
+
+Inside the `verify_receipt` function, we check that the given receipt is valid and contains all the required fields.
+
+```python
+# Check that all the fields are present in the receipt
+assert "cert" in receipt
+assert "is_signature_transaction" in receipt
+assert "leaf_components" in receipt
+assert "claims_digest" in receipt["leaf_components"]
+assert "commit_evidence" in receipt["leaf_components"]
+assert "write_set_digest" in receipt["leaf_components"]
+assert "proof" in receipt
+assert "service_endorsements" in receipt
+assert "signature" in receipt
+```
+
+We initialize the variables that are going to be used in the rest of the program.
+
+```python
+# Set the variables
+node_cert_pem = receipt["cert"]
+is_signature_transaction = receipt["is_signature_transaction"]
+claims_digest_hex = receipt["leaf_components"]["claims_digest"]
+commit_evidence_str = receipt["leaf_components"]["commit_evidence"]
+write_set_digest_hex = receipt["leaf_components"]["write_set_digest"]
+proof_list = receipt["proof"]
+service_endorsements_certs_pem = receipt["service_endorsements"]
+root_node_signature = receipt["signature"]
+```
+
+In the following snippet, we check that the receipt isn't related to a signature transaction. Receipts for signature transactions aren't allowed for Confidential Ledger.
+
+```python
+# Check that this receipt is not for a signature transaction
+assert not is_signature_transaction
+```
+
+We can load the PEM certificates for the service identity, the signing node, and the endorsement certificates from previous service identities using the cryptography library.
+
+```python
+# Load service and node PEM certificates
+service_cert = load_pem_x509_certificate(service_cert_pem.encode())
+node_cert = load_pem_x509_certificate(node_cert_pem.encode())
+
+# Load service endorsements PEM certificates
+service_endorsements_certs = [
+    load_pem_x509_certificate(pem.encode())
+    for pem in service_endorsements_certs_pem
+]
+```
+
+The first step of the verification process is to compute the digest of the leaf node.
+
+```python
+# Compute leaf of the Merkle Tree corresponding to our transaction
+leaf_node_hex = compute_leaf_node(
+    claims_digest_hex, commit_evidence_str, write_set_digest_hex
+)
+```
+
+The `compute_leaf_node` function accepts as parameters the leaf components of the receipt (the `claims_digest`, the `commit_evidence`, and the `write_set_digest`) and returns the leaf node hash in hexadecimal form.
+
+As detailed above, we compute the digest of `commit_evidence` (using the SHA256 `hashlib` function). Then, we convert both `write_set_digest` and `claims_digest` into arrays of bytes. Finally, we concatenate the three arrays, and we digest the result using the SHA256 function.
+
+```python
+def compute_leaf_node(
+    claims_digest_hex: str, commit_evidence_str: str, write_set_digest_hex: str
+) -> str:
+    """Function to compute the leaf node associated to a transaction
+    given its claims digest, commit evidence, and write set digest."""
+
+    # Digest commit evidence string
+    commit_evidence_digest = sha256(commit_evidence_str.encode()).digest()
+
+    # Convert write set digest to bytes
+    write_set_digest = bytes.fromhex(write_set_digest_hex)
+
+    # Convert claims digest to bytes
+    claims_digest = bytes.fromhex(claims_digest_hex)
+
+    # Create leaf node by hashing the concatenation of its three components
+    # as bytes objects in the following order:
+    # 1. write_set_digest
+    # 2. commit_evidence_digest
+    # 3. claims_digest
+    leaf_node_digest = sha256(
+        write_set_digest + commit_evidence_digest + claims_digest
+    ).digest()
+
+    # Convert the result into a string of hexadecimal digits
+    return leaf_node_digest.hex()
+```
+
+After computing the leaf, we can compute the root of the Merkle tree.
+
+```python
+# Compute root of the Merkle Tree
+root_node = root(leaf_node_hex, proof_list)
+```
+
+We use the function `root` provided as part of the CCF Python library. The function successively concatenates the result of the previous iteration with a new element from `proof`, digests the concatenation, and then repeats the step for every element in `proof` with the previously computed digest. The concatenation needs to respect the order of the nodes in the Merkle Tree to make sure the root is recomputed correctly.
+
+```python
+def root(leaf: str, proof: List[dict]):
+    """
+    Recompute root of Merkle tree from a leaf and a proof of the form:
+    [{"left": digest}, {"right": digest}, ...]
+    """
+
+    current = bytes.fromhex(leaf)
+
+    for n in proof:
+        if "left" in n:
+            current = sha256(bytes.fromhex(n["left"]) + current).digest()
+        else:
+            current = sha256(current + bytes.fromhex(n["right"])).digest()
+    return current.hex()
+```
+
+After computing the root node hash, we can verify the signature contained in the receipt over the root to validate that the signature is correct.
+
+```python
+# Verify signature of the signing node over the root of the tree
+verify(root_node, root_node_signature, node_cert)
+```
+
+Similarly, the CCF library provides a function `verify` to do this verification. We use the ECDSA public key of the signing node certificate to verify the signature over the root of the tree.
+
+```python
+def verify(root: str, signature: str, cert: Certificate):
+    """
+    Verify signature over root of Merkle Tree
+    """
+
+    sig = base64.b64decode(signature)
+    pk = cert.public_key()
+    assert isinstance(pk, ec.EllipticCurvePublicKey)
+    pk.verify(
+        sig,
+        bytes.fromhex(root),
+        ec.ECDSA(utils.Prehashed(hashes.SHA256())),
+    )
+```
+
+The last step of receipt verification is validating the certificate that was used to sign the root of the Merkle tree.
+
+```python
+# Verify node certificate is endorsed by the service certificates through endorsements
+check_endorsements(node_cert, service_cert, service_endorsements_certs)
+```
+
+Likewise, we can use the CCF utility `check_endorsements` to validate that the certificate of the signing node is endorsed by the service identity. The certificate chain could be composed of previous service certificates, so we should validate that the endorsement is applied transitively if `service_endorsements` isn't an empty list.
+
+```python
+def check_endorsement(endorsee: Certificate, endorser: Certificate):
+    """
+    Check endorser has endorsed endorsee
+    """
+
+    digest_algo = endorsee.signature_hash_algorithm
+    assert digest_algo
+    digester = hashes.Hash(digest_algo)
+    digester.update(endorsee.tbs_certificate_bytes)
+    digest = digester.finalize()
+    endorser_pk = endorser.public_key()
+    assert isinstance(endorser_pk, ec.EllipticCurvePublicKey)
+    endorser_pk.verify(
+        endorsee.signature, digest, ec.ECDSA(utils.Prehashed(digest_algo))
+    )
+
+def check_endorsements(
+    node_cert: Certificate, service_cert: Certificate, endorsements: List[Certificate]
+):
+    """
+    Check a node certificate is endorsed by a service certificate, transitively through a list of endorsements.
+    """
+
+    cert_i = node_cert
+    for endorsement in endorsements:
+        check_endorsement(cert_i, endorsement)
+        cert_i = endorsement
+    check_endorsement(cert_i, service_cert)
+```
+
+As an alternative, we could also validate the certificate with the OpenSSL library, using a similar method.
+
+```python
+from OpenSSL.crypto import (
+    X509,
+    X509Store,
+    X509StoreContext,
+)
+
+def verify_openssl_certificate(
+    node_cert: Certificate,
+    service_cert: Certificate,
+    service_endorsements_certs: List[Certificate],
+) -> None:
+    """Verify that the given node certificate is a valid OpenSSL certificate through
+    the service certificate and a list of endorsements certificates."""
+
+    store = X509Store()
+
+    # pyopenssl does not support X509_V_FLAG_NO_CHECK_TIME. For recovery of expired
+    # services and historical receipts, we want to ignore the validity time. 0x200000
+    # is the bitmask for this option in more recent versions of OpenSSL.
+    X509_V_FLAG_NO_CHECK_TIME = 0x200000
+    store.set_flags(X509_V_FLAG_NO_CHECK_TIME)
+
+    # Add service certificate to the X.509 store
+    store.add_cert(X509.from_cryptography(service_cert))
+
+    # Prepare X.509 endorsement certificates
+    certs_chain = [X509.from_cryptography(cert) for cert in service_endorsements_certs]
+
+    # Prepare X.509 node certificate
+    node_cert_pem = X509.from_cryptography(node_cert)
+
+    # Create X.509 store context and verify its certificate
+    ctx = X509StoreContext(store, node_cert_pem, certs_chain)
+    ctx.verify_certificate()
+```
+
+### Sample code
+
+The full sample code used in the code walkthrough can be found below.
+
+#### Main program
+
+```python
+import json
+
+# Use the receipt and the service identity to verify the receipt content
+with open("network_certificate.pem", "r") as service_certificate_file, open(
+    "receipt.json", "r"
+) as receipt_file:
+
+    # Load relevant files content
+    receipt = json.loads(receipt_file.read())["receipt"]
+    service_certificate_cert = service_certificate_file.read()
+
+    try:
+        verify_receipt(receipt, service_certificate_cert)
+        print("Receipt verification succeeded")
+
+    except Exception as e:
+        print("Receipt verification failed")
+
+        # Raise caught exception to look at the error stack
+        raise e
+```
+
+#### Receipt verification
+
+```python
+from cryptography.x509 import load_pem_x509_certificate, Certificate
+from hashlib import sha256
+from typing import Dict, List, Any
+
+from OpenSSL.crypto import (
+    X509,
+    X509Store,
+    X509StoreContext,
+)
+
+from ccf.receipt import root, verify, check_endorsements
+
+def verify_receipt(receipt: Dict[str, Any], service_cert_pem: str) -> None:
+    """Function to verify that a given write transaction receipt is valid based
+    on its content and the service certificate.
+    Throws an exception if the verification fails."""
+
+    # Check that all the fields are present in the receipt
+    assert "cert" in receipt
+    assert "is_signature_transaction" in receipt
+    assert "leaf_components" in receipt
+    assert "claims_digest" in receipt["leaf_components"]
+    assert "commit_evidence" in receipt["leaf_components"]
+    assert "write_set_digest" in receipt["leaf_components"]
+    assert "proof" in receipt
+    assert "service_endorsements" in receipt
+    assert "signature" in receipt
+
+    # Set the variables
+    node_cert_pem = receipt["cert"]
+    is_signature_transaction = receipt["is_signature_transaction"]
+    claims_digest_hex = receipt["leaf_components"]["claims_digest"]
+    commit_evidence_str = receipt["leaf_components"]["commit_evidence"]
+
+    write_set_digest_hex = receipt["leaf_components"]["write_set_digest"]
+    proof_list = receipt["proof"]
+    service_endorsements_certs_pem = receipt["service_endorsements"]
+    root_node_signature = receipt["signature"]
+
+    # Check that this receipt is not for a signature transaction
+    assert not is_signature_transaction
+
+    # Load service and node PEM certificates
+    service_cert = load_pem_x509_certificate(service_cert_pem.encode())
+    node_cert = load_pem_x509_certificate(node_cert_pem.encode())
+
+    # Load service endorsements PEM certificates
+    service_endorsements_certs = [
+        load_pem_x509_certificate(pem.encode())
+        for pem in service_endorsements_certs_pem
+    ]
+
+    # Compute leaf of the Merkle Tree
+    leaf_node_hex = compute_leaf_node(
+        claims_digest_hex, commit_evidence_str, write_set_digest_hex
+    )
+
+    # Compute root of the Merkle Tree
+    root_node = root(leaf_node_hex, proof_list)
+
+    # Verify signature of the signing node over the root of the tree
+    verify(root_node, root_node_signature, node_cert)
+
+    # Verify node certificate is endorsed by the service certificates through endorsements
+    check_endorsements(node_cert, service_cert, service_endorsements_certs)
+
+    # Alternative: Verify node certificate is endorsed by the service certificates through endorsements
+    verify_openssl_certificate(node_cert, service_cert, service_endorsements_certs)
+
+def compute_leaf_node(
+    claims_digest_hex: str, commit_evidence_str: str, write_set_digest_hex: str
+) -> str:
+    """Function to compute the leaf node associated to a transaction
+    given its claims digest, commit evidence, and write set digest."""
+
+    # Digest commit evidence string
+    commit_evidence_digest = sha256(commit_evidence_str.encode()).digest()
+
+    # Convert write set digest to bytes
+    write_set_digest = bytes.fromhex(write_set_digest_hex)
+
+    # Convert claims digest to bytes
+    claims_digest = bytes.fromhex(claims_digest_hex)
+
+    # Create leaf node by hashing the concatenation of its three components
+    # as bytes objects in the following order:
+    # 1. write_set_digest
+    # 2. commit_evidence_digest
+    # 3. claims_digest
+    leaf_node_digest = sha256(
+        write_set_digest + commit_evidence_digest + claims_digest
+    ).digest()
+
+    # Convert the result into a string of hexadecimal digits
+    return leaf_node_digest.hex()
+
+def verify_openssl_certificate(
+    node_cert: Certificate,
+    service_cert: Certificate,
+    service_endorsements_certs: List[Certificate],
+) -> None:
+    """Verify that the given node certificate is a valid OpenSSL certificate through
+    the service certificate and a list of endorsements certificates."""
+
+    store = X509Store()
+
+    # pyopenssl does not support X509_V_FLAG_NO_CHECK_TIME. For recovery of expired
+    # services and historical receipts, we want to ignore the validity time. 0x200000
+    # is the bitmask for this option in more recent versions of OpenSSL.
+    X509_V_FLAG_NO_CHECK_TIME = 0x200000
+    store.set_flags(X509_V_FLAG_NO_CHECK_TIME)
+
+    # Add service certificate to the X.509 store
+    store.add_cert(X509.from_cryptography(service_cert))
+
+    # Prepare X.509 endorsement certificates
+    certs_chain = [X509.from_cryptography(cert) for cert in service_endorsements_certs]
+
+    # Prepare X.509 node certificate
+    node_cert_pem = X509.from_cryptography(node_cert)
+
+    # Create X.509 store context and verify its certificate
+    ctx = X509StoreContext(store, node_cert_pem, certs_chain)
+    ctx.verify_certificate()
+```
+
+## Next steps
+
+* [Azure Confidential Ledger write transaction receipts](write-transaction-receipts.md)
+* [Overview of Microsoft Azure confidential ledger](overview.md)
+* [Azure confidential ledger architecture](architecture.md)
confidential-ledger Write Transaction Receipts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/write-transaction-receipts.md
+
+ Title: Azure Confidential Ledger write transaction receipts
+description: Azure Confidential Ledger write transaction receipts
++ Last updated : 11/07/2022+++++
+# Azure Confidential Ledger write transaction receipts
+
+To enforce transaction integrity guarantees, an Azure Confidential Ledger uses a [Merkle tree](https://en.wikipedia.org/wiki/Merkle_tree) data structure to record the hash of all transaction blocks that are appended to the immutable ledger. After a write transaction is committed, Azure Confidential Ledger users can get a cryptographic Merkle proof, or receipt, over the entry produced in a Confidential Ledger to verify that the write operation was correctly saved. A write transaction receipt is proof that the system has committed the corresponding transaction and can be used to verify that the entry has been effectively appended to the ledger.
+
+More details about how a Merkle Tree is used in a Confidential Ledger can be found in the [CCF documentation](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html).
+
+## Get write transaction receipts
+
+### Setup and pre-requisites
+
+Azure Confidential Ledger users can get a receipt for a specific transaction by using the [Azure Confidential Ledger client library](quickstart-python.md#use-the-data-plane-client-library). The following example shows how to get a write receipt using the [client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/confidentialledger/azure-confidentialledger), but the steps are the same with any other supported SDK for Azure Confidential Ledger.
+
+We assume that a Confidential Ledger resource has already been created using the Azure Confidential Ledger Management library. If you don't have an existing ledger resource yet, create one using the [following instructions](quickstart-python.md#use-the-control-plane-client-library).
+
+### Code walkthrough
+
+We start by setting up the imports for our Python program.
+
+```python
+import json
+
+# Import the Azure authentication library
+from azure.identity import DefaultAzureCredential
+
+# Import the Confidential Ledger Data Plane SDK
+from azure.confidentialledger import ConfidentialLedgerClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
+```
+
+The following are the constant values used to set up the Azure Confidential Ledger client. Make sure to update the `ledger_name` constant with the unique name of your Confidential Ledger resource.
+
+```python
+# Constants for our program
+ledger_name = "<your-unique-ledger-name>"
+identity_url = "https://identity.confidential-ledger.core.azure.com"
+ledger_url = "https://" + ledger_name + ".confidential-ledger.azure.com"
+```
+
+We authenticate using the [DefaultAzureCredential class](/python/api/azure-identity/azure.identity.defaultazurecredential).
+
+```python
+# Setup authentication
+credential = DefaultAzureCredential()
+```
+
+Then, we get and save the Confidential Ledger service certificate using the Certificate client from the [Confidential Ledger Identity URL](https://identity.confidential-ledger.core.azure.com/ledgerIdentity). The service certificate is a network identity public key certificate used as root of trust for [TLS](https://microsoft.github.io/CCF/main/overview/glossary.html#term-TLS) server authentication. In other words, it's used as the Certificate Authority (CA) for establishing a TLS connection with any of the nodes in the CCF network.
+
+```python
+# Create a Certificate client and use it to
+# get the service identity for our ledger
+identity_client = ConfidentialLedgerCertificateClient(identity_url)
+network_identity = identity_client.get_ledger_identity(
+    ledger_id=ledger_name
+)
+
+# Save network certificate into a file for later use
+ledger_tls_cert_file_name = "network_certificate.pem"
+
+with open(ledger_tls_cert_file_name, "w") as cert_file:
+    cert_file.write(network_identity["ledgerTlsCertificate"])
+```
+
+Next, we can use our credentials, the fetched network certificate, and our unique ledger URL to create a Confidential Ledger client.
+
+```python
+# Create Confidential Ledger client
+ledger_client = ConfidentialLedgerClient(
+    endpoint=ledger_url,
+    credential=credential,
+    ledger_certificate_path=ledger_tls_cert_file_name
+)
+```
+
+Using the Confidential Ledger client, we can run any supported operation on an Azure Confidential Ledger instance. For example, we can append a new entry to the ledger and wait for the corresponding write transaction to be committed.
+
+ ```python
+# The method begin_create_ledger_entry returns a poller that
+# we can use to wait for the transaction to be committed
+create_entry_poller = ledger_client.begin_create_ledger_entry(
+    {"contents": "Hello World!"}
+)
+
+create_entry_result = create_entry_poller.result()
+```
+
+After the transaction is committed, we can use the client to get a receipt over the entry appended to the ledger in the previous step using the respective [transaction ID](https://microsoft.github.io/CCF/main/overview/glossary.html#term-Transaction-ID).
+
+```python
+# The method begin_get_receipt returns a poller that
+# we can use to wait for the receipt to be available by the system
+get_receipt_poller = ledger_client.begin_get_receipt(
+    create_entry_result["transactionId"]
+)
+
+get_receipt_result = get_receipt_poller.result()
+```
+
+### Sample code
+
+The full sample code used in the code walkthrough can be found below.
+
+```python
+import json
+
+# Import the Azure authentication library
+from azure.identity import DefaultAzureCredential
+
+# Import the Confidential Ledger Data Plane SDK
+from azure.confidentialledger import ConfidentialLedgerClient
+from azure.confidentialledger.certificate import ConfidentialLedgerCertificateClient
+
+from receipt_verification import verify_receipt
+
+# Constants
+ledger_name = "<your-unique-ledger-name>"
+identity_url = "https://identity.confidential-ledger.core.azure.com"
+ledger_url = "https://" + ledger_name + ".confidential-ledger.azure.com"
+
+# Setup authentication
+credential = DefaultAzureCredential()
+
+# Create Ledger Certificate client and use it to
+# retrieve the service identity for our ledger
+identity_client = ConfidentialLedgerCertificateClient(identity_url)
+network_identity = identity_client.get_ledger_identity(ledger_id=ledger_name)
+
+# Save network certificate into a file for later use
+ledger_tls_cert_file_name = "network_certificate.pem"
+
+with open(ledger_tls_cert_file_name, "w") as cert_file:
+    cert_file.write(network_identity["ledgerTlsCertificate"])
+
+# Create Confidential Ledger client
+ledger_client = ConfidentialLedgerClient(
+    endpoint=ledger_url,
+    credential=credential,
+    ledger_certificate_path=ledger_tls_cert_file_name,
+)
+
+# The method begin_create_ledger_entry returns a poller that
+# we can use to wait for the transaction to be committed
+create_entry_poller = ledger_client.begin_create_ledger_entry(
+    {"contents": "Hello World!"}
+)
+create_entry_result = create_entry_poller.result()
+
+# The method begin_get_receipt returns a poller that
+# we can use to wait for the receipt to be available by the system
+get_receipt_poller = ledger_client.begin_get_receipt(
+    create_entry_result["transactionId"]
+)
+get_receipt_result = get_receipt_poller.result()
+
+# Save fetched receipt into a file
+with open("receipt.json", "w") as receipt_file:
+    receipt_file.write(json.dumps(get_receipt_result, sort_keys=True, indent=2))
+```
+
+## Write transaction receipt content
+
+Here's an example of a JSON response payload returned by an Azure Confidential Ledger instance when calling the `GET_RECEIPT` endpoint.
+
+```json
+{
+ "receipt": {
+ "cert": "--BEGIN CERTIFICATE--\nMIIB0jCCAXmgAwIBAgIQPxdrEtGY+SggPHETin1XNzAKBggqhkjOPQQDAjAWMRQw\nEgYDVQQDDAtDQ0YgTmV0d29yazAeFw0yMjA3MjAxMzUzMDFaFw0yMjEwMTgxMzUz\nMDBaMBMxETAPBgNVBAMMCENDRiBOb2RlMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcD\nQgAEWy81dFeEZ79gVJnfHiPKjZ54fZvDcFlntFwJN8Wf6RZa3PaV5EzwAKHNfojj\noXT4xNkJjURBN7q+1iE/vvc+rqOBqzCBqDAJBgNVHRMEAjAAMB0GA1UdDgQWBBQS\nwl7Hx2VkkznJNkVZUbZy+TOR/jAfBgNVHSMEGDAWgBTrz538MGI/SdV8k8EiJl5z\nfl3mBTBbBgNVHREEVDBShwQK8EBegjNhcGljY2lvbmUtdGVzdC1sZWRnZXIuY29u\nZmlkZW50aWFsLWxlZGdlci5henVyZS5jb22CFWFwaWNjaW9uZS10ZXN0LWxlZGdl\ncjAKBggqhkjOPQQDAgNHADBEAiAsGawDcYcH/KzF2iK9Ldx/yABUoYSNti2Cyxum\n9RRNKAIgPB/XGh/FQS3nmZLExgBVXkDYdghQu/NCY/hHjQ9AvWg=\n--END CERTIFICATE--\n",
+ "is_signature_transaction": false,
+ "leaf_components": {
+ "claims_digest": "0000000000000000000000000000000000000000000000000000000000000000",
+ "commit_evidence": "ce:2.40:f36ffe2930ec95d50ebaaec26e2bec56835abd051019eb270f538ab0744712a4",
+ "write_set_digest": "8452624d10bdd79c408c0f062a1917aa96711ea062c508c745469636ae1460be"
+ },
+ "node_id": "70e995887e3e6b73c80bc44f9fbb6e66b9f644acaddbc9c0483cfc17d77af24f",
+ "proof": [
+ {
+ "left": "b78230f9abb27b9b803a9cae4e4cec647a3be1000fc2241038867792d59d4bc1"
+ },
+ {
+ "left": "a2835d4505b8b6b25a0c06a9c8e96a5204533ceac1edf2b3e0e4dece78fbaf35"
+ }
+ ],
+ "service_endorsements": [],
+ "signature": "MEUCIQCjtMqk7wOtUTgqlHlCfWRqAco+38roVdUcRv7a1G6pBwIgWKpCSdBmhzgEdwguUW/Cj/Z5bAOA8YHSoLe8KzrlqK8="
+ },
+ "state": "Ready",
+ "transactionId": "2.40"
+}
+```
+
+The JSON response contains the following fields at the root level.
+
+* **receipt**: It contains the values that can be used to verify the validity of the receipt for the corresponding write transaction.
+
+* **state**: The status of the returned JSON response. The following are the possible values allowed:
+
+ * `Ready`: The receipt returned in the response is available
+ * `Loading`: The receipt isn't yet available to be retrieved and the request will have to be retried
+
+* **transactionId**: The transaction ID associated with the write transaction receipt.
+
+The `receipt` field contains the following fields.
+
+* **cert**: String with the [PEM](https://en.wikipedia.org/wiki/Privacy-Enhanced_Mail) public key certificate of the CCF node that signed the write transaction. The certificate of the signing node should always be endorsed by the service identity certificate. See also more details about how transactions get regularly signed and how the signature transactions are appended to the ledger in CCF at the following [link](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html).
+
+* **is_signature_transaction**: Boolean value indicating whether the receipt is related to a signature transaction or not. Receipts for signature transactions can't be retrieved for Confidential Ledgers.
+
+* **node_id**: Hexadecimal string representing the [SHA-256](https://en.wikipedia.org/wiki/SHA-2) hash digest of the public key of the signing CCF node.
+
+* **leaf_components**: The components of the leaf node hash in the [Merkle Tree](https://en.wikipedia.org/wiki/Merkle_tree) that are associated to the specified transaction. A Merkle Tree is a tree data structure that records the hash of every transaction and guarantees the integrity of the ledger. For more information on how a Merkle Tree is used in CCF, see the related [CCF documentation](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html).
+
+* **proof**: List of key-value pairs representing the Merkle Tree node hashes that, when combined with the leaf node hash corresponding to the given transaction, allow the recomputation of the root hash of the tree. Thanks to the properties of a Merkle Tree, it's possible to recompute the root hash of the tree using only a subset of nodes. The elements in this list are in the form of key-value pairs: keys indicate the relative position with respect to the parent node in the tree at a certain level; values are the SHA-256 hash digests of the given node, as hexadecimal strings.
+
+* **service_endorsements**: List of PEM-encoded certificates strings representing previous service identities certificates. It's possible that the service identity that endorsed the signing node isn't the same as the one that issued the receipt. For example, the service certificate is renewed after a disaster recovery of a Confidential Ledger. The list of past service certificates allows auditors to build the chain of trust from the CCF signing node to the current service certificate.
+
+* **signature**: Base64 string representing the signature of the root of the Merkle Tree at the given transaction, by the signing CCF node.
+
+The `leaf_components` field contains the following fields.
+
+* **claims_digest**: Hexadecimal string representing the SHA-256 hash digest of the [application claim](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#application-claims) attached by the Confidential Ledger application at the time the transaction was executed. Application claims are currently unsupported as the Confidential Ledger application doesn't attach any claim when executing a write transaction.
+
+* **commit_evidence**: A unique string produced per transaction, derived from the transaction ID and the ledger secrets. For more information about the commit evidence, see the related [CCF documentation](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#commit-evidence).
+
+* **write_set_digest**: Hexadecimal string representing the SHA-256 hash digest of the [Key-Value store](https://microsoft.github.io/CCF/main/build_apps/kv/index.html), which contains all the keys and values written at the time the transaction was completed. For more information about the write set, see the related [CCF documentation](https://microsoft.github.io/CCF/main/overview/glossary.html#term-Write-Set).
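+
+As a quick illustration, the following minimal sketch (assuming `get_receipt_result` is the dictionary returned by the `begin_get_receipt` poller shown earlier) reads a few of these fields from the response.
+
+```python
+# get_receipt_result is assumed to come from ledger_client.begin_get_receipt(...).result()
+assert get_receipt_result["state"] == "Ready"
+
+receipt = get_receipt_result["receipt"]
+print("Transaction ID:", get_receipt_result["transactionId"])
+print("Signing node:", receipt["node_id"])
+print("Leaf components:", receipt["leaf_components"])
+print("Number of proof nodes:", len(receipt["proof"]))
+```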
+
+### More resources
+
+For more information about write transaction receipts and how CCF ensures the integrity of each transaction, see the following links:
+
+* [Write Receipts](https://microsoft.github.io/CCF/main/use_apps/verify_tx.html#write-receipts)
+* [Receipts](https://microsoft.github.io/CCF/main/audit/receipts.html)
+* [CCF Glossary](https://microsoft.github.io/CCF/main/overview/glossary.html)
+* [Merkle Tree](https://microsoft.github.io/CCF/main/architecture/merkle_tree.html)
+* [Cryptography](https://microsoft.github.io/CCF/main/architecture/cryptography.html)
+* [Certificates](https://microsoft.github.io/CCF/main/operations/certificates.html)
+
+## Next steps
+
+* [Verify Azure Confidential Ledger write transaction receipts](verify-write-transaction-receipts.md)
+* [Overview of Microsoft Azure confidential ledger](overview.md)
+* [Azure confidential ledger architecture](architecture.md)
container-registry Container Registry Firewall Access Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md
To pull or push images or other artifacts to an Azure container registry, a clie
* **Registry REST API endpoint** - Authentication and registry management operations are handled through the registry's public REST API endpoint. This endpoint is the login server name of the registry. Example: `myregistry.azurecr.io`
+* **Registry REST API endpoint for certificates** - Azure container registry uses a wildcard SSL certificate for all subdomains. When connecting to the Azure container registry using SSL, the client must be able to download the certificate for the TLS handshake. In such cases, `azurecr.io` must also be accessible.
+ * **Storage (data) endpoint** - Azure [allocates blob storage](container-registry-storage.md) in Azure Storage accounts on behalf of each registry to manage the data for container images and other artifacts. When a client accesses image layers in an Azure container registry, it makes requests using a storage account endpoint provided by the registry. If your registry is [geo-replicated](container-registry-geo-replication.md), a client might need to interact with the data endpoint in a specific region or in multiple replicated regions.
If you need to access Microsoft Container Registry (MCR) from behind a firewall,
<!-- LINKS - Internal --> [az-acr-update]: /cli/azure/acr#az_acr_update
-[az-acr-show-endpoints]: /cli/azure/acr#az_acr_show_endpoints
+[az-acr-show-endpoints]: /cli/azure/acr#az_acr_show_endpoints
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
Your account is free for 30 days. After expiration, a new sandbox account can be
-### Move data to your new account
+### Move data to new account
-1. Navigate back to the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide. Select **Next** to move on to the third step and move your data.
-
- :::image type="content" source="media/try-free/account-creation-options.png" lightbox="media/try-free/account-creation-options.png" alt-text="Screenshot of the sign-in/sign-up experience to upgrade your current account.":::
+If you desire, you can migrate your existing data from the free account to the newly created account.
-## Migrate your data
+#### [NoSQL](#tab/nosql)
-### [NoSQL / MongoDB / Cassandra / Gremlin / Table](#tab/nosql+mongodb+cassandra+gremlin+table)
+1. Navigate back to the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide. Select **Next** to move on to the third step and move your data.
-> [!NOTE]
-> While this example uses API for NoSQL, the steps are similar for the APIs for MongoDB, Cassandra, Gremlin, or Table.
+ :::image type="content" source="media/try-free/account-creation-options.png" lightbox="media/try-free/account-creation-options.png" alt-text="Screenshot of the sign-in/sign-up experience to upgrade your current account.":::
1. Locate your **Primary Connection string** for the Azure Cosmos DB account you created for your data. This information can be found within the **Keys** page of your new account.
Your account is free for 30 days. After expiration, a new sandbox account can be
1. Select **Next** to move the data to your account. Provide your email address to be notified by email once the migration has been completed.
-### [PostgreSQL](#tab/postgresql)
+#### [PostgreSQL](#tab/postgresql)
+
+1. Navigate back to the **Upgrade** page from the [Start upgrade](#start-upgrade) section of this guide. Select **Next** to move on to the third step and move your data.
+
+ :::image type="content" source="media/try-free/account-creation-options.png" lightbox="media/try-free/account-creation-options.png" alt-text="Screenshot of the sign-in/sign-up experience to upgrade your current account.":::
1. Locate your **PostgreSQL connection URL** of the Azure Cosmos DB account you created for your data. This information can be found within the **Connection String** page of your new account.
Your account is free for 30 days. After expiration, a new sandbox account can be
1. Select **Next** to move the data to your account.
+#### [MongoDB / Cassandra / Gremlin / Table](#tab/mongodb+cassandra+gremlin+table)
+
+> [!IMPORTANT]
+> Data migration is not available for the APIs for MongoDB, Cassandra, Gremlin, or Table.
+ ## Delete your account
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
na Previously updated : 10/12/2022 Last updated : 11/14/2022 # What is Azure DDoS Protection?
Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP
## SKU
-Azure DDoS Protection is offered in two available SKUs, DDoS IP Protection and DDoS Network Protection. For more information about the SKUs, see [SKU comparison](ddos-protection-sku-comparison.md).
+Azure DDoS Protection is offered in two available SKUs, DDoS IP Protection Preview and DDoS Network Protection. For more information about the SKUs, see [SKU comparison](ddos-protection-sku-comparison.md).
### Native platform integration
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-reference-architectures.md
The load balancer distributes incoming internet requests to the VM instances. Vi
DDoS Network Protection is enabled on the virtual network of the Azure (internet) load balancer that has the public IP associated with it.
-#### DDoS IP Protection virtual machine architecture
+#### DDoS IP Protection Preview virtual machine architecture
:::image type="content" source="./media/reference-architectures/ddos-ip-protection-virtual-machine.png" alt-text="Diagram of the DDoS IP Protection reference architecture for an application running on load-balanced virtual machines.":::
There are many ways to implement an N-tier architecture. The following diagrams
In this architecture diagram DDoS Network Protection is enabled on the virtual network. All public IPs in the virtual network get DDoS protection for Layer 3 and 4. For Layer 7 protection, deploy Application Gateway in the WAF SKU. For more information on this reference architecture, see [Windows N-tier application on Azure](/azure/architecture/reference-architectures/virtual-machines-windows/n-tier).
-#### DDoS IP Protection Windows N-tier architecture
+#### DDoS IP Protection Preview Windows N-tier architecture
:::image type="content" source="./media/reference-architectures/ddos-ip-protection-n-tier.png" alt-text="Diagram of the DDoS IP Protection reference architecture for an application running on Windows N-tier." lightbox="./media/reference-architectures/ddos-ip-protection-n-tier.png":::
For more information about this reference architecture, see [Highly available mu
In this architecture diagram DDoS Network Protection is enabled on the web app gateway virtual network.
-#### DDoS IP Protection with PaaS web application architecture
+#### DDoS IP Protection Preview with PaaS web application architecture
:::image type="content" source="./media/reference-architectures/ddos-ip-protection-paas-web-app.png" alt-text="Diagram of DDoS IP Protection reference architecture for a PaaS web application." lightbox="./media/reference-architectures/ddos-ip-protection-paas-web-app.png":::
DDoS Protection is designed for services that are deployed in a virtual network.
In this architecture diagram Azure DDoS Network Protection is enabled on the hub virtual network.
-#### DDoS IP Protection hub-and-spoke network
+#### DDoS IP Protection Preview hub-and-spoke network
:::image type="content" source="./media/reference-architectures/ddos-ip-protection-azure-firewall-bastion.png" alt-text="Diagram showing DDoS IP Protection Hub-and-spoke architecture with firewall, bastion, and DDoS Protection." lightbox="./media/reference-architectures/ddos-ip-protection-azure-firewall-bastion.png":::
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 10/19/2022 Last updated : 11/14/2022
The sections in this article discuss the resources and settings of Azure DDoS Pr
Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. For more information about enabling DDoS Network Protection, see [Quickstart: Create and configure Azure DDoS Network Protection using the Azure portal](manage-ddos-protection.md).
-## DDoS IP Protection
+## DDoS IP Protection Preview
DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added
ddos-protection Manage Ddos Protection Powershell Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/manage-ddos-protection-powershell-ip.md
Title: 'Quickstart: Create and configure Azure DDoS IP Protection using PowerShell'
-description: Learn how to create Azure DDoS IP Protection using PowerShell
+ Title: 'Quickstart: Create and configure Azure DDoS IP Protection Preview using PowerShell'
+description: Learn how to create Azure DDoS IP Protection Preview using PowerShell
Previously updated : 10/12/2022 Last updated : 11/14/2022
-# Quickstart: Create and configure Azure DDoS IP Protection using Azure PowerShell
+# Quickstart: Create and configure Azure DDoS IP Protection Preview using Azure PowerShell
Get started with Azure DDoS IP Protection by using Azure PowerShell. In this quickstart, you'll enable DDoS IP Protection and link it to a public IP address.
In this quickstart, you'll enable DDoS IP protection and link it to a public IP
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-## Enable DDoS IP Protection for a public IP address
+## Enable DDoS IP Protection Preview for a public IP address
You can enable DDoS IP Protection when creating a public IP address. In this example, we'll name our public IP address _myStandardPublicIP_:
New-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGrou
> [!NOTE] > DDoS IP Protection is enabled only on Public IP Standard SKU.
-### Enable DDoS IP Protection for an existing public IP address
+### Enable DDoS IP Protection Preview for an existing public IP address
You can associate an existing public IP address:
$protectionMode = $publicIp.DdosSettings.ProtectionMode
$protectionMode ```
-## Disable DDoS IP Protection for an existing public IP address
+## Disable DDoS IP Protection Preview for an existing public IP address
```azurepowershell-interactive $publicIp = Get-AzPublicIpAddress -Name myStandardPublicIP -ResourceGroupName MyResourceGroup
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
Title: Microsoft Defender for Cloud's asset inventory description: Learn about Microsoft Defender for Cloud's asset management experience providing full visibility over all your Defender for Cloud monitored resources. Previously updated : 11/13/2022 Last updated : 11/14/2022
The asset management possibilities for this tool are substantial and continue to
|Release state:|General availability (GA)| |Pricing:|Free<br> Some features of the inventory page, such as the [software inventory](#access-a-software-inventory) require paid solutions to be in-place| |Required roles and permissions:|All users|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) <br> <br> Software inventory is not currently supported on the National cloud.|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) <br> <br> Software inventory is not currently supported in national clouds.|
## What are the key features of asset inventory?
Using the [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), asset
## Access a software inventory
-Software inventory can be enabled either using the agentless scanner or with the agent based Microsoft Defender for Endpoint integration.
+To access the software inventory, you'll need one of the following **paid** solutions:
+
+- [Agentless machine scanning](concept-agentless-data-collection.md) from [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md).
+- [Agentless machine scanning](concept-agentless-data-collection.md) from [Defender for Servers P2](defender-for-servers-introduction.md#defender-for-servers-plans).
+- [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) from [Defender for Servers](defender-for-servers-introduction.md).
If you've already enabled the integration with Microsoft Defender for Endpoint and enabled Microsoft Defender for Servers, you'll have access to the software inventory.
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Agentless scanning for VMs provides vulnerability assessment and software invent
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) |
-| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted ΓÇô platform managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted |
+| Encryption: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted – Azure Disk Encryption - platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Azure Disk Encryption - customer-managed keys (CMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Disk Encryption Set - platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – Disk Encryption Set - customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted |
## How agentless scanning for VMs works
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
You can then review the progress of the tasks by subscription, recommendation, o
|Aspect|Details| |-|:-| |Release state:|Preview.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
-|Pricing:|Free while in preview and will be a paid service after preview|
+| Prerequisite: | Requires the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) to be enabled.|
|Required roles and permissions:|Azure - **Contributor**, **Security Admin**, or **Owner** on the subscription<br>AWS, GCP – **Contributor**, **Security Admin**, or **Owner** on the connector| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP accounts|
+> [!NOTE]
+> Starting January 1, 2023, governance capabilities will require Defender Cloud Security Posture Management (CSPM) plan enablement.
+> For customers who decide to keep the Defender CSPM plan off on scopes with governance content:
+> - Existing assignments remain as is and continue to work with no customization option or ability to create new ones.
+> - Existing rules will remain as is but won't trigger the creation of new assignments.
+ ### Defining governance rules to automatically set the owner and due date of recommendations Governance rules can identify resources that require remediation according to specific recommendations or severities, and the rule assigns an owner and due date to make sure the recommendations are handled. Many governance rules can apply to the same recommendations, so the rule with lower priority value is the one that assigns the owner and due date.
By default, email notifications are sent to the resource owners weekly to provid
To define a governance rule that assigns an owner and due date:
-1. In the **Environment settings**, select the Azure subscription, AWS account, or Google project that you want to define the rule for.
-1. In **Governance rules (preview)**, select **Add rule**.
+1. Navigate to **Environment settings** > **Governance rules**.
+
+1. Select **Create governance rule**.
+ 1. Enter a name for the rule.
-1. Set a priority for the rule. You can see the priority for the existing rules in the list of governance rules.
+1. Select a scope to apply the rule to and use exclusions if needed. Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope.
+
+1. Priority is assigned automatically after scope selection. You can override this field if needed.
+ 1. Select the recommendations that the rule applies to, either: - **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned.
- - **By name** - Select the specific recommendations that the rule applies to.
+ - **By specific recommendations** - Select the specific recommendations that the rule applies to.
1. Set the owner to assign to the recommendations either: - **By resource tag** - Enter the resource tag on your resources that defines the resource owner. - **By email address** - Enter the email address of the owner to assign to the recommendations. 1. Set the **remediation timeframe**, which is the time between when the resources are identified to require remediation and the time that the remediation is due.
-1. If you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
+1. If you don't want the resources to impact your secure score until they're overdue, select **Apply grace period**.
1. If you don't want either the owner or the owner's manager to receive weekly emails, clear the notification options. 1. Select **Create**.
If there are existing recommendations that match the definition of the governanc
> [!NOTE] > When you delete or disable a rule, all existing assignments and notifications will remain.
+> [!TIP]
+> Here are some sample use-cases for the at-scale experience:
+> - View and manage all governance rules effective in the organization using a single page.
+> - Create and apply rules on multiple scopes at once by using management scopes across clouds.
+> - Check the effective rules on a selected scope by using the scope filter.
+ ## Manually assigning owners and due dates for recommendation remediation For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't impact your secure score unless they become overdue.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
For more information about migrating servers from Defender for Endpoint to Defen
| Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#defender-for-servers-plans) | | Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) | | Required roles and permissions: | - To enable/disable the integration: **Security admin** or **Owner**<br>- To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 10/03/2022 Last updated : 11/14/2022 # What's new in Microsoft Defender for Cloud?
Updates in November include:
- [Protect containers in your entire GKE organization with Defender for Containers](#protect-containers-in-your-entire-gke-organization-with-defender-for-containers) - [Validate Defender for Containers protections with sample alerts](#validate-defender-for-containers-protections-with-sample-alerts)
+- [Governance rules at scale (Preview)](#governance-rules-at-scale-preview)
### Protect containers in your entire GKE organization with Defender for Containers
You can now create sample alerts also for Defender for Containers plan. The new
Learn more about [alert validation](alert-validation.md).
+### Governance rules at scale (Preview)
+
+We're happy to announce the new ability to apply governance rules at scale (Preview) in Defender for Cloud.
+
+With this new experience, security teams are able to define governance rules in bulk for various scopes (subscriptions and connectors). Security teams can accomplish this task by using management scopes such as Azure management groups, AWS master accounts or GCP organizations.
+
+Additionally, the Governance rules (Preview) page presents all of the available governance rules that are effective in the organization's environments.
+
+Learn more about the [new governance rules at-scale experience](governance-rules.md).
+
+> [!NOTE]
+> As of January 1, 2023, in order to experience the capabilities offered by Governance, you must have the [Defender CSPM plan](concept-cloud-security-posture-management.md) enabled on your subscription or connector.
+ ## October 2022 Updates in October include:
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
Previously updated : 10/31/2022 Last updated : 11/14/2022 #Customer intent: As an administrator, I want to evaluate Azure DNS Private Resolver so I can determine if I want to use it instead of my current DNS resolver service.
Virtual network links enable name resolution for virtual networks that are linke
## DNS forwarding rulesets
-A DNS forwarding ruleset is a group of DNS forwarding rules (up to 1,000) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint. For more information, see [DNS forwarding rulesets](private-resolver-endpoints-rulesets.md#dns-forwarding-rulesets).
+A DNS forwarding ruleset is a group of DNS forwarding rules (up to 25) that can be applied to one or more outbound endpoints, or linked to one or more virtual networks. This is a 1:N relationship. Rulesets are associated with a specific outbound endpoint. For more information, see [DNS forwarding rulesets](private-resolver-endpoints-rulesets.md#dns-forwarding-rulesets).
## DNS forwarding rules
event-grid Azure Active Directory Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/azure-active-directory-events.md
These events are triggered when a [User](/graph/api/resources/user) or [Group](/
| Event name | Description | | - | -- |
- | **Microsoft.Graph.UserCreated** | Triggered when a user in Azure AD is created. |
- | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is updated. |
- | **Microsoft.Graph.UserDeleted** | Triggered when a user in Azure AD is deleted. |
- | **Microsoft.Graph.GroupCreated** | Triggered when a group in Azure AD is created. |
- | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is updated. |
- | **Microsoft.Graph.GroupDeleted** | Triggered when a group in Azure AD is deleted. |
+ | **Microsoft.Graph.UserUpdated** | Triggered when a user in Azure AD is created or updated. |
+ | **Microsoft.Graph.UserDeleted** | Triggered when a user in Azure AD is permanently deleted. |
+ | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Azure AD is created or updated. |
+ | **Microsoft.Graph.GroupDeleted** | Triggered when a group in Azure AD is permanently deleted. |
+
+> [!NOTE]
+> By default, deleting a user or a group is only a soft delete operation, which means that the user or group is marked as deleted but the user or group object still exists. Microsoft Graph sends an updated event when users are soft deleted. To permanently delete a user, navigate to the **Delete users** page in the Azure portal and select **Delete permanently**. Steps to permanently delete a group are similar.
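If you'd rather trigger the permanent delete programmatically than through the portal, Microsoft Graph exposes soft-deleted objects under `directory/deletedItems`. The following is a rough sketch using plain REST calls from Python; the token and object ID are placeholders, and the caller needs sufficient directory permissions.

```python
# Permanently delete a soft-deleted user via Microsoft Graph (illustrative sketch).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
access_token = "<access token with sufficient directory permissions>"  # placeholder
user_id = "<object id of the soft-deleted user>"                       # placeholder

# Soft-deleted users remain under directory/deletedItems until restored or purged.
response = requests.delete(
    f"{GRAPH}/directory/deletedItems/{user_id}",
    headers={"Authorization": f"Bearer {access_token}"},
)
response.raise_for_status()  # 204 No Content means the permanent delete succeeded
```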
## Example event When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Azure AD event.
-### Microsoft.Graph.UserCreated event
-
-```json
-[{
- "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
- "type": "Microsoft.Graph.UserCreated",
- "source": "/tenants/<tenant-id>/applications/<application-id>",
- "subject": "Users/<user-id>",
- "time": "2022-05-24T22:24:31.3062901Z",
- "datacontenttype": "application/json",
- "specversion": "1.0",
- "data": {
- "changeType": "created",
- "clientState": "<guid>",
- "resource": "Users/<user-id>",
- "resourceData": {
- "@odata.type": "#Microsoft.Graph.User",
- "@odata.id": "Users/<user-id>",
- "id": "<user-id>",
- "organizationId": "<tenant-id>",
- "eventTime": "2022-05-24T22:24:31.3062901Z",
- "sequenceNumber": <sequence-number>
- },
- "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
- "subscriptionId": "<microsoft-graph-subscription-id>",
- "tenantId": "<tenant-id>
- }
-}]
-```
- ### Microsoft.Graph.UserUpdated event ```json
When an event is triggered, the Event Grid service sends data about that event t
} }] ```
-### Microsoft.Graph.GroupCreated event
-```json
-[{
- "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
- "type": "Microsoft.Graph.GroupCreated",
- "source": "/tenants/<tenant-id>/applications/<application-id>",
- "subject": "Groups/<group-id>",
- "time": "2022-05-24T22:24:31.3062901Z",
- "datacontenttype": "application/json",
- "specversion": "1.0",
- "data": {
- "changeType": "created",
- "clientState": "<guid>",
- "resource": "Groups/<group-id>",
- "resourceData": {
- "@odata.type": "#Microsoft.Graph.Group",
- "@odata.id": "Groups/<group-id>",
- "id": "<group-id>",
- "organizationId": "<tenant-id>",
- "eventTime": "2022-05-24T22:24:31.3062901Z",
- "sequenceNumber": <sequence-number>
- },
- "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
- "subscriptionId": "<microsoft-graph-subscription-id>",
- "tenantId": "<tenant-id>
- }
-}]
-```
### Microsoft.Graph.GroupUpdated event ```json
event-grid Event Grid Event Hubs Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-grid-event-hubs-integration.md
Title: 'Tutorial: Send Event Hubs data to data warehouse - Event Grid' description: Describes how to store Event Hubs captured data in Azure Synapse Analytics via Azure Functions and Event Grid triggers. Previously updated : 12/07/2020 Last updated : 11/14/2022 ms.devlang: csharp # Tutorial: Stream big data into a data warehouse
-Azure [Event Grid](overview.md) is an intelligent event routing service that enables you to react to notifications or events from apps and services. For example, it can trigger an Azure Function to process Event Hubs data that's captured to a Blob storage or Data Lake Storage. This [sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) shows you how to use Event Grid and Azure Functions to migrate captured Event Hubs data from blob storage to Azure Synapse Analytics, specifically a dedicated SQL pool.
+Azure [Event Grid](overview.md) is an intelligent event routing service that enables you to react to notifications or events from apps and services. For example, it can trigger an Azure function to process Event Hubs data that's captured to a Blob storage or Data Lake Storage. This [sample](https://github.com/Azure/azure-event-hubs/tree/master/samples/e2e/EventHubsCaptureEventGridDemo) shows you how to use Event Grid and Azure Functions to migrate captured Event Hubs data from blob storage to Azure Synapse Analytics, specifically a dedicated SQL pool.
[!INCLUDE [event-grid-event-hubs-functions-synapse-analytics.md](./includes/event-grid-event-hubs-functions-synapse-analytics.md)]
event-hubs Event Hubs Federation Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-federation-patterns.md
samples and the [Use Apache Kafka MirrorMaker with Event Hubs](event-hubs-kafka-
### Streams and order preservation
-Replication, either through Azure Functions or Azure Stream Analytics, does not
+Replication, either through Azure Functions or Azure Stream Analytics, doesn't
aim to assure the creation of exact 1:1 clones of a source Event Hub into a target Event Hub, but focuses on preserving the relative order of events where the application requires it. The application communicates this by grouping
existing properties, with values separated by semicolons.
### Failover
-If you are using replication for disaster recovery purposes, to protect against
+If you're using replication for disaster recovery purposes, to protect against
regional availability events in the Event Hubs service, or against network interruptions, any such failure scenario will require performing a failover from one Event Hub to the next, telling producers and/or consumers to use the secondary endpoint.
-For all failover scenarios, it is assumed that the required elements of the
+For all failover scenarios, it's assumed that the required elements of the
namespaces are structurally identical, meaning that Event Hubs and Consumer Groups are identically named and that shared access signature rules and/or role-based access control rules are set up in the same way. You can create (and
One candidate approach is to hold the information in DNS SRV records in a DNS
you control and point to the respective Event Hub endpoints. > [!IMPORTANT]
-> Mind that Event Hubs does not allow for its endpoints to be
-> directly aliased with CNAME records, which means you will use DNS as a
+> Mind that Event Hubs doesn't allow for its endpoints to be
+> directly aliased with CNAME records, which means you'll use DNS as a
> resilient lookup mechanism for endpoint addresses and not to directly resolve > IP address information. Assume you own the domain `example.com` and, for your application, a zone
-`test.example.com`. For two alternate Event Hubs, you will now create two
+`test.example.com`. For two alternate Event Hubs, you'll now create two
further nested zones, and an SRV record in each. The SRV records are, following common convention, prefixed with
from about the same position where processing was interrupted.
To realize either scenario and using the event processor of your respective Azure SDK,
-[you will create a new checkpoint store](event-processor-balance-partition-load.md#checkpointing)
+[you will create a new checkpoint store](event-processor-balance-partition-load.md#checkpoint)
and provide an initial partition position, based on the _timestamp_ that you want to resume processing from.
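As a rough sketch of that idea with the Python `azure-eventhub` and `azure-eventhub-checkpointstoreblob` packages (connection strings and names below are placeholders), you point the consumer at a fresh checkpoint store and pass the resume timestamp as the starting position; the timestamp only applies to partitions that don't yet have a checkpoint.

```python
# Resume from a point in time against the failover target Event Hub (illustrative sketch).
from datetime import datetime, timezone
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# A new, empty checkpoint store for the failover target (placeholders).
checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<storage connection string>", "<new checkpoint container>"
)

client = EventHubConsumerClient.from_connection_string(
    "<secondary event hub connection string>",
    consumer_group="$Default",
    eventhub_name="<event hub>",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    print(event.body_as_str())                  # replace with your processing logic
    partition_context.update_checkpoint(event)  # record progress in the new store

# Resume slightly before the interruption; checkpoints, once written, take precedence.
resume_from = datetime(2022, 11, 14, 12, 0, tzinfo=timezone.utc)
with client:
    client.receive(on_event=on_event, starting_position=resume_from)
```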
Variations of these patterns are:
those Event Hubs containing the same streams, no matter where events are produced.
-The first two pattern variations are trivial and do not differ from plain
+The first two pattern variations are trivial and don't differ from plain
replication tasks. The last scenario requires excluding already replicated events from being
Examples for such modifications are:
input event transfer or aggregate a set of events that are then transferred together. - **_Validation_** - Event data from external sources often need to be checked
- for whether they are in compliance with a set of rules before they may be
+ for whether they're in compliance with a set of rules before they may be
forwarded. The rules may be expressed using schemas or code. Events that are found not to be in compliance may be dropped, with the issue noted in logs, or may be forwarded to a special target destination to handle them further.
Examples for such modifications are:
contained in the events. - **_Filtering_** - Some events arriving from a source might have to be withheld from the target based on some rule. A filter tests the event against a rule
- and drops the event if the event does not match the rule. Filtering out
+ and drops the event if the event doesn't match the rule. Filtering out
duplicate events by observing certain criteria and dropping subsequent events with the same values is a form of filtering. - **_Cryptography_** - A replication task may have to decrypt content arriving
select * into dest2Output from inputSource where Info = 2
The log projection pattern flattens the event stream onto an indexed database, with events becoming records in the database. Typically, events are added to the same collection or table, and the Event Hub partition key becomes part of the
-the primary key looking for making the record unique.
+primary key to make the record unique.
Log projection can produce a time-series historian of your event data or a compacted view, whereby only the latest event is retained for each partition
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Title: Balance partition load across multiple instances - Azure Event Hubs | Microsoft Docs description: Describes how to balance partition load across multiple instances of your application using an event processor and the Azure Event Hubs SDK. Previously updated : 09/15/2021 Last updated : 11/14/2022 # Balance partition load across multiple instances of your application
-To scale your event processing application, you can run multiple instances of the application and have it balance the load among themselves. In the older versions, [EventProcessorHost](event-hubs-event-processor-host.md) allowed you to balance the load between multiple instances of your program and checkpoint events when receiving. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-jav).
-This article describes a sample scenario for using multiple instances to read events from an event hub and then give you details about features of event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
+To scale your event processing application, you can run multiple instances of the application and have the load balanced among themselves. In the older versions, [EventProcessorHost](event-hubs-event-processor-host.md) allowed you to balance the load between multiple instances of your program and checkpoint events when receiving the events. In the newer versions (5.0 onwards), **EventProcessorClient** (.NET and Java), or **EventHubConsumerClient** (Python and JavaScript) allows you to do the same. The development model is made simpler by using events. You subscribe to the events that you're interested in by registering an event handler. If you're using the old version of the client library, see the following migration guides: [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md), [Java](https://github.com/Azure/azure-sdk-for-jav).
+
+This article describes a sample scenario for using multiple instances of client applications to read events from an event hub. It also gives you details about features of the event processor client, which allows you to receive events from multiple partitions at once and load balance with other consumers that use the same event hub and consumer group.
> [!NOTE] > The key to scale for Event Hubs is the idea of partitioned consumers. In contrast to the [competing consumers](/previous-versions/msp-n-p/dn568101(v=pandp.10)) pattern, the partitioned consumer pattern enables high scale by removing the contention bottleneck and facilitating end to end parallelism.
As an example scenario, consider a home security company that monitors 100,000 h
Each sensor pushes data to an event hub. The event hub is configured with 16 partitions. On the consuming end, you need a mechanism that can read these events, consolidate them (filter, aggregate, and so on) and dump the aggregate to a storage blob, which is then projected to a user-friendly web page.
-## Write the consumer application
+## Consumer application
When designing the consumer in a distributed environment, the scenario must handle the following requirements: 1. **Scale:** Create multiple consumers, with each consumer taking ownership of reading from a few Event Hubs partitions. 2. **Load balance:** Increase or reduce the consumers dynamically. For example, when a new sensor type (for example, a carbon monoxide detector) is added to each home, the number of events increases. In that case, the operator (a human) increases the number of consumer instances. Then, the pool of consumers can rebalance the number of partitions they own, to share the load with the newly added consumers. 3. **Seamless resume on failures:** If a consumer (**consumer A**) fails (for example, the virtual machine hosting the consumer suddenly crashes), then other consumers can pick up the partitions owned by **consumer A** and continue. Also, the continuation point, called a *checkpoint* or *offset*, should be at the exact point at which **consumer A** failed, or slightly before that.
-4. **Consume events:** While the previous three points deal with the management of the consumer, there must be code to consume the events and do something useful with it. For example, aggregate it and upload it to blob storage.
+4. **Consume events:** While the previous three points deal with the management of the consumer, there must be code to consume the events and do something useful with them. For example, aggregate the data and upload it to blob storage.
## Event processor or consumer client
-You don't need to build your own solution to meet these requirements. The Azure Event Hubs SDKs provide this functionality. In .NET or Java SDKs, you use an event processor client (EventProcessorClient), and in Python and JavaScript SDKs, you use EventHubConsumerClient. In the old version of SDK, it was the event processor host (EventProcessorHost) that supported these features.
+You don't need to build your own solution to meet these requirements. The Azure Event Hubs SDKs provide this functionality. In .NET or Java SDKs, you use an event processor client (`EventProcessorClient`), and in Python and JavaScript SDKs, you use `EventHubConsumerClient`. In the old version of SDK, it was the event processor host (`EventProcessorHost`) that supported these features.
-For the majority of production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault tolerant manner while providing a means to checkpoint its progress. Event processor clients can work cooperatively within the context of a consumer group for a given event hub. Clients will automatically manage distribution and balancing of work as instances become available or unavailable for the group.
+For most production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault tolerant manner while providing a means to checkpoint its progress. Event processor clients can work cooperatively within the context of a consumer group for a given event hub. Clients will automatically manage distribution and balancing of work as instances become available or unavailable for the group.
-## Partition ownership tracking
+## Partition ownership
An event processor instance typically owns and processes events from one or more partitions. Ownership of partitions is evenly distributed among all the active event processor instances associated with an event hub and consumer group combination.
-Each event processor is given a unique identifier and claims ownership of partitions by adding or updating an entry in a checkpoint store. All event processor instances communicate with this store periodically to update its own processing state as well as to learn about other active instances. This data is then used to balance the load among the active processors. New instances can join the processing pool to scale up. When instances go down, either because of failures or to scale down, partition ownership is gracefully transferred to other active processors.
+Each event processor is given a unique identifier and claims ownership of partitions by adding or updating an entry in a checkpoint store. All event processor instances communicate with this store periodically to update their own processing state and to learn about other active instances. This data is then used to balance the load among the active processors. New instances can join the processing pool to scale up. When instances go down, either because of failures or to scale down, partition ownership is gracefully transferred to other active processors.
Partition ownership records in the checkpoint store keep track of Event Hubs namespace, event hub name, consumer group, event processor identifier (also known as owner), partition ID, and the last modified time.
-| Event Hubs namespace | Event Hub name | **Consumer group** | Owner | Partition ID | Last modified time |
+| Event Hubs namespace | Event hub name | **Consumer group** | Owner | Partition ID | Last modified time |
| - | -- | :-- | :-- | :-- | : | | mynamespace.servicebus.windows.net | myeventhub | myconsumergroup | 3be3f9d3-9d9e-4c50-9491-85ece8334ff6 | 0 | 2020-01-15T01:22:15 | | mynamespace.servicebus.windows.net | myeventhub | myconsumergroup | f5cc5176-ce96-4bb4-bbaa-a0e3a9054ecf | 1 | 2020-01-15T01:22:17 |
Partition ownership records in the checkpoint store keep track of Event Hubs nam
| | | : | | | | | mynamespace.servicebus.windows.net | myeventhub | myconsumergroup | 844bd8fb-1f3a-4580-984d-6324f9e208af | 15 | 2020-01-15T01:22:00 |
-Each event processor instance acquires ownership of a partition and starts processing the partition from last known [checkpoint](#checkpointing). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance, and the checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point of time, there is at most one processor that receives events from a partition.
+Each event processor instance acquires ownership of a partition and starts processing the partition from the last known [checkpoint](#checkpoint). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance. The checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point in time, there is at most one processor that receives events from a partition.
## Receive messages
-When you create an event processor, you specify the functions that will process events and errors. Each call to the function that processes events delivers a single event from a specific partition. It's your responsibility to handle this event. If you want to make sure the consumer processes every message at least once, you need to write your own code with retry logic. But be cautious about poisoned messages.
+When you create an event processor, you specify functions that will process events and errors. Each call to the function that processes events delivers a single event from a specific partition. It's your responsibility to handle this event. If you want to make sure the consumer processes every message at least once, you need to write your own code with retry logic. But be cautious about poisoned messages.
We recommend that you do things relatively fast. That is, do as little processing as possible. If you need to write to storage and do some routing, it's better to use two consumer groups and have two event processors.
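As a rough illustration of that shape with the Python `EventHubConsumerClient` (the .NET and Java `EventProcessorClient` follow the same pattern), the handlers below are a minimal sketch; connection strings and names are placeholders.

```python
# One consumer instance; run several copies and they balance the partitions between them.
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<storage connection string>", "<checkpoint container>"    # placeholders
)

client = EventHubConsumerClient.from_connection_string(
    "<event hub connection string>",                            # placeholder
    consumer_group="$Default",
    eventhub_name="<event hub>",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Keep this fast: do minimal work here, then mark the event as processed.
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)

def on_error(partition_context, error):
    # Called for errors raised while receiving or during load balancing.
    partition_id = partition_context.partition_id if partition_context else None
    print("Error on partition", partition_id, error)

with client:
    client.receive(on_event=on_event, on_error=on_error, starting_position="-1")
```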
-## Checkpointing
+## Checkpoint
*Checkpointing* is a process by which an event processor marks or commits the position of the last successfully processed event within a partition. Marking a checkpoint is typically done within the function that processes the events and occurs on a per-partition basis within a consumer group.
When the checkpoint is performed to mark an event as processed, an entry in chec
## Thread safety and processor instances
-By default, the function that processes the events is called sequentially for a given partition. Subsequent events and calls to this function from the same partition queue up behind the scenes as the event pump continues to run in the background on other threads. Events from different partitions can be processed concurrently and any shared state that is accessed across partitions have to be synchronized.
+By default, the function that processes events is called sequentially for a given partition. Subsequent events and calls to this function from the same partition queue up behind the scenes as the event pump continues to run in the background on other threads. Events from different partitions can be processed concurrently and any shared state that is accessed across partitions has to be synchronized.
## Next steps See the following quick starts:
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
Azure Front Door also supports using managed identities to access Key Vault certificate. A managed identity generated by Azure Active Directory (Azure AD) allows your Azure Front Door instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to create or rotate any secrets. For more information about managed identities, seeΓÇ»[What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
+> [!IMPORTANT]
+> Managed identity for Azure Front Door is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ > [!NOTE] > Once you enable managed identities in Azure Front Door and grant proper permissions to access Key Vault, Azure Front Door will always use managed identities to access Key Vault for customer certificate. >
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
# Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
-> [!NOTE]
-> Migration capability for Azure Front Door is currently in Public Preview without an SLA and isn't recommended for production environments.
Azure Front Door Standard and Premium tiers bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users with the Microsoft global network. This article guides you through migrating your Front Door (classic) profile to either a Standard or Premium tier profile so that you can begin using these latest features.
+> [!IMPORTANT]
+> Migration capability for Azure Front Door is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ ## Prerequisites * Review the [About Front Door tier migration](tier-migration.md) article.
frontdoor Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/routing-methods.md
na Previously updated : 03/11/2022 Last updated : 11/08/2022
All Front Door configurations have backend health monitoring and automated insta
> [!NOTE] > When you use the [Front Door rules engine](front-door-rules-engine.md), you can configure a rule to [override route configurations](front-door-rules-engine-actions.md#route-configuration-overrides) in Azure Front Door Standard and Premium tier or [override the backend pool](front-door-rules-engine-actions.md#route-configuration-overrides) in Azure Front Door (classic) for a request. The origin group or backend pool set by the rules engine overrides the routing process described in this article.
+## Overall decision flow
+
+The following diagram shows the overall decision flow:
++
+The decision steps are:
+
+1. **Available origins:** Select all origins that are enabled and returned healthy (200 OK) for the health probe.
+ - *Example: Suppose there are six origins A, B, C, D, E, and F, and among them C is unhealthy and E is disabled. The list of available origins is A, B, D, and F.*
+1. **Priority:** The top priority origins among the available ones are selected.
+ - *Example: Suppose origin A, B, and D have priority 1 and origin F has a priority of 2. Then, the selected origins will be A, B, and D.*
+1. **Latency signal (based on health probe):** Select the origins within the allowable latency range from the Front Door environment where the request arrived. This signal is based on the latency sensitivity setting on the origin group, as well as the latency of the closer origins.
+ - *Example: Suppose Front Door has measured the latency from the environment where the request arrived to origin A at 15 ms, while the latency for B is 30 ms and D is 60 ms away. If the origin group's latency sensitivity is set to 30 ms, then the lowest latency pool consists of origins A and B, because D is more than 30 ms away from the closest origin, which is A.*
+1. **Weights:** Lastly, Azure Front Door will round robin the traffic among the final selected group of origins in the ratio of weights specified.
+ - *Example: If origin A has a weight of 5 and origin B has a weight of 8, then the traffic will be distributed in the ratio of 5:8 among origins A and B.*
+
+If session affinity is enabled, then the first request in a session follows the flow listed above. Subsequent requests are sent to the origin selected in the first request.
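As a mental model only (this is not the actual Front Door implementation), the four steps can be sketched like this, reusing the example values from the list above:

```python
# Illustrative model of the selection flow described above (not the real implementation).
import random

def pick_origin(origins, latency_ms, sensitivity_ms):
    # 1. Available: enabled origins that passed the health probe.
    pool = [o for o in origins if o["enabled"] and o["healthy"]]
    if not pool:
        return None
    # 2. Priority: keep only the best (lowest) priority value.
    best = min(o["priority"] for o in pool)
    pool = [o for o in pool if o["priority"] == best]
    # 3. Latency signal: keep origins within sensitivity_ms of the fastest one.
    fastest = min(latency_ms[o["name"]] for o in pool)
    pool = [o for o in pool if latency_ms[o["name"]] <= fastest + sensitivity_ms]
    # 4. Weights: distribute traffic in proportion to the configured weights.
    return random.choices(pool, weights=[o["weight"] for o in pool], k=1)[0]

origins = [
    {"name": "A", "enabled": True, "healthy": True, "priority": 1, "weight": 5},
    {"name": "B", "enabled": True, "healthy": True, "priority": 1, "weight": 8},
    {"name": "D", "enabled": True, "healthy": True, "priority": 1, "weight": 1},
    {"name": "F", "enabled": True, "healthy": True, "priority": 2, "weight": 1},
]
latency = {"A": 15, "B": 30, "D": 60, "F": 20}
print(pick_origin(origins, latency, sensitivity_ms=30)["name"])  # A or B, roughly 5:8
```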
+ ## <a name = "latency"></a>Lowest latencies based traffic-routing Deploying origins in two or more locations across the globe can improve the responsiveness of your applications by routing traffic to the destination that is 'closest' to your end users. Latency is the default traffic-routing method for your Front Door configuration. This routing method forwards requests from your end users to the closest origin behind Azure Front Door. This routing mechanism combined with the anycast architecture of Azure Front Door ensures that each of your end users gets the best performance based on their location. The 'closest' origin isn't necessarily closest as measured by geographic distance. Instead, Azure Front Door determines the closest origin by measuring network latency. Read more about [Azure Front Door routing architecture](front-door-routing-architecture.md).
-The following table shows the overall decision flow:
-
+Each Front Door environment measures the origin latency separately. This means that different users in different locations are routed to the origin with the best performance for that environment.
>[!NOTE] > By default, the latency sensitivity property is set to 0 ms. With this setting the request is always forwarded to the fastest available origins and weights on the origin don't take effect unless two origins have the same network latency.
The weighted method enables some useful scenarios:
* **Application migration to Azure**: Create an origin group with both Azure and external origins. Adjust the weight of the origins to prefer the new origins. You can gradually set this up starting with having the new origins disabled, then assigning them the lowest weights, slowly increasing it to levels where they take most traffic. Then finally disabling the less preferred origins and removing them from the group. * **Cloud-bursting for additional capacity**: Quickly expand an on-premises deployment into the cloud by putting it behind Front Door. When you need extra capacity in the cloud, you can add or enable more origins and specify what portion of traffic goes to each origin.
-## <a name = "affinity"></a>Session Affinity
+## <a name = "affinity"></a>Session affinity
By default, without session affinity, Azure Front Door forwards requests originating from the same client to different origins. However, certain stateful applications, and certain scenarios, prefer that subsequent requests from the same user go to the same origin that processed the initial request. The cookie-based session affinity feature is useful when you want to keep a user session on the same origin. When you use managed cookies with SHA256 of the origin URL as the identifier in the cookie, Azure Front Door can direct ensuing traffic from a user session to the same origin for processing.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
# About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
-> [!NOTE]
-> Migration capability for Azure Front Door is currently in Public Preview without an SLA and isn't recommended for production environments.
Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you can secure and accelerate your web applications to bring a better experience to your customers.
+> [!IMPORTANT]
+> Migration capability for Azure Front Door is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Azure recommends migrating to the newer tiers to benefit from the new features and improvements over the Classic tier. To help with the migration process, Azure Front Door provides a zero-downtime migration to move your workload from Azure Front Door (classic) to either the Standard or Premium tier. In this article, you'll learn about the migration process, understand the breaking changes involved, and find out what to do before, during, and after the migration.
frontdoor Tier Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade.md
Azure Front Door supports upgrading from Standard to Premium tier for more advanced capabilities and an increase in quota limits. The upgrade won't cause any downtime to your services or applications. For more information about the differences between Standard and Premium tier, see [Tier comparison](standard-premium/tier-comparison.md).
+> [!IMPORTANT]
+> Upgrading Azure Front Door tier is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
This article walks you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charged the Azure Front Door Premium monthly base fee at an hourly rate.
hdinsight Hdinsight Hadoop Hue Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-hue-linux.md
description: Learn how to install Hue on HDInsight clusters and use tunneling to
Previously updated : 03/31/2020 Last updated : 11/14/2022 # Install and use Hue on HDInsight Hadoop clusters
Use the information in the table below for your Script Action. See [Customize HD
|Bash script URI|`https://hdiconfigactions.blob.core.windows.net/linuxhueconfigactionv02/install-hue-uber-v02.sh`| |Node type(s):|Head|
-## Use Hue with HDInsight clusters
-
-You can only have one user account with Hue on regular clusters. For multi-user access, enable [Enterprise Security Package](./domain-joined/hdinsight-security-overview.md) on the cluster. SSH Tunneling is the only way to access Hue on the cluster once it's running. Tunneling via SSH allows the traffic to go directly to the headnode of the cluster where Hue is running. After the cluster has finished provisioning, use the following steps to use Hue on an HDInsight cluster.
-
-> [!NOTE]
-> We recommend using Firefox web browser to follow the instructions below.
-
-1. Use the information in [Use SSH Tunneling to access Apache Ambari web UI, ResourceManager, JobHistory, NameNode, Oozie, and other web UI's](hdinsight-linux-ambari-ssh-tunnel.md) to create an SSH tunnel from your client system to the HDInsight cluster, and then configure your Web browser to use the SSH tunnel as a proxy.
-
-1. Use [ssh command](./hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
-
- ```cmd
- ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
- ```
-
-1. Once connected, use the following command to obtain the fully qualified domain name of the primary headnode:
-
- ```bash
- hostname -f
- ```
-
- This will return a name similar to the following:
-
- ```output
- myhdi-nfebtpfdv1nubcidphpap2eq2b.ex.internal.cloudapp.net
- ```
-
- This is the hostname of the primary headnode where the Hue website is located.
-
-1. Use the browser to open the Hue portal at `http://HOSTNAME:8888`. Replace HOSTNAME with the name you obtained in the previous step.
-
- > [!NOTE]
- > When you log in for the first time, you will be prompted to create an account to log in to the Hue portal. The credentials you specify here will be limited to the portal and are not related to the admin or SSH user credentials you specified while provision the cluster.
-
- :::image type="content" source="./media/hdinsight-hadoop-hue-linux/hdinsight-hue-portal-login.png" alt-text="HDInsight hue portal login window":::
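The tunneling approach in the steps above is standard OpenSSH dynamic port forwarding. As a minimal sketch (assuming the default `sshuser` account and an arbitrary unused local port such as 9876), the tunnel can be opened like this:

```bash
# Open a SOCKS proxy on local port 9876 that routes traffic through the cluster head node.
# Replace CLUSTERNAME with the name of your HDInsight cluster.
ssh -C -N -D 9876 sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```

Point your browser at a SOCKS v5 proxy on `localhost:9876` (with remote DNS enabled), and then open `http://HOSTNAME:8888` as described above.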
- ### Run a Hive query 1. From the Hue portal, select **Query Editors**, and then select **Hive** to open the Hive editor.
You can only have one user account with Hue on regular clusters. For multi-user
## Next steps
-[Customize HDInsight clusters with Script Actions](hdinsight-hadoop-customize-cluster-linux.md)
+[Customize HDInsight clusters with Script Actions](hdinsight-hadoop-customize-cluster-linux.md)
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Title: Tutorial - Azure IoT in-store analytics | Microsoft Docs
-description: This tutorial shows how to deploy and use create an in-store analytics retail application in IoT Central.
+description: This tutorial shows how to create, deploy, and use an in-store analytics retail application in IoT Central.
Use the application template to:
The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard. ### Condition monitoring sensors (1)
-An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. It is reflected by different kinds of sensors on the far left of the architecture diagram above.
+An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. It's reflected by different kinds of sensors on the far left of the architecture diagram above.
### Gateway devices (2)
To select a predefined application theme:
1. Select **Settings** on the masthead.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/settings-icon.png" alt-text="Azure IoT Central application settings.":::
- 2. Select a new **Theme**. 3. Select **Save**.
To create a custom theme:
1. Expand the left pane, if not already expanded.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/dashboard-expand.png" alt-text="Azure IoT Central left pane.":::
-
-1. Select **Customization > App appearance**.
+1. Select **Customization > Appearance**.
-1. Use the **Change** button to choose an image to upload as the **Application logo**. Optionally, specify a value for **Logo alt text**.
+1. Use the **Change** button to choose an image to upload as the masthead logo. Optionally, specify a value for **Logo alt text**.
1. Use the **Change** button to choose a **Browser icon** image that will appear on browser tabs.
-1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes. For the **Header**, add *#008575*. For the **Accent**, add *#A1F3EA*.
-
-1. Select **Save**.
-
- After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
+1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes. For the **Header**, add *#008575*. For the **Accent**, add *#A1F3EA*.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/saved-application-settings.png" alt-text="Azure IoT Central updated application settings.":::
+1. Select **Save**. After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
To update the application image:
-1. Select **Customization > App appearance.**
+1. Select **Application > Management.**
-1. Use the **Select image** button to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
+1. Select **Change** to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
1. Select **Save**. 1. Optionally, navigate to the **My Apps** view on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website. The application tile displays the updated application image.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/customize-application-image.png" alt-text="Azure IoT Central customize application image.":::
- ### Create device templates You can create device templates that enable you and the application operators to configure and manage devices. You can create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
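If you prefer to script this step, device templates can also be imported with the Azure CLI `azure-iot` extension. The following is a rough sketch only; the app ID, device template ID, and JSON file name are placeholders, so verify the command and its parameters against your installed extension version:

```azurecli
# Install the IoT extension if it isn't already present.
az extension add --name azure-iot

# Import a device template definition (DTDL JSON) into an IoT Central application.
# <app-id>, the template ID, and the file path below are illustrative placeholders.
az iot central device-template create \
    --app-id <app-id> \
    --device-template-id "dtmi:contoso:mysensor;1" \
    --content ./my-sensor-template.json
```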
To add a RuuviTag device template to your application:
1. Select **+ New** to create a new device template.
-1. Find and select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
+1. Find and select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
1. Select **Next: Review**.
To add a RuuviTag device template to your application:
1. Select **Device templates** on the left pane. The page displays all device templates included in the application template, and the RuuviTag device template you just added.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/device-templates-list.png" alt-text="Azure IoT Central RuuviTag sensor device template.":::
### Customize device templates
Here, you use the first two methods to customize the device template for your Ru
To customize the built-in interfaces of the RuuviTag device template:
-1. Select **Device Templates** in the left pane.
+1. Select **Device Templates** in the left pane.
-1. Select the template for RuuviTag sensors.
+1. Select the template for RuuviTag sensors.
1. Hide the left pane. The summary view of the template displays the device capabilities.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Azure IoT Central RuuviTag device template summary view.":::
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Screenshot showing the in-store analytics application RuuviTag device template." lightbox="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png":::
-1. Select **RuvviTag** model in the RuuviTag device template menu.
+1. Select **RuvviTag** model in the RuuviTag device template menu.
1. Scroll in the list of capabilities and find the `RelativeHumidity` telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
For the `RelativeHumidity` telemetry type, make the following changes:
1. Select **Save** to save your changes.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-customize.png" alt-text="Screenshot that shows the Customize screen and highlights the Save button.":::
- To add a cloud property to a device template in your application: Specify the following values to create a custom property to store the location of each device:
Specify the following values to create a custom property to store the location o
1. Select *String* in the **Schema** dropdown. A string type enables you to associate a location name string with any device based on the template. For instance, you could associate an area in a store with each device.
-1. Set **Minimum Length** to *2*.
+1. Set **Minimum Length** to *2*.
1. Set **Trim Whitespace** to **On**. 1. Select **Save** to save your custom cloud property.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-template-cloud-property.png" alt-text="Azure IoT Central RuuviTag device template customization.":::
-
-1. Select **Publish**.
+1. Select **Publish**.
Publishing a device template makes it visible to application operators. After you've published a template, use it to generate simulated devices for testing, or to connect real devices to your application. If you already have devices connected to your application, publishing a customized template pushes the changes to the devices. ### Add devices
-After you have created and customized device templates, it's time to add devices.
+After you have created and customized device templates, it's time to add devices.
For this tutorial, you use the following set of real and simulated devices to build the application:
Complete the steps in the following two articles to connect a real Rigado gatewa
As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met. A rule is associated with a device template and one or more devices, and contains conditions that must be met based on device telemetry or events. A rule also has one or more associated actions. The actions may include sending email notifications, or triggering a webhook action to send data to other services. The **In-store analytics - checkout** application template includes some predefined rules for the devices in the application.
-In this section, you create a new rule that checks the maximum relative humidity level based on the RuuviTag sensor telemetry. You add an action to the rule so that if the humidity exceeds the maximum, the application sends an email.
+In this section, you create a new rule that checks the maximum relative humidity level based on the RuuviTag sensor telemetry. You add an action to the rule so that if the humidity exceeds the maximum, the application sends an email.
-To create a rule:
+To create a rule:
1. Expand the left pane.
To create a rule:
1. Select **+ New**.
-1. Enter *Humidity level* as the name of the rule.
+1. Enter *Humidity level* as the name of the rule.
-1. Choose the RuuviTag device template in **Target devices**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
+1. Choose the RuuviTag device template in **Device template**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
-1. Choose `Humidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
+1. Choose `RelativeHumidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
-1. Choose `Is greater than` as the **Operator**.
+1. Choose `Is greater than` as the **Operator**.
-1. Enter a typical upper range indoor humidity level for your environment as the **Value**. For example, enter *65*. You've set a condition for your rule that occurs when relative humidity in any RuuviTag real or simulated sensor exceeds this value. You may need to adjust the value up or down depending on the normal humidity range in your environment.
-
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/rules-add-conditions.png" alt-text="Azure IoT Central add rule conditions.":::
+1. Enter a typical upper range indoor humidity level for your environment as the **Value**. For example, enter *65*. You've set a condition for your rule that occurs when relative humidity in any RuuviTag real or simulated sensor exceeds this value. You may need to adjust the value up or down depending on the normal humidity range in your environment.
To add an action to the rule: 1. Select **+ Email**.
-1. Enter *High humidity notification* as the friendly **Display name** for the action.
+1. Enter *High humidity notification* as the friendly **Display name** for the action.
1. Enter the email address associated with your account in **To**. If you use a different email, the address you use must be for a user who has been added to the application. The user also needs to sign in and out at least once.
To add an action to the rule:
1. Select **Done** to complete the action.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/rules-add-action.png" alt-text="Azure IoT Central add actions to rules.":::
-
-1. Select **Save** to save and activate the new rule.
+1. Select **Save** to save and activate the new rule.
Within a few minutes, the specified email account should begin to receive emails. The application sends email each time a sensor indicates that the humidity level exceeded the value in your condition.
In this tutorial, you learned how to:
* Connect devices to your application * Add rules and actions to monitor conditions
-Now that you've created an Azure IoT Central condition monitoring application, here is the suggested next step:
+Now that you've created an Azure IoT Central condition monitoring application, here's the suggested next step:
> [!div class="nextstepaction"] > [Customize the dashboard](./tutorial-in-store-analytics-customize-dashboard.md)
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
To customize the dashboard, you have to edit the default dashboard in your appli
1. Open the condition monitoring application that you created in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
-1. Select **Dashboard settings** and enter **Name** for your dashboard and select **Save**.
-
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/new-dashboard.png" alt-text="Azure IoT Central new dashboard.":::
+1. Select **Dashboard settings**, enter a **Name** for your dashboard, and select **Save**.
## Customize image tiles on the dashboard
-An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you drag, drop, and resize tiles to customize a dashboard layout. There are several types of tiles for displaying content. Image tiles contain images, and you can add a URL that enables users to click the image. Label tiles display plain text. Markdown tiles contain formatted content and let you set an image, a URL, a title, and markdown code that renders as HTML. Telemetry, property, or command tiles display device-specific data.
+An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you drag, drop, and resize tiles to customize a dashboard layout. There are several types of tiles for displaying content. Image tiles contain images, and you can add a URL that enables users to click the image. Label tiles display plain text. Markdown tiles contain formatted content and let you set an image, a URL, a title, and markdown code that renders as HTML. Telemetry, property, or command tiles display device-specific data.
In this section, you learn how to customize image tiles on the dashboard. To customize the image tile that displays a brand image on the dashboard:
-1. Select **Edit** on the dashboard toolbar.
+1. Select **Edit** on the dashboard toolbar.
1. Select **Edit** on the image tile that displays the Northwind brand image. 1. Change the **Title**. The title appears when a user hovers over the image.
-1. Select **Image**. A dialog opens and enables you to upload a custom image.
+1. Select **Image**. A dialog opens and enables you to upload a custom image.
1. Optionally, specify a URL for the image. 1. Select **Update**
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png" alt-text="Azure IoT Central save brand image.":::
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png" alt-text="Screenshot showing the in-store analytics application dashboard brand image tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png":::
-1. Optionally, select **Configure** on the tile titled **Documentation**, and specify a URL for support content.
+1. Optionally, select **Configure** on the tile titled **Documentation**, and specify a URL for support content.
To customize the image tile that displays a map of the sensor zones in the store:
-1. Select **Configure** on the image tile that displays the default store zone map.
+1. Select **Configure** on the image tile that displays the default store zone map.
-1. Select **Image**, and use the dialog to upload a custom image of a store zone map.
+1. Select **Image**, and use the dialog to upload a custom image of a store zone map.
1. Select **Update**.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png" alt-text="Azure IoT Central save store map.":::
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png" alt-text="Screenshot showing the in-store analytics application dashboard store map tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png":::
The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli. In this tutorial, you'll associate sensors with these zones to provide telemetry.
In this section, you rearrange the dashboard in the **In-store analytics - check
To remove tiles that you don't plan to use in your application:
-1. Select **Edit** on the dashboard toolbar.
+1. Select **Edit** on the dashboard toolbar.
-1. Select **ellipsis** and **Delete** to remove the following tiles: **Back to all zones**, **Visit store dashboard**, **Occupancy**, **Warm-up checkout zone**, **Cool-down checkout zone**, **Occupancy sensor settings**, **Thermostat sensor settings**, and **Environment conditions** and all three tiles associated with **Checkout 3**. The Contoso store dashboard doesn't use these tiles.
+1. Select **ellipsis** and **Delete** to remove the following tiles: **Back to all zones**, **Visit store dashboard**, **Warm-up checkout zone**, **Cool-down checkout zone**, **Occupancy sensor settings**, **Thermostat settings**, **Wait time**, and **Environment conditions** and all three tiles associated with **Checkout 3**. The Contoso store dashboard doesn't use these tiles.
1. Select **Save**. Removing unused tiles frees up space in the edit page, and simplifies the dashboard view for operators.
To rearrange the remaining tiles:
1. Select **Save**.
-1. View your layout changes.
+1. View your layout changes.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/firmware-battery-tiles.png" alt-text="Azure IoT Central firmware battery tiles.":::
## Add telemetry tiles to display conditions
-After you customize the dashboard layout, you're ready to add tiles to show telemetry. To create a telemetry tile, select a device template and device instance, then select device-specific telemetry to display in the tile. The **In-store analytics - checkout** application template includes several telemetry tiles in the dashboard. The four tiles in the two checkout zones display telemetry from the simulated occupancy sensor. The **People traffic** tile shows counts in the two checkout zones.
+After you customize the dashboard layout, you're ready to add tiles to show telemetry. To create a telemetry tile, select a device template and device instance, then select device-specific telemetry to display in the tile. The **In-store analytics - checkout** application template includes several telemetry tiles in the dashboard. The four tiles in the two checkout zones display telemetry from the simulated occupancy sensor. The **People traffic** tile shows counts in the two checkout zones.
-In this section, you add two more telemetry tiles to show environmental telemetry from the RuuviTag sensors you added in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
+In this section, you add two more telemetry tiles to show environmental telemetry from the RuuviTag sensors you added in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
To add tiles to display environmental data from the RuuviTag sensors: 1. Select **Edit**.
-1. Select `RuuviTag` in the **Device template** list.
+1. Select `RuuviTag` in the **Device template** list.
-1. Select a **Device instance** of one of the two RuuviTag sensors. In the example Contoso store, select `Zone 1 Ruuvi` to create a telemetry tile for Zone 1.
+1. Select a **Device instance** of one of the two RuuviTag sensors. In the example Contoso store, select `Zone 1 Ruuvi` to create a telemetry tile for Zone 1.
-1. Select `Relative humidity` and `temperature` in the **Telemetry** list. These are the telemetry items that display for each zone on the tile.
+1. Select `Relative humidity` and `Temperature` in the **Telemetry** list. These are the telemetry items that display for each zone on the tile.
-1. Select **Combine**. A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
+1. Select **Add tile**. A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
-1. Select **Configure** on the new tile for the RuuviTag sensor.
+1. Select **Configure** on the new tile for the RuuviTag sensor.
-1. Change the **Title** to *Zone 1 environment*.
+1. Change the **Title** to *Zone 1 environment*.
-1. Select **Update configuration**.
+1. Select **Update**.
1. Repeat the previous steps to create a tile for the second sensor instance. Set the **Title** to *Zone 2 environment* and then select **Update**.
-1. Drag the tile titled **Zone 2 environment** below the **Thermostat firmware** tile.
+1. Drag the tile titled **Zone 2 environment** below the **Thermostat firmware** tile.
-1. Drag the tile titled **Zone 1 environment** below the **People traffic** tile.
+1. Drag the tile titled **Zone 1 environment** below the **People traffic** tile.
1. Select **Save**. The dashboard displays zone telemetry in the two new tiles.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/all-ruuvitag-tiles.png" alt-text="Azure IoT Central all RuuviTag tiles.":::
To edit the **People traffic** tile to show telemetry for only two checkout zones:
-1. Select **Edit**.
-
-1. Select **Configure** on the **People traffic** tile.
-
-1. In **Telemetry** select **count 1**, **count 2**, and **count 3**.
-
-1. Select **Update configuration**. It clears the existing configuration on the tile.
-
-1. Select **Configure** again on the **People traffic** tile.
-
-1. In **Telemetry** select **count 1**, and **count 2**.
-
-1. Select **Update configuration**.
-
-1. Select **Save**. The updated dashboard displays counts for only your two checkout zones, which are based on the simulated occupancy sensor.
-
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/people-traffic-two-lanes.png" alt-text="Azure IoT Central people traffic two lanes.":::
-
-## Add property tiles to display device details
-
-Application operators use the dashboard to manage devices, and monitor status. Add a tile for each RuuviTag to enable operators to view the software version.
-
-To add a property tile for each RuuviTag:
- 1. Select **Edit**.
-1. Select `RuuviTag` in the **Device template** list.
-
-1. Select a **Device instance** of one of the two RuuviTag sensors. In the example Contoso store, select `Zone 1 Ruuvi` to create a telemetry tile for Zone 1.
-
-1. Select **Properties > Software version**.
-
-1. Select **Combine**.
-
-1. Select **Configure** on the newly created tile titled **Software version**.
-
-1. Change the **Title** to *Ruuvi 1 software version*.
+1. Select **Edit** on the **People traffic** tile.
-1. Select **Update configuration**.
+1. Remove the **count3** telemetry.
-1. Drag the tile titled **Ruuv 1 software version** below the **Zone 1 environment** tile.
-
-1. Repeat the previous steps to create a software version property tile for the second RuuviTag.
+1. Select **Update**.
-1. Select **Save**.
+1. Select **Save**. The updated dashboard displays counts for only your two checkout zones, which are based on the simulated occupancy sensor.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/add-ruuvi-property-tiles.png" alt-text="Azure IoT Central RuuviTag property tiles.":::
## Add command tiles to run commands
-Application operators also use the dashboard to manage devices by running commands. You can add command tiles to the dashboard that will execute predefined commands on a device. In this section, you add a command tile to enable operators to reboot the Rigado gateway.
+Application operators also use the dashboard to manage devices by running commands. You can add command tiles to the dashboard that will execute predefined commands on a device. In this section, you add a command tile to enable operators to reboot the Rigado gateway.
To add a command tile to reboot the gateway:
-1. Select **Edit**.
+1. Select **Edit**.
-1. Select `C500` in the **Device template** list. It is the template for the Rigado C500 gateway.
+1. Select `C500` in the **Device template** list. It is the template for the Rigado C500 gateway.
1. Select the gateway instance in **Device instance**.
-1. Select **Command > Reboot** and drag it into the dashboard beside the store map.
+1. Select the **Reboot** command.
+
+1. Select **Add tile**.
-1. Select **Save**.
+1. Select **Save**.
-1. View your completed Contoso dashboard.
+1. View your completed Contoso dashboard.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png" alt-text="Azure IoT Central complete dashboard customization.":::
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png" alt-text="Screenshot showing the completed in-store analytics application dashboard." lightbox="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png":::
1. Optionally, select the **Reboot** tile to run the reboot command on your gateway.
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Before you create the logic app, you need the device IDs of the two RuuviTag sen
1. Sign in to your **In-store analytics - checkout** IoT Central application. 1. Select **Devices** in the left pane. Then select **RuuviTag**.
-1. Make a note of the **Device IDs**. In the following screenshot, the IDs are **p7g7h8qqax** and **2ngij10dibe**:
+1. Make a note of the **Device IDs**. In the following screenshot, the IDs are **8r6vfyiv1x** and **1rvfk4ymk6z**:
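If you'd rather collect the device IDs from a script than from the UI, here's a hedged sketch that uses the `azure-iot` CLI extension; the app ID placeholder and the JMESPath query are assumptions to adapt to your application:

```azurecli
# List the devices in the IoT Central application and print their IDs and display names.
# <app-id> is the ID of your In-store analytics - checkout application.
az iot central device list \
    --app-id <app-id> \
    --query "[].{id:id, name:displayName}" \
    --output table
```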
The following steps show you how to create the logic app in the Azure portal:
Add four card tiles to show the queue length and dwell time for the two checkout
Resize and rearrange the tiles on your dashboard to look like the following screenshot: You could add some graphics resources to further customize the dashboard: ## Clean up resources
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
The benefits of a connected logistics solution include:
- Geo-fencing, route optimization, fleet management, and vehicle analytics. - Forecasting for predictable departure and arrival of shipments. *IoT tags (1)* provide physical, ambient, and environmental sensor capabilities such as temperature, humidity, shock, tilt, and light. IoT tags typically connect to a gateway device through Zigbee (802.15.4). Tags are less expensive sensors and can be discarded at the end of a typical logistics journey to avoid challenges with reverse logistics.
Create the application using the following steps:
1. **Create app** opens the **New application** form. Enter the following details: - * **Application name**: you can use default suggested name or enter your friendly application name. * **URL**: you can use suggested default URL or enter your friendly unique memorable URL. * **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources.
The following sections walk you through the key features of the application.
### Dashboard
-After deploying the application template, your default dashboard is a connected logistics operator focused portal. Northwind Trader is a fictitious logistics provider managing a cargo fleet at sea and on land. In this dashboard, you see two different gateways providing telemetry from shipments, along with associated commands, jobs, and actions.
+After you deploy the application, your default dashboard is a connected logistics operator focused portal. Northwind Trader is a fictitious logistics provider managing a cargo fleet at sea and on land. In this dashboard, you see two different gateways providing telemetry from shipments, along with associated commands, jobs, and actions.
This dashboard is pre-configured to show the critical logistics device operations activity.
The dashboard enables two different gateway device management operations:
* View the logistics routes for truck shipments and the [location](../core/howto-use-location-data.md) details of ocean shipments. * View the gateway status and other relevant information.-- * You can track the total number of gateways, active, and unknown tags. * You can do device management operations such as: update firmware, disable and enable sensors, update a sensor threshold, update telemetry intervals, and update device service contracts. * View device battery consumption. #### Device Template
Select **Device templates** to see the gateway capability model. A capability mo
**Gateway Telemetry & Property** - This interface defines all the telemetry related to sensors, location, and device information. The interface also defines device twin property capabilities such as sensor thresholds and update intervals.
+**Gateway Commands** - This interface organizes all the gateway command capabilities.
-**Gateway Commands** - This interface organizes all the gateway command capabilities:
- ### Rules
Select the **Rules** tab to see the rules in this application template. These rules
**Lost gateway alert**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery, loss of connectivity, or device damage. ### Jobs Select the **Jobs** tab to create the jobs in this application. The following screenshot shows an example of jobs created. You can use jobs to do application-wide operations. The jobs in this application use device commands and twin capabilities to do tasks such as disabling specific sensors across all the gateways or modifying the sensor threshold depending on the shipment mode and route:
You can use jobs to do application-wide operations. The jobs in this application
## Clean up resources
-If you're not going to continue to use this application, delete the application template by visiting **Application** > **Management** and select **Delete**.
- ## Next steps
-Learn more about :
+Learn more about:
> [!div class="nextstepaction"] > [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
The benefits of a digital distribution center include:
- Efficient order tracking. - Reduced costs, improved productivity, and optimized usage. ### Video cameras (1)
Video cameras are the primary sensors in this digitally connected enterprise-sca
### Azure IoT Edge gateway (2)
-The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge and the camera stream is processed by analytics pipeline. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response time, low-bandwidth consumption, which results in low latency for rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
+The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge and the camera stream is processed by analytics pipeline. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response time, low-bandwidth consumption, which results in low latency for rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
### Device management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device and Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solutions to achieve a digital feedback loop in distribution centers.
+Azure IoT Central is a solution development platform that simplifies IoT device and Azure IoT Edge gateway connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solution to achieve a digital feedback loop in distribution centers.
### Business insights and actions using data egress (5,6)
Create the application using the following steps:
1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
-1. Select **Create app** under **digital distribution center**.
+1. Select **Create app** under **Digital distribution center**.
To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md).
-## Walk through the application
+## Walk through the application
The following sections walk you through the key features of the application: ### Dashboard
-The default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
+The default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
In this dashboard, you'll see one gateway and one camera acting as an IoT device. The gateway provides telemetry about packages, such as valid, invalid, and unidentified counts and package size, along with associated device twin properties. All downstream commands are executed at IoT devices, such as a camera. This dashboard is pre-configured to showcase the critical distribution center device operations activity.
-The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device. You can:
+The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device. You can:
* Complete gateway command and control tasks. * Manage all the cameras in the solution.
-### Device Template
+### Device templates
-Click on the Device templates tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces **Camera** and **Digital Distribution Gateway**
+Navigate to **Device templates**. The application has two device templates:
+* **Camera** - Organizes all the camera-specific command capabilities.
-**Camera** - This interface organizes all the camera-specific command capabilities
+* **Digital Distribution Gateway** - Represents all the telemetry coming from camera, cloud defined device twin properties and gateway info.
-
-**Digital Distribution Gateway** - This interface represents all the telemetry coming from camera, cloud defined device twin properties and gateway info.
--
-### Gateway Commands
-
-This interface organizes all the gateway command capabilities
- ### Rules
Select the rules tab to see two different rules that exist in this application t
**Too many invalid packages alert** - This rule is triggered when the camera detects a high number of invalid packages flowing through the conveyor system.
-**Large package** - This rule will trigger if the camera detects huge package that can't be inspected for the quality.
+**Large package** - This rule will trigger if the camera detects a huge package that can't be inspected for quality.
## Clean up resources
-If you're not going to continue to use this application, delete the application template by visiting **Application** > **Management** and click **Delete**.
- ## Next steps
-Learn more about :
+Learn more about:
> [!div class="nextstepaction"] > [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
The benefits of smart inventory management include:
This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices. ### RFID tags (1)
Azure IoT Edge server provides a place to preprocess that data locally before se
### Device management with IoT Central
-Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solutions to achieve a digital feedback loop in inventory management.
+Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solution to achieve a digital feedback loop in inventory management.
### Business insights and actions using data egress (3)
To learn more, see [Create an IoT Central application](../core/howto-create-iot-
The following sections walk you through the key features of the application:
-### Dashboard
+### Dashboard
+
+After you deploy the application, your default dashboard is a smart inventory management operator focused portal. Northwind Trader is a fictitious smart inventory provider managing a warehouse with Bluetooth low energy (BLE) and a retail store with Radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory along with associated commands, jobs, and actions that you can perform.
-After successfully deploying the application template, your default dashboard is a smart inventory management operator focused portal. Northwind Trader is a fictitious smart inventory provider managing warehouse with Bluetooth low energy (BLE) and retail store with Radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory along with associated commands, jobs, and actions that you can perform.
This dashboard is pre-configured to showcase the critical smart inventory management device operations activity. The dashboard is logically divided between two different gateway device management operations: * The warehouse is deployed with a fixed BLE gateway and BLE tags on pallets to track and trace inventory at a larger facility. * The retail store is implemented with a fixed RFID gateway and RFID tags at an individual item level to track and trace the stock in a store outlet. * View the gateway [location](../core/howto-use-location-data.md), status, and related details.-
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-1.png" alt-text="Screenshot showing the top half of the smart inventory management dashboard.":::
- * You can easily track the total number of gateways, active, and unknown tags. * You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals and update device service contracts. * Gateway devices can perform on-demand inventory management with a complete or incremental scan.
- :::image type="content" source="media/tutorial-iot-central-smart-inventory-management/smart-inventory-management-dashboard-2.png" alt-text="Screenshot showing the bottom half of the smart inventory management dashboard.":::
### Device Template
-Click on the Device templates tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces **Gateway Telemetry and Property** and **Gateway Commands**
+Select the Device templates tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces: **Gateway Telemetry and Property** and **Gateway Commands**.
**Gateway Telemetry and Property** - This interface represents all the telemetry related to sensors, location, device info, and device twin property capability such as gateway thresholds and update intervals. - **Gateway Commands** - This interface organizes all the gateway command capabilities ### Rules
Select the rules tab to see two different rules that exist in this application t
**Unknown tags**: It's critical to track every RFID and BLE tag associated with an asset. If the gateway is detecting too many unknown tags, it's an indication of synchronization challenges with tag sourcing applications. ## Clean up resources
-If you're not going to continue to use this application, delete the application template by visiting **Application** > **Management** and click **Delete**.
- ## Next steps
-Learn more about :
+Learn more about:
> [!div class="nextstepaction"] > [IoT Central data integration](../core/overview-iot-central-solution-builder.md)-
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
The application template enables you to:
- Transform the environmental conditions within your fulfillment center into insights that the retail warehouse team can use. - Export the aggregated insights into existing or new business applications for the benefit of the retail staff members.
-![Azure IoT Central Store Analytics](./media/tutorial-micro-fulfillment-center-app/micro-fulfillment-center-architecture-frame.png)
### Robotic carriers (1)
An IoT solution starts with a set of sensors capturing meaningful signals from w
### Gateway devices (2)
-Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
+Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
### IoT Central application
Create the application using the following steps:
1. Select **Create app** under **micro-fulfillment center**.
-## Walk through the application
+## Walk through the application
The following sections walk you through the key features of the application:
From the dashboard, you can:
* See device telemetry, such as the number of picks, the number of orders processed, and properties, such as the structure system status. * View the floor plan and location of the robotic carriers within the fulfillment structure. * Trigger commands, such as resetting the control system, updating the carrier's firmware, and reconfiguring the network.-
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-1.png" alt-text="Screenshot of the top half of the Northwind Traders micro-fulfillment center dashboard.":::
- * See an example of the dashboard that an operator can use to monitor conditions within the fulfillment center. * Monitor the health of the payloads that are running on the gateway device within the fulfillment center.
- :::image type="content" source="media/tutorial-micro-fulfillment-center-app/mfc-dashboard-2.png" alt-text="Screenshot of the bottom half of the Northwind Traders micro-fulfillment center dashboard.":::
-- ### Device template If you select the device templates tab, you see that there are two different device types that are part of the template: * **Robotic Carrier**: This device template represents the definition for a functioning robotic carrier that has been deployed in the fulfillment structure, and is performing appropriate storage and retrieval operations. If you select the template, you see that the robot is sending device data, such as temperature and axis position, and properties like the robotic carrier status.
-* **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environment condition, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment.
-
+* **Structure Condition Monitoring**: This device template represents a device collection that allows you to monitor environmental conditions, as well as the gateway device hosting various edge workloads to power your fulfillment center. The device sends telemetry data, such as the temperature, the number of picks, and the number of orders. It also sends information about the state and health of the compute workloads running in your environment.
If you select the device groups tab, you also see that these device templates automatically have device groups created for them.
On the **Rules** tab, you see a sample rule that exists in the application templ
Use the sample rule as inspiration to define rules that are more appropriate for your business functions. ### Clean up resources
-If you're not going to continue to use this application, delete the application template. Go to **Application** > **Management**, and select **Delete**.
- ## Next steps
-Learn more about :
+Learn more about:
> [!div class="nextstepaction"] > [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-mqtt-support.md
Title: Understand Azure IoT Device Provisioning Service MQTT support | Microsoft Docs description: Developer guide - support for devices connecting to the Azure IoT Device Provisioning Service (DPS) device-facing endpoint using the MQTT protocol. -+ Last updated 02/25/2022 + # Communicate with your DPS using the MQTT protocol
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
description: Azure quickstart - Learn how to create an Azure IoT Hub Device Prov
Last updated 08/17/2022 -+ + # Quickstart: Set up the IoT Hub Device Provisioning Service (DPS) with Bicep
key-vault Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-scenarios.md
tags: azure-resource-manager
Previously updated : 06/13/2020 Last updated : 11/14/2022
key-vault Create Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-scenarios.md
tags: azure-resource-manager
Previously updated : 01/07/2019 Last updated : 11/14/2022
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate.md
tags: azure-resource-manager
Previously updated : 01/07/2019 Last updated : 11/14/2022
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-export-certificate.md
Previously updated : 08/11/2020 Last updated : 11/14/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure.
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
Previously updated : 01/27/2021 Last updated : 11/14/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
description: Learn about the the Azure Key Vault Certificate client library for
Previously updated : 12/18/2020 Last updated : 11/14/2022
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md
Title: Quickstart - Azure Key Vault certificates client library for .NET (versio
description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library (version 4) Previously updated : 09/23/2020 Last updated : 11/14/2022
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-portal.md
Previously updated : 03/24/2020 Last updated : 11/14/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-powershell.md
Previously updated : 01/27/2021 Last updated : 11/14/2022 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
### Limitations * IP based backends can only be used for Standard Load Balancers
- * Limit of 100 IP addresses in the backend pool for IP based LBs
 * The backend resources must be in the same virtual network as the load balancer for IP based LBs * A load balancer with IP based Backend Pool can't function as a Private Link service * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in an IP based backend pool
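As an illustration of IP based backend pool management from the CLI, the following sketch adds an address directly to a backend pool by IP. The resource names and address are placeholders, and the parameters may vary slightly by CLI version:

```azurecli
# Add an IP address from the load balancer's virtual network to an IP based backend pool.
# The resource group, load balancer, pool, virtual network, and IP address are placeholders.
az network lb address-pool address add \
    --resource-group myResourceGroup \
    --lb-name myLoadBalancer \
    --pool-name myBackendPool \
    --name myBackendAddress \
    --vnet myVNet \
    --ip-address 10.0.0.4
```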
In this article, you learned about Azure Load Balancer backend pool management a
Learn more about [Azure Load Balancer](load-balancer-overview.md).
-Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
+Review the [REST API](/rest/api/load-balancer/loadbalancerbackendaddresspools/createorupdate) for IP based backend pool management.
load-balancer Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-overview.md
For pricing, see [Load Balancer pricing](https://azure.microsoft.com/pricing/det
* Gateway Load Balancer doesn't work with the Global Load Balancer tier. * Cross-tenant chaining isn't supported through the Azure portal. * Gateway Load Balancer doesn't currently support IPv6
-* Currently, Gateway Load Balancer frontends configured in Portal will automatically be created as no-zone. To create a zone-redundant frontend, use an alternative client such as ARM/CLI/PS.
## Next steps
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-batch-endpoint.md
---+++ Last updated 11/04/2022 #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
You can follow along this sample in the following notebooks. In the cloned repos
First, let's connect to the Azure Machine Learning workspace where we're going to work.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
```azurecli
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.
workspace = "<workspace>"
ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group, workspace) ```
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
Open the [Azure ML studio portal](https://ml.azure.com) and log in using your credentials.
Batch endpoints run on compute clusters. They support both [Azure Machine Learni
Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_compute" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python compute_name = "batch-cluster"
compute_cluster = AmlCompute(name=compute_name, description="amlcompute", min_in
ml_client.begin_create_or_update(compute_cluster) ```
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
*Create a compute cluster as explained in the following tutorial [Create an Azure Machine Learning compute cluster](../how-to-create-attach-compute-cluster.md?tabs=azure-studio).*
Batch Deployments can only deploy models registered in the workspace. You can sk
> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
```azurecli
MODEL_NAME='mnist'
az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
```
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python model_name = 'mnist'
model = ml_client.models.create_or_update(
) ```
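A filled-in sketch of the registration call, mirroring the CLI example (model name `mnist`, local path `./mnist/model/`):

```python
# Minimal sketch: register the local MNIST model files as a custom model asset.
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

model_name = "mnist"
model = ml_client.models.create_or_update(
    Model(name=model_name, path="./mnist/model/", type=AssetTypes.CUSTOM_MODEL)
)
```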
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Models__ tab on the side menu. 1. Click on __Register__ > __From local files__.
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
1. Decide on the name of the endpoint. The name of the endpoint will end up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
ENDPOINT_NAME="mnist-batch" ```
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
endpoint_name="mnist-batch" ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
*You will configure the name of the endpoint later in the creation wizard.* 1. Configure your batch endpoint
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`.
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
| `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. | | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
```python # create a batch endpoint
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
| `auth_mode` | The authentication method for the batch endpoint. Currently only Azure Active Directory token-based authentication (`aad_token`) is supported. | | `defaults.deployment_name` | The name of the deployment that will serve as the default deployment for the endpoint. |
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
*You will create the endpoint in the same step you create the deployment.* 1. Create the endpoint:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment. :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="create_batch_endpoint" :::
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
```python
ml_client.batch_endpoints.begin_create_or_update(endpoint)
```
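For completeness, a sketch of the endpoint object that the call above creates, assuming the `mnist-batch` name chosen earlier; `auth_mode` is spelled out even though `aad_token` is currently the only supported value:

```python
# Minimal sketch: define the batch endpoint before calling begin_create_or_update.
from azure.ai.ml.entities import BatchEndpoint

endpoint = BatchEndpoint(
    name=endpoint_name,                        # e.g. "mnist-batch", defined earlier
    description="Batch endpoint for MNIST scoring",
    auth_mode="aad_token",                     # only Azure AD token auth is supported
)
ml_client.batch_endpoints.begin_create_or_update(endpoint)
```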
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
*You will create the endpoint in the same step in which you create the deployment later.*
A deployment is a set of resources required for hosting the model that does the
1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You will also need to add the library `azureml-core` as it is required for batch deployments to work.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
*No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
Let's get a reference to the environment:
A deployment is a set of resources required for hosting the model that does the
) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Environments__ tab on the side menu. 1. Select the tab __Custom environments__ > __Create__.
A deployment is a set of resources required for hosting the model that does the
1. Create a deployment definition
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
__mnist-torch-deployment.yml__
A deployment is a set of resources required for hosting the model that does the
| `error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. | | `logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
```python deployment = BatchDeployment(
A deployment is a set of resources required for hosting the model that does the
* `environment_variables`- Dictionary of environment variable name-value pairs to set for each batch scoring job. * `logging_level`- The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`.
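A fuller sketch of the truncated `BatchDeployment` above, using the scoring-script path and tuning values from the MNIST sample as assumptions; adjust names and paths to your own repository layout:

```python
# Minimal sketch: a batch deployment definition (names and paths are illustrative).
from azure.ai.ml.entities import BatchDeployment, BatchRetrySettings
from azure.ai.ml.constants import BatchDeploymentOutputAction

deployment = BatchDeployment(
    name="mnist-torch-dpl",
    description="A deployment using Torch to solve the MNIST classification example.",
    endpoint_name=endpoint_name,
    model=model,
    code_path="./mnist/code/",            # assumption: folder holding the scoring script
    scoring_script="batch_driver.py",     # assumption: entry script name
    environment=env,                      # environment created in the previous step
    compute=compute_name,                 # "batch-cluster"
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
    retry_settings=BatchRetrySettings(max_retries=3, timeout=30),
    logging_level="info",
)
```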
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__ > __Create__.
A deployment is a set of resources required for hosting the model that does the
1. Create the deployment:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
A deployment is a set of resources required for hosting the model that does the
> [!TIP] > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#adding-deployments-to-an-endpoint) section.
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
A deployment is a set of resources required for hosting the model that does the
ml_client.batch_endpoints.begin_create_or_update(endpoint) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
In the wizard, click on __Create__ to start the deployment process.
A deployment is a set of resources required for hosting the model that does the
1. Check batch endpoint and deployment details.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
Use `show` to check endpoint and deployment details. To check a batch deployment, run the following code: :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_batch_deployment_detail" :::
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
To check a batch deployment, run the following code:
A deployment is a set of resources required for hosting the model that does the
ml_client.batch_deployments.get(name=deployment.name, endpoint_name=endpoint.name) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
A deployment is a set of resources required for hosting the model that does the
Invoking a batch endpoint triggers a batch scoring job. A job `name` will be returned from the invoke response and can be used to track the batch scoring progress. The batch scoring job runs for a period of time. It splits the entire input into multiple `mini_batch` units and processes them in parallel on the compute cluster. The batch scoring job outputs will be stored in cloud storage, either in the workspace's default blob storage, or the storage you specified.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python job = ml_client.batch_endpoints.invoke(
job = ml_client.batch_endpoints.invoke(
) ```
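A sketch of a complete invocation, assuming a publicly readable folder of sample images as input; the `input` keyword follows the sample this article is based on (newer SDK versions may also accept an `inputs` dictionary):

```python
# Minimal sketch: trigger a batch scoring job against the default deployment.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

input = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://pipelinedata.blob.core.windows.net/sampledata/mnist",  # assumption: sample MNIST images
)
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    input=input,
)
print(job.name)  # use the job name to track progress
```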
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
Batch endpoints support reading files or folders that are located in different l
The batch scoring results are by default stored in the workspace's default blob store within a folder named by job name (a system-generated GUID). You can configure where to store the scoring outputs when you invoke the batch endpoint.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
Use `--output-path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `--output-path` is the same as for `--input` when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `--set output_file_name=<your-file-name>` to configure a new output file name. :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job_configure_output_settings" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
Use `output_path` to configure any folder in an Azure Machine Learning registered datastore. The syntax for `output_path` is the same as for the input when you're specifying a folder, that is, `azureml://datastores/<datastore-name>/paths/<path-on-datastore>/`. Use `output_file_name=<your-file-name>` to configure a new output file name.
job = ml_client.batch_endpoints.invoke(
) ```
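A sketch of the call with the output settings described above, reusing the `input` defined for the earlier invocation and assuming `output_path` and `output_file_name` are passed as keyword arguments to `invoke` (check your SDK version's reference if the call signature differs):

```python
# Minimal sketch: redirect scoring outputs to a named datastore folder and file.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    input=input,
    output_path="azureml://datastores/workspaceblobstore/paths/mnist-scores/",  # assumption: target folder
    output_file_name="predictions.csv",
)
```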
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
Some settings can be overwritten when invoking to make best use of the compute res
* Use __mini-batch size__ to overwrite the number of files to include in each mini-batch. The number of mini batches is decided by the total input file count and mini_batch_size. A smaller mini_batch_size generates more mini batches. Mini batches can be run in parallel, but there might be extra scheduling and invocation overhead. * Other settings can also be overwritten, including __max retries__, __timeout__, and __error threshold__. These settings might impact the end-to-end batch scoring time for different workloads.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job_overwrite" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python job = ml_client.batch_endpoints.invoke(
job = ml_client.batch_endpoints.invoke(
) ```
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
job = ml_client.batch_endpoints.invoke(
Batch scoring jobs usually take some time to process the entire set of inputs.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`. :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="check_job_status" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
The following code checks the job status and outputs a link to the Azure ML studio for further details.
The following code checks the job status and outputs a link to the Azure ML stud
ml_client.jobs.get(job.name) ```
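If you prefer to block until the job finishes, a small sketch combining `get` with `stream` (which waits for completion while printing the job logs):

```python
# Minimal sketch: inspect the job, then wait for it to complete while streaming logs.
returned_job = ml_client.jobs.get(job.name)
print(returned_job.studio_url)      # link to the job in Azure ML studio

ml_client.jobs.stream(job.name)     # blocks until the batch scoring job finishes
```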
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
ml_client.jobs.get(job.name)
### Check batch scoring results
-Follow the below steps to view the scoring results in Azure Storage Explorer when the job is completed:
+Follow the steps below to view the scoring results in Azure Storage Explorer when the job is completed:
1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
Follow the below steps to view the scoring results in Azure Storage Explorer whe
1. Select the __Outputs + logs__ tab and then select **Show data outputs**. 1. From __Data outputs__, select the icon to open __Storage Explorer__.
+ :::image type="content" source="../media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="../media/how-to-use-batch-endpoint/view-data-outputs.png" :::
-The scoring results in Storage Explorer are similar to the following sample page:
+ The scoring results in Storage Explorer are similar to the following sample page:
+ :::image type="content" source="../media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="../media/how-to-use-batch-endpoint/scoring-view.png":::
## Adding deployments to an endpoint
In this example, you will learn how to add a second deployment __that solves the
1. Create an environment where your batch deployment will run. Include in the environment any dependency your code requires for running. You will also need to add the library `azureml-core` as it is required for batch deployments to work. The following environment definition has the required libraries to run a model with TensorFlow.
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
*No extra step is required for the Azure ML CLI. The environment definition will be included in the deployment file as an anonymous environment.*
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
Let's get a reference to the environment:
In this example, you will learn how to add a second deployment __that solves the
) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Environments__ tab on the side menu. 1. Select the tab __Custom environments__ > __Create__.
In this example, you will learn how to add a second deployment __that solves the
3. Create a deployment definition
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
__mnist-keras-deployment__ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/mnist-keras-deployment.yml":::
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
```python deployment = BatchDeployment(
In this example, you will learn how to add a second deployment __that solves the
) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
In this example, you will learn how to add a second deployment __that solves the
1. Create the deployment:
- # [Azure ML CLI](#tab/cli)
+ # [Azure CLI](#tab/azure-cli)
Run the following code to create a batch deployment under the batch endpoint and set it as the default deployment.
In this example, you will learn how to add a second deployment __that solves the
> [!TIP] > The `--set-default` parameter is missing in this case. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later.
- # [Azure ML SDK for Python](#tab/sdk)
+ # [Python](#tab/python)
Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
In this example, you will learn how to add a second deployment __that solves the
ml_client.batch_deployments.begin_create_or_update(deployment) ```
- # [studio](#tab/studio)
+ # [Studio](#tab/azure-studio)
In the wizard, click on __Create__ to start the deployment process.
In this example, you will learn how to add a second deployment __that solves the
To test the new non-default deployment, you will need to know the name of the deployment you want to run.
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="test_new_deployment" ::: Notice `--deployment-name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python job = ml_client.batch_endpoints.invoke(
job = ml_client.batch_endpoints.invoke(
Notice `deployment_name` is used to specify the deployment we want to execute. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
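A sketch of the same call with an explicit deployment, assuming the non-default deployment created above is named `mnist-keras-dpl` and reusing the earlier `input`:

```python
# Minimal sketch: invoke a specific (non-default) deployment of the endpoint.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint_name,
    deployment_name="mnist-keras-dpl",   # assumption: name given to the Keras deployment
    input=input,
)
```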
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
Notice `deployment_name` is used to specify the deployment we want to execute. T
Although you can invoke a specific deployment inside an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. That deployment is called the "default" deployment. This lets you change the default deployment, and hence the model serving it, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
:::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="update_default_deployment" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
```python
endpoint = ml_client.batch_endpoints.get(endpoint_name)
endpoint.defaults.deployment_name = deployment.name
ml_client.batch_endpoints.begin_create_or_update(endpoint)
```
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
ml_client.batch_endpoints.begin_create_or_update(endpoint)
## Delete the batch endpoint and the deployment
-# [Azure ML CLI](#tab/cli)
+# [Azure CLI](#tab/azure-cli)
If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion.
Run the following code to delete the batch endpoint and all the underlying deplo
::: code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="delete_endpoint" :::
-# [Azure ML SDK for Python](#tab/sdk)
+# [Python](#tab/python)
Delete endpoint:
Delete compute: optional, as you may choose to reuse your compute cluster with l
ml_client.compute.begin_delete(name=compute_name) ```
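A sketch of the full clean-up in Python, assuming you no longer need the endpoint (deleting it removes its deployments) or the compute cluster:

```python
# Minimal sketch: delete the batch endpoint (and its deployments), then the compute.
ml_client.batch_endpoints.begin_delete(name=endpoint_name)

# Optional: keep the cluster if other workloads reuse it.
ml_client.compute.begin_delete(name=compute_name)
```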
-# [studio](#tab/studio)
+# [Studio](#tab/azure-studio)
1. Navigate to the __Endpoints__ tab on the side menu. 1. Select the tab __Batch endpoints__.
machine-learning How To Create Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-data-assets.md
paths:
- pattern: ./*.txt transformations: - read_delimited:
- delimiter: ,
+ delimiter: ','
encoding: ascii header: all_files_same_headers ```
machine-learning Migrate To V2 Resource Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-resource-datastore.md
blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
subscription_id=subscription_id, resource_group_name=resource_group)
- datastore = ml_client.datastores.get(datastore_name='your datastore name')
+ datastore = ml_client.datastores.get(name='your datastore name')
``` ## Mapping of key functionality in SDK v1 and SDK v2
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
paths:
- pattern: ./*.txt transformations: - read_delimited:
- delimiter: ,
+ delimiter: ','
encoding: ascii header: all_files_same_headers - columns: [Trip_Pickup_DateTime, Trip_Dropoff_DateTime]
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/determine-your-listing-type.md
Customers use the **Test Drive** button on your offer's listing page to get acce
## Contact Me
-This option is a simple listing of your application or service. Customers use the **Contact Me** button on your offer's listing page to request that you connect with them about your offer.
-
+This option is a simple listing of your application or service. Customers use the **Contact Me** button on your offer's listing page to ask that you connect with them about your offer. They can write notes to express their needs, so make sure to read them and follow up with valid leads. See [lead management](~/partner-center-portal/commercial-marketplace-get-customer-leads.md).
## Get It Now- This listing option includes transactable offers (subscriptions or user-based pricing), bring your own license (BYOL) offers, and **Get It Now (Free)**. Transactable offers are sold through the commercial marketplace. Microsoft is responsible for billing and collections. Customers use the **Get It Now** button to get the offer. > [!NOTE]
Non-transactable offers earn benefits based on whether or not a free trial is at
## Next steps - To choose an offer type, see [Publishing guide by offer type](publisher-guide-by-offer-type.md).++
marketplace Commercial Marketplace Get Customer Leads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-get-customer-leads.md
Leads are customers interested in or deploying your offers from [Microsoft AppSo
Here are places where a lead is generated: -- A customer consents to sharing their information after they select **Contact me** from the commercial marketplace. This lead is an *initial interest* lead. We share information with you about the customer who has expressed interest in getting your product. The lead is the top of the acquisition funnel.-
- ![Dynamics 365 Contact Me](./media/commercial-marketplace-get-customer-leads/dynamics-365-contact-me.png)
+- A customer consents to sharing their information after they select **Contact me** from the commercial marketplace. This lead is an *initial interest* lead. We share information with you about the customer who has expressed interest in getting your product, including notes they wrote to express their needs. The lead is the top of the acquisition funnel.
+ ![Dynamics 365 Contact Me](./media/commercial-marketplace-get-customer-leads/dynamics-365-contact-me.png)
- A customer selects **Get It Now** (or selects **Create** in the [Azure portal](https://portal.azure.com/)) to get your offer. This lead is an *active* lead. We share information with you about the customer who has started to deploy your product.
In addition to Partner Center, you can have your offer's leads sent to your Cu
## Understand lead data
+If you're not using Partner Center, or an existing CRM system integration, here's how to understand the data in leads:
Each lead you receive during the customer acquisition process has data in specific fields. The first field to look out for is the `LeadSource` field, which follows this format: **Source-Action** | **Offer**.- **Sources**: The value for this field is populated based on the marketplace that generated the lead. Possible values are `"AzureMarketplace"`, `"AzurePortal"`, and `"AppSource (SPZA)"`. **Actions**: The value for this field is populated based on the action the customer took in the marketplace that generated the lead.
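As an illustration only (not part of any connector), a tiny sketch that splits a `LeadSource` value into its source, action, and offer parts; the sample value is hypothetical:

```python
# Minimal sketch: parse a LeadSource value of the form "Source-Action | Offer".
def parse_lead_source(lead_source: str) -> dict:
    source_action, _, offer = lead_source.partition("|")
    source, _, action = source_action.strip().partition("-")
    return {"source": source.strip(), "action": action.strip(), "offer": offer.strip()}

# Hypothetical example value:
print(parse_lead_source("AzureMarketplace-Get It Now | Contoso Analytics"))
# {'source': 'AzureMarketplace', 'action': 'Get It Now', 'offer': 'Contoso Analytics'}
```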
Here are some recommendations for driving leads through your sales cycle:
- **Process**: Define a clear sales process, with milestones, analytics, and clear team ownership. - **Qualification**: Define prerequisites, which indicate whether a lead was fully qualified. Make sure sales or marketing representatives qualify leads carefully before taking them through the full sales process.-- **Follow-up**: Don't forget to follow up within 24 hours. You will get the lead in your CRM of choice immediately after the customer deploys a test drive; email them within while they are still warm. Request scheduling a phone call to better understand if your product is a good solution for their problem. Expect the typical transaction to require numerous follow-up calls.
+- **Follow-up**: Acting quickly on leads yields the best results, and customers submitting Contact me leads expect a swift response, so don't forget to follow up within 24 hours. You will get the lead in your CRM of choice immediately after the customer deploys a test drive; email them while they are still warm. Request scheduling a phone call to better understand if your product is a good solution for their problem. Expect the typical transaction to require numerous follow-up calls.
- **Nurture**: Nurture your leads to get you on the way to a higher profit margin. Check in, but don't bombard them. We recommend you email leads at least a few times before you close them out; don't give up after the first attempt. Remember, these customers directly engaged with your product and spent time in a free trial; they are great prospects. After the technical setup is in place, incorporate these leads into your current sales and marketing strategy and operational processes.
marketplace Commercial Marketplace Lead Management Instructions Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md
Last updated 06/08/2022
# Use Marketo to manage commercial marketplace leads
+> [!IMPORTANT]
+> The Marketo connector is not currently working due to a change in the Marketo platform. Use Leads from the Referrals workspace.
+ This article describes how to set up your Marketo CRM system to process sales leads from your offers in Microsoft AppSource and Azure Marketplace. ## Set up your Marketo CRM system
migrate Agent Based Migration Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/agent-based-migration-architecture.md
Title: Agent-based migration in Azure Migrate Server Migration description: Provides an overview of agent-based VMware VM migration in Azure Migrate.--++ ms. Last updated 02/17/2020
migrate How To Automate Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md
Title: Automate agentless VMware migrations in Azure Migrate description: Describes how to use scripts to migrate a large number of VMware VMs in Azure Migrate--++ ms. Last updated 5/2/2022
migrate Migrate Support Matrix Physical Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical-migration.md
Title: Support for physical server migration in Azure Migrate description: Learn about support for physical server migration in Azure Migrate.--++ ms. Last updated 06/14/2020
migrate Prepare Windows Server 2003 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-windows-server-2003-migration.md
Title: Prepare Windows Server 2003 servers for migration with Azure Migrate description: Learn how to prepare Windows Server 2003 servers for migration with Azure Migrate.--++ ms. Last updated 05/27/2020
migrate Quickstart Create Migrate Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/quickstart-create-migrate-project.md
Title: Quickstart to create an Azure Migrate project using an Azure Resource Manager template. description: In this quickstart, you learn how to create an Azure Migrate project using an Azure Resource Manager template (ARM template). Last updated 04/23/2021-+ -+
migrate Tutorial App Containerization Aspnet App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md
Title: ASP.NET app containerization and migration to App Service description: This tutorial demonstrates how to containerize ASP.NET applications and migrate them to Azure App Service. -+ Last updated 07/02/2021-+ # ASP.NET app containerization and migration to Azure App Service
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
Title: Azure App Containerization ASP.NET; Containerization and migration of ASP.NET applications to Azure Kubernetes. description: Tutorial:Containerize & migrate ASP.NET applications to Azure Kubernetes Service. -+ Last updated 6/30/2021-+ # ASP.NET app containerization and migration to Azure Kubernetes Service
migrate Tutorial App Containerization Azure Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-azure-pipeline.md
Title: Continuous Deployment for containerized applications with Azure DevOps description: Tutorial:Continuous Deployment for containerized applications with Azure DevOps-+ Last updated 11/08/2021-+ # Continuous deployment for containerized applications with Azure DevOps
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Title: Containerization and migration of Java web applications to Azure App Service. description: Tutorial:Containerize & migrate Java web applications to Azure App Service. -+ Last updated 5/2/2022-+ # Java web app containerization and migration to Azure App Service
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Title: Azure App Containerization Java; Containerization and migration of Java web applications to Azure Kubernetes. description: Tutorial:Containerize & migrate Java web applications to Azure Kubernetes Service. -+ Last updated 6/30/2021-+ # Java web app containerization and migration to Azure Kubernetes Service
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
Title: Migrate Hyper-V VMs to Azure with Azure Migrate Server Migration description: Learn how to migrate on-premises Hyper-V VMs to Azure with Azure Migrate Server Migration--++ ms. Last updated 06/20/2022
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Title: Migrate machines as physical server to Azure with Azure Migrate. description: This article describes how to migrate physical machines to Azure with Azure Migrate.--++ ms. Last updated 01/02/2021
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Title: Migrate VMware vSphere VMs with agent-based Azure Migrate Server Migration description: Learn how to run an agent-based migration of VMware vSphere VMs with Azure Migrate.--++ ms. Last updated 10/04/2022
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
Title: Migrate VMware VMs to Azure (agentless) - PowerShell description: Learn how to run an agentless migration of VMware VMs with Azure Migrate through PowerShell.--++ Last updated 08/20/2021
mysql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-azure-ad-authentication.md
Title: Active Directory authentication - Azure Database for MySQL - Flexible Server Preview
+ Title: Active Directory authentication - Azure Database for MySQL - Flexible Server
description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for MySQL flexible server
-# Active Directory authentication - Azure Database for MySQL - Flexible Server Preview
+# Active Directory authentication - Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of con
## Use the steps below to configure and use Azure AD authentication 1. Select your preferred authentication method for accessing the MySQL flexible server. By default, the authentication selected will be MySQL authentication only. Select Azure Active Directory authentication only or MySQL and Azure Active Directory authentication to enable Azure AD authentication.
-2. Select the user managed identity (UMI) with the following privileges: _User.Read.All, GroupMember.Read.All_ and _Application.Read.ALL_, which can be used to configure Azure AD authentication.
+2. Select the user managed identity (UMI) with the following privileges to configure Azure AD authentication:
+ - [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information.
+ - [GroupMember.Read.All](/graph/permissions-reference#group-permissions): Allows access to Azure AD group information.
+ - [Application.Read.All](/graph/permissions-reference#application-resource-permissions): Allows access to Azure AD service principal (application) information.
+ 3. Add Azure AD Admin. It can be Azure AD Users or Groups, which will have access to Azure Database for MySQL flexible server. 4. Create database users in your database mapped to Azure AD identities. 5. Connect to your database by retrieving a token for an Azure AD identity and logging in.
Microsoft Azure Active Directory (Azure AD) authentication is a mechanism of con
## Architecture
-User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed. The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
+User-managed identities are required for Azure Active Directory authentication. When a User-Assigned Identity is linked to the flexible server, the Managed Identity Resource Provider (MSRP) issues a certificate internally to that identity, and when the managed identity is deleted, the corresponding service principal is automatically removed.
+
+The service then uses the managed identity to request access tokens for services that support Azure AD authentication. Only a User-assigned Managed Identity (UMI) is currently supported by Azure Database for MySQL-Flexible Server. For more information, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) in Azure.
The following high-level diagram summarizes how authentication works using Azure AD authentication with Azure Database for MySQL. The arrows indicate communication pathways.
The following high-level diagram summarizes how authentication works using Azure
## Administrator structure
-When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator. Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. Only one Azure AD administrator (a user or group) can be configured at a time.
+When using Azure AD authentication, there are two Administrator accounts for the MySQL server; the original MySQL administrator and the Azure AD administrator.
+
+Only the administrator based on an Azure AD account can create the first Azure AD contained database user in a user database. The Azure AD administrator login can be an Azure AD user or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. Only one Azure AD administrator (a user or group) can be configured at a time.
:::image type="content" source="media/concepts-azure-ad-authentication/azure-ad-admin-structure.jpg" alt-text="Diagram of Azure ad admin structure.":::
For guidance about how to grant and use the permissions, refer [Microsoft Graph
After you grant the permissions to the UMI, they're enabled for all servers or instances that are created with the UMI assigned as a server identity.
-To create a new Azure AD database user, you must connect as the Azure AD administrator.
-
-Any Azure AD authentication is only possible if the Azure AD admin was created for Azure Database for MySQL Flexible server. If the Azure Active Directory admin was removed from the server, existing Azure Active Directory users created previously can no longer connect to the database using their Azure Active Directory credentials.
- ## Token Validation Azure AD authentication in Azure Database for MySQL flexible server ensures that the user exists in the MySQL server, and it checks the validity of the token by validating the contents of the token. The following token validation steps are performed:
Please note that management operations, such as adding new users, are only suppo
## Additional considerations - Only one Azure AD administrator can be configured for an Azure Database for MySQL Flexible server at any time. + - Only an Azure AD administrator for MySQL can initially connect to the Azure Database for MySQL Flexible server using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users or an Azure AD group. When the administrator is a group account, it can be used by any group member, enabling multiple Azure AD administrators for the MySQL Flexible server. Using a group account as an administrator enhances manageability by allowing you to centrally add and remove group members in Azure AD without changing the users or permissions in the MySQL Flexible server. + - If a user is deleted from Azure AD, that user will no longer be able to authenticate with Azure AD, and therefore it will no longer be possible to acquire an access token for that user. In this case, although the matching user will still be in the database, it will not be possible to connect to the server with that user. > [!NOTE] > Login with the deleted Azure AD user can still be done till the token expires (up to 60 minutes from token issuing). If you also remove the user from Azure Database for MySQL this access will be revoked immediately. - If the Azure AD admin is removed from the server, the server will no longer be associated with an Azure AD tenant, and therefore all Azure AD logins will be disabled for the server. Adding a new Azure AD admin from the same tenant will re-enable Azure AD logins. + - Azure Database for MySQL Flexible server matches access tokens to the Azure Database for MySQL user using the userΓÇÖs unique Azure AD user ID, as opposed to using the username. This means that if an Azure AD user is deleted in Azure AD and a new user created with the same name, Azure Database for MySQL considers that a different user. Therefore, if a user is deleted from Azure AD and then a new user with the same name added, the new user will not be able to connect with the existing user. ## Next steps
mysql Concepts Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md
Title: Data encryption with customer managed keys – Azure Database for MySQL – Flexible Server Preview
+ Title: Data encryption with customer managed keys – Azure Database for MySQL – Flexible Server
description: Learn how data encryption with customer-managed keys for Azure Database for MySQL flexible server enables you to bring your own key (BYOK) for data protection at rest
-# Customer managed keys data encryption – Azure Database for MySQL – Flexible Server Preview
+# Customer managed keys data encryption – Azure Database for MySQL – Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] With data encryption with customer-managed keys for Azure Database for MySQL - Flexible Server Preview, you can bring your own key (BYOK) for data protection at rest and implement separation of duties for managing keys and data. With customer managed keys (CMKs), the customer is responsible for, and in full control of, key lifecycle management (key creation, upload, rotation, deletion), key usage permissions, and auditing operations on keys.
-> [!Note]
-> In the Public Preview, we can't enable geo redundancy on a flexible server that has CMK enabled, nor can we enable geo redundancy on a flexible server that has CMK enabled.
- ## Benefits Data encryption with customer-managed keys for Azure Database for MySQL Flexible server provides the following benefits:
To monitor the database state, and to enable alerting for the loss of transparen
## Replica with a customer managed key in Key Vault
-Once Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. When trying to encrypt Azure Database for MySQL flexible server with a customer managed key that already has a replica(s), we recommend configuring the replica(s) as well by adding the managed identity and key.
+Once Azure Database for MySQL flexible server is encrypted with a customer's managed key stored in Key Vault, any newly created copy of the server is also encrypted. When trying to encrypt Azure Database for MySQL flexible server with a customer managed key that already has a replica(s), we recommend configuring the replica(s) as well by adding the managed identity and key. If the flexible server is configured with geo-redundancy backup, the replica must be configured with the managed identity and key to which the identity has access and which resides in the server's geo-paired region.
## Restore with a customer managed key in Key Vault
-When attempting to restore an Azure Database for MySQL flexible server, you're given the option to select the User managed identity, and Key to encrypt the restore server.
+When attempting to restore an Azure Database for MySQL flexible server, you're given the option to select the User managed identity, and Key to encrypt the restore server. If the flexible server is configured with geo-redundancy backup, the restore server must be configured with the managed identity and key to which the identity has access and which resides in the server's geo-paired region.
To avoid issues while setting up customer-managed data encryption during restore or read replica creation, it's important to follow these steps on the source and restored/replica servers:
mysql How To Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md
Title: Set up Azure Active Directory authentication for Azure Database for MySQL flexible server Preview
+ Title: Set up Azure Active Directory authentication for Azure Database for MySQL flexible server
description: Learn how to set up Azure Active Directory authentication for Azure Database for MySQL flexible Server
-# Set up Azure Active Directory authentication for Azure Database for MySQL - Flexible Server Preview
+# Set up Azure Active Directory authentication for Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
To create an Azure AD Admin user, please follow the following steps.
- **MySQL and Azure Active Directory authentication** – Allows authentication using a native MySQL password or an Azure AD account. Turns _OFF_ the server parameter aad_auth_only
- > [!NOTE]
- > The server parameter aad_auth_only stays set to ON when the authentication type is changed to Azure Active Directory authentication only. We recommend disabling it manually when you opt for MySQL authentication only in the future.
- **Select Identity** – Select/Add User assigned managed identity. To allow the UMI to read from Microsoft Graph as the server identity, the following permissions are required. Alternatively, give the UMI the [Directory Readers](../../active-directory/roles/permissions-reference.md#directory-readers) role. - [User.Read.All](/graph/permissions-reference#user-permissions): Allows access to Azure AD user information.
The access token validity is anywhere between 5 minutes and 60 minutes. We recomm
When connecting, you need to use the access token as the MySQL user password. When using GUI clients such as MySQL Workbench, you can use the method described above to retrieve the token.
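For a programmatic client, a minimal sketch of the same pattern in Python, assuming the common `ossrdbms-aad` token audience and the `mysql-connector-python` driver; the server name, user, and database are placeholders:

```python
# Minimal sketch: acquire an Azure AD access token and use it as the MySQL password.
from azure.identity import DefaultAzureCredential
import mysql.connector  # pip install mysql-connector-python

credential = DefaultAzureCredential()
# Assumption: the shared Azure Database for open-source RDBMS token audience.
token = credential.get_token("https://ossrdbms-aad.database.windows.net/.default").token

connection = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",  # placeholder
    user="<azure-ad-user>",                          # placeholder: the Azure AD login configured on the server
    password=token,                                  # the access token acts as the password
    database="<database>",                           # placeholder
)
print(connection.is_connected())
```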
-> [!NOTE]
-> The newly restored server will also have the server parameter aad_auth_only set to ON if it was ON on the source server during failover. If you wish to use MySQL authentication on the restored server, you must manually disable this server parameter. Otherwise, an Azure AD Admin must be configured.
- #### Using MySQL CLI When using the CLI, you can use this short-hand to connect:
mysql How To Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-cli.md
Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure CLI Preview
+ Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure CLI
description: Learn how to set up and manage data encryption for your Azure Database for MySQL flexible server using Azure CLI.
-# Data encryption for Azure Database for MySQL - Flexible Server with Azure CLI Preview
+# Data encryption for Azure Database for MySQL - Flexible Server with Azure CLI
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
Set or change key and identity for data encryption:
az mysql flexible-server update --resource-group testGroup --name testserver \\ --key \<key identifier of newKey\> --identity newIdentity ```
-Set or change key, identity, backup key and backup identity for data encryption with geo redundant backup:
+Disable data encryption for flexible server:
```azurecli-interactive
-az mysql flexible-server update --resource-group testGroup --name testserver \\ --key \<key identifier of newKey\> --identity newIdentity \\ --backup-key \<key identifier of newBackupKey\> --backup-identity newBackupIdentity
+az mysql flexible-server update --resource-group testGroup --name testserver --disable-data-encryption
```
-Disable data encryption for flexible server:
+## Create flexible server with geo redundant backup and data encryption enabled
+
+```azurecli-interactive
+az mysql flexible-server create -g testGroup -n testServer --location testLocation \\
+--geo-redundant-backup Enabled \\
+--key <key identifier of testKey> --identity testIdentity \\
+--backup-key <key identifier of testBackupKey> --backup-identity testBackupIdentity
+```
+
+Set or change key, identity, backup key and backup identity for data encryption with geo redundant backup:
```azurecli-interactive
-az mysql flexible-server update --resource-group testGroup --name testserver --disable-data-encryption
+az mysql flexible-server update --resource-group testGroup --name testserver \\ --key \<key identifier of newKey\> --identity newIdentity \\ --backup-key \<key identifier of newBackupKey\> --backup-identity newBackupIdentity
``` ## Use an Azure Resource Manager template to enable data encryption
mysql How To Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md
Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure portal Preview
+ Title: Set data encryption for Azure Database for MySQL flexible server by using the Azure portal
description: Learn how to set up and manage data encryption for your Azure Database for MySQL - Flexible Server using Azure portal.
-# Data encryption for Azure Database for MySQL - Flexible Server by using the Azure portal Preview
+# Data encryption for Azure Database for MySQL - Flexible Server by using the Azure portal
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)]
notification-hubs Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/android-sdk.md
Title: Send push notifications to Android using Azure Notification Hubs and Fire
description: In this tutorial, you learn how to use Azure Notification Hubs and Google Firebase Cloud Messaging to send push notifications to Android devices (version 1.0.0-preview1). Previously updated : 5/28/2020 Last updated : 11/14/2022
The first step is to create a project in Android Studio:
8. Copy and save the **Server key** for later use. You use this value to configure your hub.
+9. If you do not see a **Server key** on the **Firebase Cloud Messaging** tab, follow these steps:
+ 1. Select the three-dots menu of the **Cloud Messaging API (Legacy) Disabled** heading.
+ 1. Follow the link to **Manage API in Google Cloud Console**.
+ 1. In the Google Cloud Console, select the button to enable the Google Cloud Messaging API.
+ 1. Wait a few minutes.
+ 1. Go back to your Firebase console project **Cloud Messaging** tab, and refresh the page.
+ 1. See that the Cloud Messaging API header has changed to **Cloud Messaging API (Legacy) Enabled** and now shows a server key.
+
+ :::image type="content" source="media/android-sdk/notification-hubs-enable-firebase-cloud-messaging-legacy-api.png" alt-text="Portal screenshot showing Enable Cloud Messaging API (Legacy).":::
+
## Configure a notification hub 1. Sign in to the [Azure portal](https://portal.azure.com/).
also have the connection strings that are necessary to send notifications to a d
super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); NotificationHub.setListener(new CustomNotificationListener());
- NotificationHub.start(this.getApplication(), "Hub Name", ΓÇ£Connection-StringΓÇ¥);
+ NotificationHub.start(this.getApplication(), "Hub Name", "Connection-String");
} ```
purview Abap Functions Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/abap-functions-deployment-guide.md
The function finishes its execution and metadata is downloaded much faster if it
## Next steps -- [Register and scan SAP ECC source](register-scan-sapecc-source.md)-- [Register and scan SAP S/4HANA source](register-scan-saps4hana-source.md)-- [Register and scan SAP Business Warehouse (BW) source](register-scan-sap-bw.md)
+- [Connect to and manage SAP ECC source](register-scan-sapecc-source.md)
+- [Connect to and manage SAP S/4HANA source](register-scan-saps4hana-source.md)
+- [Connect to and manage SAP Business Warehouse (BW) source](register-scan-sap-bw.md)
purview Available Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/available-metadata.md
This article has a list of the metadata that is available for a Power BI tenant
## Next steps -- [Register and scan a Power BI tenant](register-scan-power-bi-tenant.md)-- [Register and scan Power BI across tenants](register-scan-power-bi-tenant-cross-tenant.md)-- [Register and scan Power BI troubleshooting](register-scan-power-bi-tenant-troubleshoot.md)
+- [Connect to and manage a Power BI tenant](register-scan-power-bi-tenant.md)
+- [Connect to and manage Power BI across tenants](register-scan-power-bi-tenant-cross-tenant.md)
+- [Connect to and manage Power BI troubleshooting](register-scan-power-bi-tenant-troubleshoot.md)
purview Concept Scans And Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-scans-and-ingestion.md
The technical metadata or classifications identified by the scanning process are
For more information, or for specific instructions for scanning sources, follow the links below. * To understand resource sets, see our [resource sets article](concept-resource-sets.md).
-* [How to register and scan an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan)
+* [How to govern an Azure SQL Database](register-scan-azure-sql-database.md#creating-the-scan)
* [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
To create a hierarchy of collections, assign higher-level collections as a paren
## Next steps
-Learn how to register and scan various data sources:
+Learn how to discover and govern various data sources:
* [Azure Data Lake Storage Gen 2](register-scan-adls-gen2.md) * [Power BI tenant](register-scan-power-bi-tenant.md)
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
File sampling for resource sets by file types:
## Next steps -- [Register and scan Azure Blob storage source](register-scan-azure-blob-storage-source.md)
+- [Discover and govern Azure Blob storage source](register-scan-azure-blob-storage-source.md)
- [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md) - [Manage data sources in Microsoft Purview](manage-data-sources.md)
purview Register Scan Adls Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen1.md
Title: 'Register and scan Azure Data Lake Storage (ADLS) Gen1'
-description: This article outlines the process to register an Azure Data Lake Storage Gen1 data source in Microsoft Purview including instructions to authenticate and interact with the Azure Data Lake Storage Gen 1 source
+ Title: 'Connect to and manage Azure Data Lake Storage (ADLS) Gen1'
+description: This article outlines the process to register an Azure Data Lake Storage Gen1 data source in Microsoft Purview including instructions to authenticate and interact with the Azure Data Lake Storage Gen 1 source.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Title: 'Register and scan Azure Data Lake Storage (ADLS) Gen2'
-description: This article outlines the process to register an Azure Data Lake Storage Gen2 data source in Microsoft Purview including instructions to authenticate and interact with the Azure Data Lake Storage Gen2 source
+ Title: 'Discover and govern Azure Data Lake Storage (ADLS) Gen2'
+description: This article outlines the process to register an Azure Data Lake Storage Gen2 data source in Microsoft Purview including instructions to authenticate and interact with the Azure Data Lake Storage Gen2 source.
# Connect to Azure Data Lake Storage in Microsoft Purview
-This article outlines the process to register an Azure Data Lake Storage (ADLS Gen2) data source in Microsoft Purview including instructions to authenticate and interact with the ADLS Gen2 source.
+This article outlines the process to register and govern an Azure Data Lake Storage (ADLS Gen2) data source in Microsoft Purview including instructions to authenticate and interact with the ADLS Gen2 source.
## Supported capabilities
purview Register Scan Azure Arc Enabled Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-arc-enabled-sql-server.md
The account must have access to the **master** database. This is because the `sy
1. Select **Register**
-1. Select **SQL server** and then **Continue**
+1. Select **SQL server on Azure Arc-enabled servers** and then **Continue**
:::image type="content" source="media/register-scan-azure-arc-enabled-sql-server/set-up-azure-arc-enabled-sql-data-source.png" alt-text="Screenshot that shows how to set up the SQL data source.":::
To create and run a new scan, do the following:
1. Select the **Data Map** tab on the left pane in the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
-1. Select the SQL Server source that you registered.
+1. Select the Azure Arc-enabled SQL Server source that you registered.
1. Select **New scan**
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Title: 'Register and scan Azure Blob Storage'
+ Title: 'Discover and govern Azure Blob Storage'
description: This article outlines the process to register an Azure Blob Storage data source in Microsoft Purview including instructions to authenticate and interact with the Azure Blob Storage Gen2 source
# Connect to Azure Blob storage in Microsoft Purview
-This article outlines the process to register an Azure Blob Storage account in Microsoft Purview including instructions to authenticate and interact with the Azure Blob Storage source
+This article outlines the process to register and govern Azure Blob Storage accounts in Microsoft Purview including instructions to authenticate and interact with the Azure Blob Storage source
## Supported capabilities
purview Register Scan Azure Cosmos Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-cosmos-database.md
Title: 'Register and scan Azure Cosmos DB Database (SQL API)'
+ Title: 'Connect to Azure Cosmos DB Database (SQL API)'
description: This article outlines the process to register an Azure Cosmos DB instance in Microsoft Purview including instructions to authenticate and interact with the Azure Cosmos DB database
# Connect to Azure Cosmos DB for NoSQL in Microsoft Purview
-This article outlines the process to register an Azure Cosmos DB for NoSQL instance in Microsoft Purview, including instructions to authenticate and interact with the Azure Cosmos DB database source
+This article outlines the process to register and scan an Azure Cosmos DB for NoSQL instance in Microsoft Purview, including instructions to authenticate and interact with the Azure Cosmos DB database source
## Supported capabilities
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Title: Connect to and manage multiple Azure sources
+ Title: Discover and govern multiple Azure sources
description: This guide describes how to connect to multiple Azure sources in Microsoft Purview at once, and use Microsoft Purview's features to scan and manage your sources.
Last updated 10/28/2022
-# Connect to and manage multiple Azure sources in Microsoft Purview
+# Discover and govern multiple Azure sources in Microsoft Purview
This article outlines how to register multiple Azure sources and how to authenticate and interact with them in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
purview Register Scan Azure Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-postgresql.md
Last updated 11/02/2021
-# Connect to and manage an Azure Database for PostgreSQL in Microsoft Purview
+# Connect to and manage an Azure Database for PostgreSQL in Microsoft Purview
This article outlines how to register an Azure Database for PostgreSQL deployed with single server deployment option, as well as how to authenticate and interact with an Azure Database for PostgreSQL in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Title: 'Register and scan Azure SQL DB'
+ Title: 'Discover and govern Azure SQL DB'
description: This article outlines the process to register an Azure SQL database in Microsoft Purview including instructions to authenticate and interact with the Azure SQL DB source
Last updated 10/28/2022
-# Connect to Azure SQL Database in Microsoft Purview
+# Discover and govern Azure SQL Database in Microsoft Purview
This article outlines the process to register an Azure SQL data source in Microsoft Purview including instructions to authenticate and interact with the Azure SQL database source
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
If you are looking for a strictly technical deployment guide, this deployment ch
|32 |Verify if you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Microsoft Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning to extend **sensitivity labels to Microsoft Purview Data Map** <br> For more information, see [licensing requirements to use sensitivity labels on files and database columns in Microsoft Purview](sensitivity-labels-frequently-asked-questions.yml) | |33 |Consent "**Extend labeling to assets in Microsoft Purview Data Map**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you're interested in extending sensitivity labels to your data in the data map. <br> For more information, see [Labeling in the Microsoft Purview Data Map](create-sensitivity-label.md). | |34 |Create new collections and assign roles in Microsoft Purview |*Collection admin* | [Create a collection and assign permissions in Microsoft Purview](./quickstart-create-collection.md). |
-|36 |Register and scan Data Sources in Microsoft Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md) |
+|36 |Govern Data Sources in Microsoft Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md) |
|35 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Microsoft Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Microsoft Purview](catalog-permissions.md). | ## Next steps
reliability Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/glossary.md
-# Reliability erminology
+# Reliability terminology
To better understand regions and availability zones in Azure, it helps to understand key terms or concepts.
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
Title: Migrate Azure API Management to availability zone support description: Learn how to migrate your Azure API Management instances to availability zone support.-+ Last updated 07/07/2022
reliability Migrate App Gateway V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-gateway-v2.md
Title: Migrate Azure Application Gateway Standard and WAF v2 deployments to availability zone support description: Learn how to migrate your Azure Application Gateway and WAF deployments to availability zone support.-+ Last updated 07/28/2022
reliability Migrate Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-load-balancer.md
Title: Migrate Load Balancer to availability zone support description: Learn how to migrate Load Balancer to availability zone support.-+ Last updated 05/09/2022
reliability Migrate Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-recovery-services-vault.md
Title: Migrate Azure Recovery Services Vault to availability zone support description: Learn how to migrate your Azure Recovery Services Vault to availability zone support.-+ Last updated 06/24/2022
reliability Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-storage.md
Title: Migrate Azure Storage accounts to availability zone support description: Learn how to migrate your Azure storage accounts to availability zone support.-+ Last updated 09/27/2022
reliability Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md
Title: Migrate Azure Virtual Machines and Azure Virtual Machine Scale Sets to availability zone support description: Learn how to migrate your Azure Virtual Machines and Virtual Machine Scale Sets to availability zone support.-+ Last updated 04/21/2022
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
+
+ Title: Stream CEF logs to Microsoft Sentinel with the AMA connector
+description: Stream and filter CEF-based logs from on-premises appliances to your Microsoft Sentinel workspace.
++ Last updated : 09/19/2022+
+#Customer intent: As a security operator, I want to stream and filter CEF-based logs from on-premises appliances to my Microsoft Sentinel workspace, so I can improve load time and easily analyze the data.
++
+# Stream CEF logs with the AMA connector
+
+This article describes how to use the **Common Event Format (CEF) via AMA** connector to quickly filter and upload logs in the Common Event Format (CEF) from multiple on-premises appliances over Syslog.
+
+The connector uses the Azure Monitor Agent (AMA), which uses Data Collection Rules (DCRs). With DCRs, you can filter the logs before they're ingested, for quicker upload, efficient analysis, and querying.
+
+The AMA is installed on a Linux machine that acts as a log forwarder, and the AMA collects the logs in the CEF format.
+
+- [Set up the connector](#set-up-the-common-event-format-cef-via-ama-connector)
+- [Learn more about the connector](#how-collection-works-with-the-common-event-format-cef-via-ama-connector)
+
+> [!IMPORTANT]
+> The CEF via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+> [!NOTE]
+> On February 28, 2023, we will introduce [changes to the CommonSecurityLog table schema](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). This means that custom queries will need to be reviewed and updated. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) will be updated by Microsoft Sentinel.
+
+## Overview
+
+### What is CEF collection?
+
+Many network and security appliances and devices send their logs in the CEF format over Syslog. This format includes more structured information than Syslog, with information presented in a parsed key-value arrangement.
+
+If your appliance or system sends logs over Syslog using CEF, the integration with Microsoft Sentinel allows you to easily run analytics and queries across the data.
+
+CEF normalizes the data, making it more immediately useful for analysis with Microsoft Sentinel. Microsoft Sentinel also allows you to ingest unparsed Syslog events, and to analyze them with query time parsing.
+
+### How collection works with the Common Event Format (CEF) via AMA connector
++
+1. Your organization sets up a log forwarder (Linux VM), if one doesn't already exist. The forwarder can be on-premises or cloud-based.
+1. Your organization uploads CEF logs from your source devices to the forwarder.
+1. The AMA connector installed on the log forwarder collects and parses the logs.
+1. The connector streams the events to the Microsoft Sentinel workspace to be further analyzed.
+
+When you install a log forwarder, the originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. The Syslog daemon on the forwarder sends events to the Log Analytics agent over UDP. If this Linux forwarder is expected to collect a high volume of Syslog events, its Syslog daemon sends events to the agent over TCP instead. In either case, the agent then sends the events from there to your Log Analytics workspace in Microsoft Sentinel.
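+
+For example, on a source device that runs `rsyslog`, pointing its events at the forwarder might look like the following sketch. This is a hedged illustration only: the forwarder IP address is a placeholder, and whether you use TCP (`@@`) or UDP (`@`) depends on your expected event volume.
+
+```bash
+# Hypothetical example: forward all Syslog events from a source device to the log forwarder.
+# Replace 10.0.0.4 with the IP address of your log forwarder; use a single @ for UDP instead of @@ for TCP.
+sudo tee /etc/rsyslog.d/99-forward-to-collector.conf > /dev/null <<'EOF'
+*.* @@10.0.0.4:514
+EOF
+sudo systemctl restart rsyslog
+```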
++
+## Set up the Common Event Format (CEF) via AMA connector
+
+### Prerequisites
+
+Before you begin, verify that you have:
+
+- The Microsoft Sentinel solution enabled.
+- A defined Microsoft Sentinel workspace.
+- A Linux machine to collect logs.
+  - The Linux machine must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check.
+- Either the `syslog-ng` or `rsyslog` daemon enabled.
+- To collect events from any system that isn't an Azure virtual machine, ensure that [Azure Arc](../azure-monitor/agents/azure-monitor-agent-manage.md) is installed.
+
+### Configure a log forwarder
+
+To ingest Syslog and CEF logs into Microsoft Sentinel, you need to designate and configure a Linux machine that collects the logs from your devices and forwards them to your Microsoft Sentinel workspace. This machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud. This machine must have Azure Arc installed (see the [prerequisites](#prerequisites)).
+
+This machine has two components that take part in this process:
+
+- A Syslog daemon, either `rsyslog` or `syslog-ng`, which collects the logs.
+- The AMA, which forwards the logs to Microsoft Sentinel.
+
+When you set up the connector and the DCR, you [run a script](#run-the-installation-script) on the Linux machine, which configures the built-in Linux Syslog daemon (`rsyslog.d`/`syslog-ng`) to listen for Syslog messages from your security solutions on TCP/UDP port 514.
+
+The DCR installs the AMA to collect and parse the logs.
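+
+Conceptually, the resulting listener configuration resembles the following `rsyslog` sketch. This is an assumption-laden illustration, not what the script literally writes; the actual file names and directives are managed by the installation script.
+
+```bash
+# Hypothetical sketch: listen for Syslog/CEF messages on TCP and UDP port 514 with rsyslog.
+# The installation script manages the real configuration; don't apply this by hand on a managed forwarder.
+sudo tee /etc/rsyslog.d/10-cef-collector.conf > /dev/null <<'EOF'
+module(load="imudp")
+input(type="imudp" port="514")
+module(load="imtcp")
+input(type="imtcp" port="514")
+EOF
+sudo systemctl restart rsyslog
+```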
+
+#### Log forwarder - security considerations
+
+Make sure to configure the machine's security according to your organization's security policy. For example, you can configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. To improve your machine security configuration, [secure your VM in Azure](../virtual-machines/security-policy.md), or review these [best practices for network security](../security/fundamentals/network-best-practices.md).
+
+If your devices are sending Syslog and CEF logs over TLS (because, for example, your log forwarder is in the cloud), you need to configure the Syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS:
+
+- [Encrypt Syslog traffic with TLS – rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
+- [Encrypt log messages with TLS – syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
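+
+As a hedged illustration, a TLS-enabled `rsyslog` listener could look like the following sketch. The certificate paths and port 6514 are placeholders; follow the linked guides for the exact setup your daemon and security policy require.
+
+```bash
+# Hypothetical sketch: accept TLS-encrypted Syslog/CEF with rsyslog (certificate paths are placeholders).
+# If imtcp is already loaded in another configuration file, keep only one module(load="imtcp" ...) line.
+sudo tee /etc/rsyslog.d/20-cef-tls.conf > /dev/null <<'EOF'
+global(
+  DefaultNetstreamDriver="gtls"
+  DefaultNetstreamDriverCAFile="/etc/rsyslog.d/certs/ca.pem"
+  DefaultNetstreamDriverCertFile="/etc/rsyslog.d/certs/forwarder-cert.pem"
+  DefaultNetstreamDriverKeyFile="/etc/rsyslog.d/certs/forwarder-key.pem"
+)
+module(load="imtcp" StreamDriver.Mode="1" StreamDriver.AuthMode="anon")
+input(type="imtcp" port="6514")
+EOF
+sudo systemctl restart rsyslog
+```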
+
+### Set up the connector
+
+You can set up the connector in two ways:
+- [Microsoft Sentinel portal](#set-up-the-connector-in-the-microsoft-sentinel-portal-ui). With this setup, you can create, manage, and delete DCRs per workspace.
+- [API](#set-up-the-connector-with-the-api). With this setup, you can create, manage, and delete DCRs. This option is more flexible than the UI. For example, with the API, you can filter by specific log levels, where with the UI, you can only select a minimum log level.
+
+#### Set up the connector in the Microsoft Sentinel portal (UI)
+
+1. [Open the connector page and create the DCR](#open-the-connector-page-and-create-the-dcr)
+1. [Define resources (VMs)](#define-resources-vms)
+1. [Select the data source type and create the DCR](#select-the-data-source-type-and-create-the-dcr)
+1. [Run the installation script](#run-the-installation-script)
+
+##### Open the connector page and create the DCR
+
+1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
+1. Select **Data connectors**, and in the search bar, type *CEF*.
+1. Select the **Common Event Format (CEF) via AMA (Preview)** connector.
+1. Below the connector description, select **Open connector page**.
+1. In the **Configuration** area, select **Create data collection rule**.
+1. Under **Basics**:
+ - Type a DCR name
+ - Select your subscription
+ - Select the resource group where your collector is defined
+
+ :::image type="content" source="media/connect-cef-ama/dcr-basics-tab.png" alt-text="Screenshot showing the DCR details in the Basics tab." lightbox="media/connect-cef-ama/dcr-basics-tab.png":::
+
+##### Define resources (VMs)
+
+Select the machines on which you want to install the AMA. These machines are VMs or on-premises Linux machines with Arc installed.
+
+1. Select the **Resources** tab and select **Add Resource(s)**.
+1. Select the VMs on which you want to install the connector to collect logs.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-select-resources.png" alt-text="Screenshot showing how to select resources when setting up the DCR." lightbox="media/connect-cef-ama/dcr-select-resources.png":::
+
+1. Review your changes and select **Collect**.
+
+##### Select the data source type and create the DCR
+
+> [!NOTE]
+> **Using the same machine to forward both plain Syslog *and* CEF messages**
+>
+> If you plan to use this log forwarder machine to forward Syslog messages as well as CEF, then in order to avoid the duplication of events to the Syslog and CommonSecurityLog tables:
+>
+> 1. On each source machine that sends logs to the forwarder in CEF format, you must edit the Syslog configuration file to remove the facilities that are being used to send CEF messages. This way, the facilities that are sent in CEF won't also be sent in Syslog. See [Configure Syslog on Linux agent](../azure-monitor/agents/data-sources-syslog.md#configure-syslog-on-linux-agent) for detailed instructions on how to do this.
+>
+> 1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.<br>
+> `sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'`
+
+1. Select the **Collect** tab and select **Linux syslog** as the data source type.
+1. Configure the minimum log level for each facility. When you select a log level, Microsoft Sentinel collects logs for the selected level and other levels with lower severity. For example, if you select **LOG_ERR**, Microsoft Sentinel collects logs for the **LOG_ERR**, **LOG_WARNING**, **LOG_NOTICE**, **LOG_INFO**, and **LOG_DEBUG** levels.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-log-levels.png" alt-text="Screenshot showing how to select log levels when setting up the DCR.":::
+
+1. Review your changes and select **Next: Review and create**.
+1. In the **Review and create** tab, select **Create**.
+
+##### Run the installation script
+
+1. Log in to the Linux forwarder machine, where you want the AMA to be installed.
+1. Run this command to launch the installation script:
+
+    ```bash
+    sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py && sudo python Forwarder_AMA_installer.py
+    ```
+ The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
+
+ > [!NOTE]
+ > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
+ > Read more about [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](
+https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+
+### Set up the connector with the API
+
+You can create DCRs using the [API](/rest/api/monitor/data-collection-rules). Learn more about [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md).
+
+Run this command to launch the installation script:
+
+```bash
+sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py && sudo python Forwarder_AMA_installer.py
+```
+The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
+
+#### Request URL and header
+
+```rest
+GET
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+```
+
+#### Request body
+
+Edit the template:
+- Verify that the `streams` field is set to `Microsoft-CommonSecurityLog`.
+- Add the filter and facility log levels in the `facilityNames` and `logLevels` parameters.
+
+```rest
+{
+ "properties": {
+ "immutableId": "dcr-bcc4039c90f0489b80927bbdf1f26008",
+ "dataSources": {
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+
+ "facilityNames": [
+ "*"
+ ],
+ "logLevels": [ "*"
+ ],
+ "name": "sysLogsDataSource-1688419672"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+                "workspaceResourceId": "/subscriptions/{Your-Subscription-Id}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{SentinelWorkspaceName}",
+                "workspaceId": "123x56xx-9123-xx4x-567x-89123xxx45",
+                "name": "la-140366483"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "destinations": [
+ "la-140366483"
+ ]
+ }
+ ],
+ "provisioningState": "Succeeded"
+ },
+ "location": "westeurope",
+ "tags": {},
+ "kind": "Linux",
+  "id": "/subscriptions/{Your-Subscription-Id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}",
+ "name": "{DCRName}",
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "etag": "\"2401b6f3-0000-0d00-0000-618bbf430000\""
+}
+```
+After you finish editing the template, use `POST` or `PUT` to deploy it:
+
+```rest
+PUT
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+```
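+
+If you'd rather not construct the signed HTTP request yourself, one option (a hedged sketch, not the only way) is the Azure CLI's generic `az rest` command, which authenticates the call for you. The subscription, resource group, DCR name, and `dcr-body.json` file name are placeholders:
+
+```bash
+# Hypothetical example: deploy the edited DCR template with az rest (the CLI signs the request automatically).
+SUBSCRIPTION_ID="<your-subscription-id>"
+RESOURCE_GROUP="<your-resource-group>"
+DCR_NAME="<your-dcr-name>"
+
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Insights/dataCollectionRules/${DCR_NAME}?api-version=2019-11-01-preview" \
+  --body @dcr-body.json
+```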
+#### Examples of facilities and log levels sections
+
+Review these examples of the facilities and log levels settings. The `name` field includes the filter name.
+
+This example collects events from the `cron`, `daemon`, `local0`, `local3` and `uucp` facilities, with the `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels:
+
+```rest
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "cron",
+ "daemon",
+ "local0",
+ "local3",
+ "uucp"
+ ],
+
+ "logLevels": [
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "sysLogsDataSource-1688419672"
+ }
+]
+```
+
+This example collects events for:
+- The `authpriv` and `mark` facilities with the `Info`, `Notice`, `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels
+- The `daemon` facility with the `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels
+- The `kern`, `local0`, `local5`, and `news` facilities with the `Critical`, `Alert`, and `Emergency` log levels
+- The `mail` and `uucp` facilities with the `Emergency` log level
+
+```rest
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "authpriv",
+ "mark"
+ ],
+ "logLevels": [
+ "Info",
+ "Notice",
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "sysLogsDataSource--1469397783"
+ },
+ {
+ "streams": [ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "daemon"
+ ],
+ "logLevels": [
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+
+ "name": "sysLogsDataSource--1343576735"
+ },
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "kern",
+ "local0",
+ "local5",
+ "news"
+ ],
+ "logLevels": [
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "sysLogsDataSource--1469572587"
+ },
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "mail",
+ "uucp"
+ ],
+ "logLevels": [
+ "Emergency"
+ ],
+ "name": "sysLogsDataSource-1689584311"
+ }
+ ]
+}
+```
+### Test the connector
+
+1. To validate that the syslog daemon is running on the UDP port and that the AMA is listening, run this command:
+
+ ```
+ netstat -lnptv
+ ```
+
+ You should see the `rsyslog` or `syslog-ng` daemon listening on port 514.
+
+1. To capture messages sent from a logger or a connected device, run this command in the background:
+
+ ```
+    tcpdump -i any port 514 -A -vv &
+ ```
+1. After you complete the validation, we recommend that you stop the `tcpdump`: Type `fg` and then select <kbd>Ctrl</kbd>+<kbd>C</kbd>.
+1. To send demo messages, do one of the following:
+ - Use the netcat utility. In this example, the utility reads data posted through the `echo` command with the newline switch turned off. The utility then writes the data to UDP port `514` on the localhost with no timeout. To execute the netcat utility, you might need to install an additional package.
+
+ ```
+ echo -n "<164>CEF:0|Mock-test|MOCK|common=event-format-test|end|TRAFFIC|1|rt=$common=event-formatted-receive_time" | nc -u -w0 localhost 514
+ ```
+    - Use the logger utility. This example writes the message to the `local4` facility, at severity level `Warning`, to port `514`, on the local host, in the CEF RFC format. The `-t` and `--rfc3164` flags are used to comply with the expected RFC format.
+
+ ```
+ logger -p local4.warn -P 514 -n 127.0.0.1 --rfc3164 -t CEF "0|Mock-test|MOCK|common=event-format-test|end|TRAFFIC|1|rt=$common=event-formatted-receive_time"
+ ```
+
+1. To verify that the connector is installed correctly, run the troubleshooting script with this command:
+
+ ```
+ sudo wget -O cef_AMA_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_AMA_troubleshoot.py&&sudo python cef_AMA_troubleshoot.py
+ ```
+
+## Next steps
+
+In this article, you learned how to set up the Common Event Format (CEF) via AMA connector to upload data from appliances that support CEF over Syslog. To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Dns Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-dns-ama.md
Last updated 01/05/2022
-#Customer intent: As a security operator, I want proactively monitor Windows DNS activities so that I can prevent threats and attacks on DNS servers.
+#Customer intent: As a security operator, I want to proactively monitor Windows DNS activities so that I can prevent threats and attacks on DNS servers.
# Stream and filter data from Windows DNS servers with the AMA connector
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
See Barracuda instructions - note the assigned facilities for the different type
| **Vendor documentation/<br>installation instructions** | [Configuring the Log to a Syslog Server action](https://help.symantec.com/cs/DLP15.7/DLP/v27591174_v133697641/Configuring-the-Log-to-a-Syslog-Server-action?locale=EN_US) | | **Supported by** | Microsoft |
+## Common Event Format (CEF) via AMA
+
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | **[Azure Monitor Agent-based connection](connect-cef-ama.md)** |
+| **Log Analytics table(s)** | [CommonSecurityLog](/azure/azure-monitor/reference/tables/commonsecuritylog) |
+| **DCR support** | Standard DCR |
+| **Supported by** | Microsoft |
## Check Point
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
- [Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip) - [Out of the box anomaly detection on the SAP audit log (Preview)](#out-of-the-box-anomaly-detection-on-the-sap-audit-log-preview) - [IoT device entity page (Preview)](#iot-device-entity-page-preview)
+- [Common Event Format (CEF) via AMA](#common-event-format-cef-via-ama-preview)
### Account enrichment fields removed from Azure AD Identity Protection connector
The new [IoT device entity page](entity-pages.md) is designed to help the SOC in
Learn more about [investigating IoT device entities in Microsoft Sentinel](iot-advanced-threat-monitoring.md).
+### Common Event Format (CEF) via AMA (Preview)
+
+The [Common Event Format (CEF) via AMA](connect-cef-ama.md) connector allows you to quickly filter and upload logs over CEF from multiple on-premises appliances to Microsoft Sentinel via the Azure Monitor Agent (AMA).
+
+The AMA supports Data Collection Rules (DCRs), which you can use to filter the logs before ingestion, for quicker upload, efficient analysis, and querying.
+
+Here are some benefits of using AMA for CEF log collection:
+
+- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS).
+- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.
+- AMA is Syslog RFC compliant, and is a faster, more resilient, and more reliable agent, with improved security and a lower footprint on the installed machine.
+ ## September 2022 - [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
The following two types of errors are classified as **user errors**:
> > The important metrics to monitor for any outages for a premium tier namespace are: **CPU usage per namespace** and **memory size per namespace**. [Set up alerts](../azure-monitor/alerts/alerts-metric.md) for these metrics using Azure Monitor. >
-> The other metric you could monitor is: **throttled requests**. It shouldn't be an issue though as long as the namespace stays within its memory, CPU, and brokered connections limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-azure-service-bus-premium-tier)
+> The other metric you could monitor is: **throttled requests**. It shouldn't be an issue though as long as the namespace stays within its memory, CPU, and brokered connections limits. For more information, see [Throttling in Azure Service Bus Premium tier](service-bus-throttling.md#throttling-in-premium-tier)
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions | | - | - | -- | | | |
service-bus-messaging Service Bus Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-throttling.md
Title: Overview of Azure Service Bus throttling | Microsoft Docs description: Overview of Service Bus throttling - Standard and Premium tiers. Previously updated : 09/20/2021 Last updated : 11/14/2022 # Throttling operations on Azure Service Bus
-Cloud native solutions give a notion of unlimited resources that can scale with your workload. While this notion is more true in the cloud than it is with on-premises systems, there are still limitations that exist in the cloud.
+Cloud native solutions give a notion of unlimited resources that can scale with your workload. While this notion is more true in the cloud than it is with on-premises systems, there are still limitations that exist in the cloud. These limitations may cause throttling of client application requests in both standard and premium tiers as discussed in this article.
-These limitations may cause throttling of client application requests in both Standard and Premium tiers as discussed in this article.
+## Throttling in standard tier
-## Throttling in Azure Service Bus Standard tier
+The standard tier of Service Bus operates as a multi-tenant setup with a pay-as-you-go pricing model. Here multiple namespaces in the same cluster share the allocated resources. Standard tier is the recommended choice for developer environments, QA environments, and low throughput production systems.
-The Azure Service Bus Standard tier operates as a multi-tenant setup with a pay-as-you-go pricing model. Here multiple namespaces in the same cluster share the allocated resources. Standard tier is the recommended choice for developer, testing, and QA environments along with low throughput production systems.
+In the past, Service Bus had coarse throttling limits strictly dependent on resource utilization. However, there's an opportunity to refine throttling logic and provide predictable throttling behavior to all namespaces that are sharing these resources.
-In the past, Azure Service Bus had coarse throttling limits strictly dependent on resource utilization. However, there is an opportunity to refine the throttling logic and provide predictable throttling behavior to all namespaces that are sharing these resources.
-
-In an attempt to ensure fair usage and distribution of resources across all the Azure Service Bus Standard namespaces that use the same resources, the throttling logic has been modified to be credit-based.
+In an attempt to ensure fair usage and distribution of resources across all the Service Bus standard namespaces that use the same resources, the throttling logic has been modified to be credit-based.
> [!NOTE]
-> It is important to note that throttling is ***not new*** to Azure Service Bus, or any cloud native service.
+> It is important to note that throttling is **not new** to Azure Service Bus, or any cloud native service.
>
-> Credit based throttling is simply refining the way various namespaces share resources in a multi-tenant Standard tier environment and thus enabling fair usage by all namespaces sharing the resources.
+> Credit based throttling is simply refining the way various namespaces share resources in a multi-tenant standard tier environment and thus enabling fair usage by all namespaces sharing the resources.
### What is credit-based throttling?
-Credit-based throttling limits the number of operations that can be performed on a given namespace in a specific time period.
-
-Below is the workflow for credit-based throttling -
+Credit-based throttling limits the number of operations that can be performed on a given namespace in a specific time period. Here's the workflow for credit-based throttling.
- * At the start of each time period, we provide a certain number of credits to each namespace.
+ * At the start of each time period, Service Bus provides a certain number of credits to each namespace.
* Any operations performed by the sender or receiver client applications will be counted against these credits (and subtracted from the available credits). * If the credits are depleted, subsequent operations will be throttled until the start of the next time period. * Credits are replenished at the start of the next time period. ### What are the credit limits?
-The credit limits are currently set to '1000' credits every second (per namespace).
-
-Not all operations are created equal. Here are the credit costs of each of the operations -
+The credit limits are currently set to **1000 credits every second** (per namespace). Not all operations are created equal. Here are the credit costs of each of the operations:
| Operation | Credit cost| |--|--|
-| Data operations (Send, SendAsync, Receive, ReceiveAsync, Peek) |1 credit per message |
-| Management operations (Create, Read, Update, Delete on Queues, Topics, Subscriptions, Filters) | 10 credits |
+| Data operations (`Send`, `SendAsync`, `Receive`, `ReceiveAsync`, `Peek`) | 1 credit per message |
+| Management operations (`Create`, `Read`, `Update`, `Delete` on queues, topics, subscriptions, filters) | 10 credits |
> [!NOTE]
-> Please note that when sending to a Topic, each message is evaluated against filter(s) before being made available on the Subscription.
-> Each filter evaluation also counts against the credit limit (i.e. 1 credit per filter evaluation).
->
+> When sending to a topic, each message is evaluated against filters before being made available on the subscription. Each filter evaluation also counts against the credit limit (that is, 1 credit per filter evaluation).
### How will I know that I'm being throttled?
-When the client application requests are being throttled, the below server response will be received by your application and logged.
+When the client application requests are being throttled, the client application receives the following server response.
``` The request was terminated because the entity is being throttled. Error code: 50009. Please wait 2 seconds and try again.
The request was terminated because the entity is being throttled. Error code: 50
### How can I avoid being throttled?
-With shared resources, it is important to enforce some sort of fair usage across various Service Bus namespaces that share those resources. Throttling ensures that any spike in a single workload does not cause other workloads on the same resources to be throttled.
-
-As mentioned later in the article, there is no risk in being throttled because the client SDKs (and other Azure PaaS offerings) have the default retry policy built into them. Any throttled requests will be retried with exponential backoff and will eventually go through when the credits are replenished.
+With shared resources, it's important to enforce some sort of fair usage across various Service Bus namespaces that share those resources. Throttling ensures that any spike in a single workload doesn't cause other workloads on the same resources to be throttled. As mentioned later in the article, there's no risk in being throttled because the client SDKs (and other Azure PaaS offerings) have the default retry policy built into them. Any throttled requests will be retried with exponential backoff and will eventually go through when the credits are replenished.
-Understandably, some applications may be sensitive to being throttled. In that case, it is recommended to [migrate your current Service Bus Standard namespace to Premium](service-bus-migrate-standard-premium.md).
+Understandably, some applications may be sensitive to being throttled. In that case, it's recommended to [migrate your current Service Bus standard namespace to premium](service-bus-migrate-standard-premium.md). On migration, you can allocate dedicated resources to your Service Bus namespace and appropriately scale up the resources if there's a spike in your workload and reduce the likelihood of being throttled. Additionally, when your workload reduces to normal levels, you can scale down the resources allocated to your namespace.
-On migration, you can allocate dedicated resources to your Service Bus namespace and appropriately scale up the resources if there is a spike in your workload and reduce the likelihood of being throttled. Additionally, when your workload reduces to normal levels, you can scale down the resources allocated to your namespace.
+## Throttling in premium tier
-## Throttling in Azure Service Bus Premium tier
-
-The [Azure Service Bus Premium](service-bus-premium-messaging.md) tier allocates dedicated resources, in terms of messaging units, to each namespace setup by the customer. These dedicated resources provide predictable throughput and latency and are recommended for high throughput or sensitive production grade systems.
-
-Additionally, the Premium tier also enables customers to scale up their throughput capacity when they experience spikes in the workload.
+The [Service Bus premium](service-bus-premium-messaging.md) tier allocates dedicated resources, in terms of messaging units, to each namespace setup by the customer. These dedicated resources provide predictable throughput and latency and are recommended for high throughput or sensitive production grade systems. Additionally, the premium tier also enables customers to scale up their throughput capacity when they experience spikes in the workload. For more information, see [Automatically update messaging units of an Azure Service Bus namespace](automate-update-messaging-units.md).
### How does throttling work in Service Bus Premium?
-With exclusive resource allocation for Service Bus Premium, throttling is purely driven by the limitations of the resources allocated to the namespace.
-
-If the number of requests are more than the current resources can service, then the requests will be throttled.
+With exclusive resource allocation for the premium tier, throttling is purely driven by the limitations of the resources allocated to the namespace. If the number of requests are more than the current resources can service, then the requests will be throttled.
### How will I know that I'm being throttled?
-There are various ways to identifying throttling in Azure Service Bus Premium -
+There are various ways to identify throttling in the Service Bus premium tier.
+ * **Throttled Requests** show up on the [Azure Monitor Request metrics](monitor-service-bus-reference.md#request-metrics) to identify how many requests were throttled. * High **CPU Usage** indicates that current resource allocation is high and requests may get throttled if the current workload doesn't reduce. * High **Memory Usage** indicates that current resource allocation is high and requests may get throttled if the current workload doesn't reduce. ### How can I avoid being throttled?
-Since the Service Bus Premium namespace already has dedicated resources, you can reduce the possibility of getting throttled by scaling up the number of Messaging Units allocated to your namespace in the event (or anticipation) of a spike in the workload.
+As the Service Bus premium namespace already has dedicated resources, you can reduce the possibility of getting throttled by scaling up the number of messaging units allocated to your namespace in the event (or anticipation) of a spike in the workload. For more information, see [Automatically update messaging units of an Azure Service Bus namespace](automate-update-messaging-units.md).
-Scaling up/down can be achieved by creating [runbooks](../automation/automation-create-alert-triggered-runbook.md) that can be triggered by changes in the above metrics.
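+
+As a hedged illustration, scaling a premium namespace to two messaging units with the Azure CLI might look like the following sketch (resource names are placeholders; check `az servicebus namespace update --help` for the parameters available in your CLI version):
+
+```bash
+# Hypothetical example: scale up a premium Service Bus namespace to 2 messaging units.
+az servicebus namespace update \
+  --resource-group <your-resource-group> \
+  --name <your-premium-namespace> \
+  --capacity 2
+```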
## FAQs ### How does throttling affect my application?
-When a request is throttled, it implies that the service is busy because it is facing more requests than the resources allow. If the same operation is tried again after a few moments, once the service has worked through its current workload, then the request can be honored.
-
-Since throttling is the expected behavior of any cloud native service, we have built the retry logic into the Azure Service Bus SDK itself. The default is set to auto retry with an exponential back-off to ensure that we don't have the same request being throttled each time.
+When a request is throttled, it implies that the service is busy because it's facing more requests than the resources allow. If the same operation is tried again after a few moments, once the service has worked through its current workload, then the request can be honored.
-The default retry logic will apply to every operation.
+As throttling is the expected behavior of any cloud native service, retry logic is built into the Service Bus SDK itself. The default is set to auto retry with an exponential back-off to ensure that we don't have the same request being throttled each time. The default retry logic will apply to every operation.
### Does throttling result in data loss? Azure Service Bus is optimized for persistence; we ensure that all the data sent to Service Bus is committed to storage before the service acknowledges the success of the request.
-Once the request is successfully 'ACK' (acknowledged) by Service Bus, it implies that Service Bus has successfully processed the request. If Service Bus returns a 'NACK' (failure), then it implies that Service Bus has not been able to process the request and the client application must retry the request.
+Once the request is successfully acknowledged by Service Bus, it implies that Service Bus has successfully processed the request. If Service Bus returns a failure, then it implies that Service Bus hasn't been able to process the request and the client application must retry the request.
-However, when a request is throttled, the service is implying that it cannot accept and process the request right now because of resource limitations. This **does not** imply any sort of data loss because Service Bus simply hasn't looked at the request. In this case, relying on the default retry policy of the Service Bus SDK ensures that the request is eventually processed.
+However, when a request is throttled, the service is implying that it can't accept and process the request right now because of resource limitations. It **does not** imply any sort of data loss because Service Bus simply hasn't looked at the request. In this case, relying on the default retry policy of the Service Bus SDK ensures that the request is eventually processed.
## Next steps
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/release-notes.md
The following resources are also available:
- <a href="/azure/service-fabric/service-fabric-versions" target="blank">Supported Versions</a> - <a href="https://azure.microsoft.com/resources/samples/?service=service-fabric&sort=0" target="blank">Code Samples</a>
+## Service Fabric 9.1
+
+We are excited to announce that the 9.1 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK, and Service Fabric runtimes can be downloaded from the links provided in Release Notes. The SDK, NuGet packages, and Maven repositories will be available in all regions within 7-10 days.
+
+### Key announcements
+- Azure Service Fabric will block deployments that do not meet Silver or Gold durability requirements starting on 11/10/2022 (The date is extended from 10/30/2022 to 11/10/2022). Five VMs or more will be enforced with this change for newer clusters created after 11/10/2022 to help avoid data loss from VM-level infrastructure requests for production workloads. VM count requirement is not changing for Bronze durability. Enforcement for existing clusters will be rolled out in the coming months.
+- Azure Service Fabric node types with Virtual Machine Scale Set durability of Silver or Gold should always have the property "virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates" set to false in the scale set model definition. Setting enableAutomaticUpdates to false will prevent unintended OS restarts due to the Windows updates like patching, which can impact the production workloads.
+Instead, you should enable Automatic OS upgrades through Virtual Machine Scale Set OS Image updates by setting "enableAutomaticOSUpgrade" to true. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required.
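+
+A hedged Azure CLI sketch of applying both settings to an existing node type scale set (resource names are placeholders; you can also set these properties in your ARM or Bicep template):
+
+```bash
+# Hypothetical example: disable in-VM Windows automatic updates and enable automatic OS image upgrades on a scale set.
+az vmss update \
+  --resource-group <your-resource-group> \
+  --name <your-node-type-scale-set> \
+  --set virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates=false \
+  --set upgradePolicy.automaticOSUpgradePolicy.enableAutomaticOSUpgrade=true
+```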
+
+### Service Fabric 9.1 releases
+| Release date | Release | More info |
+||||
+| October 24, 2022 | [Azure Service Fabric 9.1](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-1-release/ba-p/3667628) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_91.md)|
+ ## Service Fabric 9.0 We are excited to announce that 9.0 release of the Service Fabric runtime has started rolling out to the various Azure regions along with tooling and SDK updates. The updates for .NET SDK, Java SDK and Service Fabric runtime are available through Web Platform Installer, NuGet packages and Maven repositories.
We are excited to announce that 9.0 release of the Service Fabric runtime has st
### Key announcements - **General Availability** Support for .NET 6.0 - **General Availability** Support for Ubuntu 20.04-- **General Availability** Support for Multi-AZ within a single virtual machine scale set
+- **General Availability** Support for Multi-AZ within a single Virtual Machine Scale Set
- Added support for IHost, IHostBuilder and Minimal Hosting Model - Enabling opt-in option for Data Contract Serialization (DCS) based remoting exception - Support creation of End-to-End Developer Experience for Linux development on Windows using WSL2
We are excited to announce that 9.0 release of the Service Fabric runtime has st
- Mirantis Container runtime support on Windows for Service Fabric containers - The Microsoft Web Platform Installer (WebPI) used for installing Service Fabric SDK and Tools was retired on July 1, 2022. - Azure Service Fabric will block deployments that do not meet Silver or Gold durability requirements starting on 9/30/2022. 5 VMs or more will be enforced with this change to help avoid data loss from VM-level infrastructure requests for production workloads. Enforcement for existing clusters will be rolled out in the coming months.-- Azure Service Fabric node types with VMSS durability of Silver or Gold should always have Windows update explicitly disabled to avoid unintended OS restarts due to the Windows updates, which can impact the production workloads. This can be done by setting the "enableAutomaticUpdates": false, in the VMSS OSProfile. Consider enabling Automatic VMSS Image upgrades instead. The deployments will start failing from 09/30/2022 for new clusters, if the WindowsUpdates are not disabled on the VMSS. Enforcement for existing clusters will be rolled out in the coming months.
+- Azure Service Fabric node types with Virtual Machine Scale Set durability of Silver or Gold should always have Windows update explicitly disabled to avoid unintended OS restarts due to the Windows updates, which can impact the production workloads. This can be done by setting the "enableAutomaticUpdates": false, in the Virtual Machine Scale Set OSProfile. Consider enabling Automatic Virtual Machine Scale Set Image upgrades instead. The deployments will start failing from 09/30/2022 for new clusters, if the WindowsUpdates are not disabled on the Virtual Machine Scale Set. Enforcement for existing clusters will be rolled out in the coming months.
### Service Fabric 9.0 releases | Release date | Release | More info |
We are excited to announce that 9.0 release of the Service Fabric runtime has st
| June 06, 2022 | [Azure Service Fabric 9.0 First Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-first-refresh-release/ba-p/3469489) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU1.md)| | July 14, 2022 | [Azure Service Fabric 9.0 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-second-refresh-release/ba-p/3575842) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU2.md)| | September 13, 2022 | [Azure Service Fabric 9.0 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-third-refresh-update-release/ba-p/3631367) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU3.md)|-
+| October 11, 2022 | [Azure Service Fabric 9.0 Fourth Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-9-0-fourth-refresh-update-release/ba-p/3658429) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_90CU4.md)|
## Service Fabric 8.2
We are excited to announce that 8.2 release of the Service Fabric runtime has st
### Key announcements - Expose an API in Cluster Manager to note if upgrade is impactful
+- Azure Service Fabric will block deployments that do not meet Silver or Gold durability requirements starting on 11/10/2022 (The date is extended from 10/30/2022 to 11/10/2022). Five VMs or more will be enforced with this change for newer clusters created after 11/10/2022 to help avoid data loss from VM-level infrastructure requests for production workloads. VM count requirement is not changing for Bronze durability. Enforcement for existing clusters will be rolled out in the coming months.
+- Azure Service Fabric node types with Virtual Machine Scale Set durability of Silver or Gold should always have the property "virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates" set to false in the scale set model definition. Setting enableAutomaticUpdates to false will prevent unintended OS restarts due to the Windows updates like patching, which can impact the production workloads.
+Instead, you should enable Automatic OS upgrades through Virtual Machine Scale Set OS Image updates by setting "enableAutomaticOSUpgrade" to true. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required.
### Service Fabric 8.2 releases | Release date | Release | More info |
We are excited to announce that 8.2 release of the Service Fabric runtime has st
| February 12, 2022 | [Azure Service Fabric 8.2 Second Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-second-refresh-release/ba-p/3095454) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU2.md)| | June 06, 2022 | [Azure Service Fabric 8.2 Third Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-third-refresh-release/ba-p/3469508) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU3.md)| | July 14, 2022 | [Azure Service Fabric 8.2 Fourth Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-8-2-fourth-refresh-release/ba-p/3575845) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU4.md)|
+| October 11, 2022 | [Azure Service Fabric 8.2 Sixth Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/microsoft-azure-service-fabric-8-2-sixth-refresh-release/ba-p/3666852) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU6.md)|
+| October 24, 2022 | [Azure Service Fabric 8.2 Seventh Refresh Release](https://techcommunity.microsoft.com/t5/azure-service-fabric-blog/azure-service-fabric-8-2-seventh-refresh-is-now-available/ba-p/3666872) | [Release notes](https://github.com/microsoft/service-fabric/blob/master/release_notes/Service_Fabric_ReleaseNotes_82CU7.md)|
## Service Fabric 8.1
service-health Impacted Resources Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/impacted-resources-outage.md
This article details what is communicated to users and where they can view infor
The impacted resources tab under Azure portal > Service Health > Service Issues will display resources that are Confirmed to be impacted by an outage and resources that could Potentially be impacted by an outage. Below is an example of the impacted resources tab for an incident on Service Issues with Confirmed and Potential impact resources. ##### Service Health provides the below information to users whose resources are impacted by an outage:
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Micros
{ "properties": {
- "pricingTier": "Standard"
+ "pricingTier": "Standard",
"subPlan": "PerStorageAccount" } }
PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Micros
{ "properties": {
- "pricingTier": "Standard"
+ "pricingTier": "Standard",
"subPlan": "PerTransaction" } }
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Previously updated : 04/01/2022 Last updated : 11/14/2022 -+ ms.devlang: azurecli
Every secure request to an Azure Storage account must be authorized. By default,
When you disallow Shared Key authorization for a storage account, Azure Storage rejects all subsequent requests to that account that are authorized with the account access keys. Only secured requests that are authorized with Azure AD will succeed. For more information about using Azure AD, see [Authorize access to data in Azure Storage](authorize-data-access.md).
-This article describes how to detect requests sent with Shared Key authorization and how to remediate Shared Key authorization for your storage account.
+The **AllowSharedKeyAccess** property of a storage account is not set by default and does not return a value until you explicitly set it. The storage account permits requests that are authorized with Shared Key when the property value is **null** or when it is **true**.
+
+This article describes how to use a DRAG (Detection-Remediation-Audit-Governance) framework to continuously manage Shared Key authorization for your storage account.
+
+## Prerequisites
+
+Before disallowing Shared Key access on any of your storage accounts:
+
+- [Understand how disallowing Shared Key affects SAS tokens](#understand-how-disallowing-shared-key-affects-sas-tokens)
+- [Consider compatibility with other Azure tools and services](#consider-compatibility-with-other-azure-tools-and-services)
+- Consider the need to [disallow Shared Key authorization to use Azure AD Conditional Access](#disallow-shared-key-authorization-to-use-azure-ad-conditional-access)
+- [Transition Azure Files workloads](#transition-azure-files-workloads)
+
+### Understand how disallowing Shared Key affects SAS tokens
+
+When Shared Key access is disallowed for the storage account, Azure Storage handles SAS tokens based on the type of SAS and the service that is targeted by the request. The following table shows how each type of SAS is authorized and how Azure Storage will handle that SAS when the **AllowSharedKeyAccess** property for the storage account is **false**.
+
+| Type of SAS | Type of authorization | Behavior when AllowSharedKeyAccess is false |
+|-|-|-|
+| User delegation SAS (Blob storage only) | Azure AD | Request is permitted. Microsoft recommends using a user delegation SAS when possible for superior security. |
+| Service SAS | Shared Key | Request is denied for all Azure Storage services. |
+| Account SAS | Shared Key | Request is denied for all Azure Storage services. |
+
+Azure metrics and logging in Azure Monitor do not distinguish between different types of shared access signatures. The **SAS** filter in Azure Metrics Explorer and the **SAS** field in Azure Storage logging in Azure Monitor both report requests that are authorized with any type of SAS. However, different types of shared access signatures are authorized differently, and behave differently when Shared Key access is disallowed:
+
+- A service SAS token or an account SAS token is authorized with Shared Key and will not be permitted on a request to Blob storage when the **AllowSharedKeyAccess** property is set to **false**.
+- A user delegation SAS is authorized with Azure AD and will be permitted on a request to Blob storage when the **AllowSharedKeyAccess** property is set to **false**.
+
+When you are evaluating traffic to your storage account, keep in mind that metrics and logs as described in [Detect the type of authorization used by client applications](#detect-the-type-of-authorization-used-by-client-applications) may include requests made with a user delegation SAS.
+
+For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md).
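Because a user delegation SAS is authorized with Azure AD rather than with the account key, it continues to work after Shared Key access is disallowed. As a minimal sketch (the account and container names are placeholders, and the signed-in identity is assumed to hold an Azure RBAC role on the container that permits reading blob data and requesting a user delegation key, such as Storage Blob Data Contributor), you can create one with Azure PowerShell:

```azurepowershell-interactive
# authorize the storage context with the signed-in Azure AD account, not the account key
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# create a read-only user delegation SAS for a container, valid for one hour
New-AzStorageContainerSASToken -Name "<container>" -Permission r `
    -ExpiryTime (Get-Date).AddHours(1) -Context $ctx
```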
+
+### Consider compatibility with other Azure tools and services
+
+A number of Azure services use Shared Key authorization to communicate with Azure Storage. If you disallow Shared Key authorization for a storage account, these services will not be able to access data in that account, and your applications may be adversely affected.
+
+Some Azure tools offer the option to use Azure AD authorization to access Azure Storage. The following table lists some popular Azure tools and notes whether they can use Azure AD to authorize requests to Azure Storage.
+
+| Azure tool | Azure AD authorization to Azure Storage |
+|-|-|
+| Azure portal | Supported. For information about authorizing with your Azure AD account from the Azure portal, see [Choose how to authorize access to blob data in the Azure portal](../blobs/authorize-data-operations-portal.md). |
+| AzCopy | Supported for Blob storage. For information about authorizing AzCopy operations, see [Choose how you'll provide authorization credentials](storage-use-azcopy-v10.md#choose-how-youll-provide-authorization-credentials) in the AzCopy documentation. |
+| Azure Storage Explorer | Supported for Blob storage, Queue storage, Table storage and Azure Data Lake Storage Gen2. Azure AD access to File storage is not supported. Make sure to select the correct Azure AD tenant. For more information, see [Get started with Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) |
+| Azure PowerShell | Supported. For information about how to authorize PowerShell commands for blob or queue operations with Azure AD, see [Run PowerShell commands with Azure AD credentials to access blob data](../blobs/authorize-data-operations-powershell.md) or [Run PowerShell commands with Azure AD credentials to access queue data](../queues/authorize-data-operations-powershell.md). |
+| Azure CLI | Supported. For information about how to authorize Azure CLI commands with Azure AD for access to blob and queue data, see [Run Azure CLI commands with Azure AD credentials to access blob or queue data](../blobs/authorize-data-operations-cli.md). |
+| Azure IoT Hub | Supported. For more information, see [IoT Hub support for virtual networks](../../iot-hub/virtual-network-support.md). |
+| Azure Cloud Shell | Azure Cloud Shell is an integrated shell in the Azure portal. Azure Cloud Shell hosts files for persistence in an Azure file share in a storage account. These files will become inaccessible if Shared Key authorization is disallowed for that storage account. For more information, see [Connect your Microsoft Azure Files storage](../../cloud-shell/overview.md#connect-your-microsoft-azure-files-storage). <br /><br /> To run commands in Azure Cloud Shell to manage storage accounts for which Shared Key access is disallowed, first make sure that you have been granted the necessary permissions to these accounts via Azure RBAC. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md). |
+
+### Disallow Shared Key authorization to use Azure AD Conditional Access
+
+To protect an Azure Storage account with Azure AD [Conditional Access](../../active-directory/conditional-access/overview.md) policies, you must disallow Shared Key authorization for the storage account.
+
+### Transition Azure Files workloads
+
+Azure Storage supports Azure AD authorization for requests to Blob, Table, and Queue storage only. If you disallow authorization with Shared Key for a storage account, requests to Azure Files that use Shared Key authorization will fail. Because the Azure portal always uses Shared Key authorization to access file data, if you disallow authorization with Shared Key for the storage account, you will not be able to access Azure Files data in the Azure portal.
+
+Microsoft recommends that you either migrate any Azure Files data to a separate storage account before you disallow access to an account via Shared Key, or do not apply this setting to storage accounts that support Azure Files workloads.
+
+Disallowing Shared Key access for a storage account does not affect SMB connections to Azure Files.
+
+## Identify storage accounts that allow Shared Key access
+
+There are two ways to identify storage accounts that allow Shared Key access:
+
+- [Check the Shared Key access setting for multiple accounts](#check-the-shared-key-access-setting-for-multiple-accounts)
+- [Configure the Azure Policy for Shared Key access in audit mode](#configure-the-azure-policy-for-shared-key-access-in-audit-mode)
+
+### Check the Shared Key access setting for multiple accounts
+
+To check the Shared Key access setting across a set of storage accounts with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
+
+Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays the Shared Key access setting for each account:
+
+```kusto
+resources
+| where type =~ 'Microsoft.Storage/storageAccounts'
+| extend allowSharedKeyAccess = parse_json(properties).allowSharedKeyAccess
+| project subscriptionId, resourceGroup, name, allowSharedKeyAccess
+```
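If you prefer to run this check from a script, the following sketch uses the Az.ResourceGraph PowerShell module (an assumption; install it first with `Install-Module Az.ResourceGraph`) to issue the same query and return the same columns:

```azurepowershell-interactive
# run the Resource Graph query above from PowerShell (requires the Az.ResourceGraph module)
Search-AzGraph -Query @"
resources
| where type =~ 'Microsoft.Storage/storageAccounts'
| extend allowSharedKeyAccess = parse_json(properties).allowSharedKeyAccess
| project subscriptionId, resourceGroup, name, allowSharedKeyAccess
"@
```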
+
+### Configure the Azure Policy for Shared Key access in audit mode
+
+Azure Policy **Storage accounts should prevent shared key access** prevents users with appropriate permissions from configuring new or existing storage accounts to permit Shared Key authorization. Configure this policy in audit mode to identify storage accounts where Shared Key authorization is allowed. After you have changed applications to use Azure AD rather than Shared Key for authorization, you can [update the policy to prevent allowing Shared Key access](#update-the-azure-policy-to-prevent-allowing-shared-key-access).
+
+For more information about the built-in policy, see **Storage accounts should prevent shared key access** in [List of built-in policy definitions](../../governance/policy/samples/built-in-policies.md#storage).
+
+#### Assign the built-in policy for a resource scope
+
+Follow these steps to assign the built-in policy for the appropriate scope in the Azure portal:
+
+1. In the Azure portal, search for *Policy* to display the Azure Policy dashboard.
+1. In the **Authoring** section, select **Assignments**.
+1. Choose **Assign policy**.
+1. On the **Basics** tab of the **Assign policy** page, in the **Scope** section, specify the scope for the policy assignment. Select the **More** button (**...**) to choose the subscription and optional resource group.
+1. For the **Policy definition** field, select the **More** button (**...**), and enter *shared key access* in the **Search** field. Select the policy definition named **Storage accounts should prevent shared key access**.
+
+ :::image type="content" source="media/shared-key-authorization-prevent/policy-definition-select-portal.png" alt-text="Screenshot showing how to select the built-in policy to prevent allowing Shared Key access for your storage accounts" lightbox="media/shared-key-authorization-prevent/policy-definition-select-portal.png":::
+
+1. Select **Review + create**.
+
+1. On the **Review + create** tab, review the policy assignment then select **Create** to assign the policy definition to the specified scope.
+
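The assignment can also be scripted. The following Azure PowerShell sketch is one way to do it (the assignment name and subscription scope are placeholders, and it assumes the built-in definition exposes an `effect` parameter that accepts *Audit*):

```azurepowershell-interactive
# look up the built-in definition by its display name
$definition = Get-AzPolicyDefinition | Where-Object {
    $_.Properties.DisplayName -eq 'Storage accounts should prevent shared key access'
}

# assign the policy in audit mode at the subscription scope (placeholder values)
New-AzPolicyAssignment -Name 'audit-shared-key-access' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyParameterObject @{ effect = 'Audit' }
```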
+#### Monitor compliance with the policy
+
+To monitor your storage accounts for compliance with the Shared Key access policy, follow these steps:
+
+1. On the Azure Policy dashboard under **Authoring**, select **Assignments**.
+1. Locate and select the policy assignment you created in the previous section.
+1. Select the **View compliance** tab.
+1. Any storage accounts within the scope of the policy assignment that do not meet the policy requirements appear in the compliance report.
+
+ :::image type="content" source="media/shared-key-authorization-prevent/policy-compliance-report-portal.png" alt-text="Screenshot showing how to view the compliance report for the Shared Key access built-in policy." lightbox="media/shared-key-authorization-prevent/policy-compliance-report-portal.png":::
+
+To get more information about why a storage account is non-compliant, select **Details** under **Compliance reason**.
## Detect the type of authorization used by client applications
-When you disallow Shared Key authorization for a storage account, requests from clients that are using the account access keys for Shared Key authorization will fail. To understand how disallowing Shared Key authorization may affect client applications before you make this change, enable logging and metrics for the storage account. You can then analyze patterns of requests to your account over a period of time to determine how requests are being authorized.
+To understand how disallowing Shared Key authorization may affect client applications before you make this change, enable logging and metrics for the storage account. You can then analyze patterns of requests to your account over a period of time to determine how requests are being authorized.
Use metrics to determine how many requests the storage account is receiving that are authorized with Shared Key or a shared access signature (SAS). Use logs to determine which clients are sending those requests. A SAS may be authorized with either Shared Key or Azure AD. For more information about interpreting requests made with a shared access signature, see [Understand how disallowing Shared Key affects SAS tokens](#understand-how-disallowing-shared-key-affects-sas-tokens).
-### Monitor how many requests are authorized with Shared Key
+### Determine the number and frequency of requests authorized with Shared Key
To track how requests to a storage account are being authorized, use Azure Metrics Explorer in the Azure portal. For more information about Metrics Explorer, see [Getting started with Azure Metrics Explorer](../../azure-monitor/essentials/metrics-getting-started.md). Follow these steps to create a metric that tracks requests made with Shared Key or SAS: 1. Navigate to your storage account in the Azure portal. Under the **Monitoring** section, select **Metrics**.
-1. Select **Add metric**. In the **Metric** dialog, specify the following values:
+1. The new metric box should appear:
+
+ :::image type="content" source="media/shared-key-authorization-prevent/metric-new-metric-portal.png" alt-text="Screenshot showing the new metric dialog." lightbox="media/shared-key-authorization-prevent/metric-new-metric-portal.png":::
+
+ If it doesn't, select **Add metric**.
+
+1. In the **Metric** dialog, specify the following values:
1. Leave the **Scope** field set to the name of the storage account. 1. Set the **Metric Namespace** to *Account*. This metric will report on all requests against the storage account. 1. Set the **Metric** field to *Transactions*.
Follow these steps to create a metric that tracks requests made with Shared Key
The new metric will display the sum of the number of transactions against the storage account over a given interval of time. The resulting metric appears as shown in the following image:
- :::image type="content" source="media/shared-key-authorization-prevent/configure-metric-account-transactions.png" alt-text="Screenshot showing how to configure metric to sum transactions made with Shared Key or SAS":::
+ :::image type="content" source="media/shared-key-authorization-prevent/configure-metric-account-transactions.png" alt-text="Screenshot showing how to configure a metric to summarize transactions made with Shared Key or SAS." lightbox="media/shared-key-authorization-prevent/configure-metric-account-transactions.png":::
1. Next, select the **Add filter** button to create a filter on the metric for type of authorization. 1. In the **Filter** dialog, specify the following values:
Follow these steps to create a metric that tracks requests made with Shared Key
After you have configured the metric, requests to your storage account will begin to appear on the graph. The following image shows requests that were authorized with Shared Key or made with a SAS token. Requests are aggregated per day over the past thirty days. You can also configure an alert rule to notify you when a certain number of requests that are authorized with Shared Key are made against your storage account. For more information, see [Create, view, and manage metric alerts using Azure Monitor](../../azure-monitor/alerts/alerts-metric.md).
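If you'd rather pull the same numbers from a script, the following is a rough sketch with the Az.Monitor PowerShell module (the resource ID is a placeholder, and the *Authentication* dimension values shown are assumed to match the ones surfaced for Shared Key and SAS traffic):

```azurepowershell-interactive
# placeholder: full resource ID of the storage account
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"

# filter the Transactions metric to requests authorized with the account key or a SAS
$filter = New-AzMetricFilter -Dimension Authentication -Operator eq -Value "AccountKey","SAS"

# total transactions per hour that used Shared Key or SAS authorization
Get-AzMetric -ResourceId $resourceId -MetricName "Transactions" `
    -TimeGrain 01:00:00 -AggregationType Total -MetricFilter $filter
```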
To log Azure Storage data with Azure Monitor and analyze it with Azure Log Analy
1. Create a new Log Analytics workspace in the subscription that contains your Azure Storage account, or use an existing Log Analytics workspace. After you configure logging for your storage account, the logs will be available in the Log Analytics workspace. For more information, see [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md). 1. Navigate to your storage account in the Azure portal.
-1. In the Monitoring section, select **Diagnostic settings (preview)**.
+1. In the Monitoring section, select **Diagnostic settings**.
1. Select the Azure Storage service for which you want to log requests. For example, choose **Blob** to log requests to Blob storage. 1. Select **Add diagnostic setting**. 1. Provide a name for the diagnostic setting. 1. Under **Category details**, in the **log** section, choose **StorageRead**, **StorageWrite**, and **StorageDelete** to log all data requests to the selected service. 1. Under **Destination details**, select **Send to Log Analytics**. Select your subscription and the Log Analytics workspace you created earlier, as shown in the following image.
- :::image type="content" source="media/shared-key-authorization-prevent/create-diagnostic-setting-logs.png" alt-text="Screenshot showing how to create a diagnostic setting for logging requests":::
+ :::image type="content" source="media/shared-key-authorization-prevent/create-diagnostic-setting-logs.png" alt-text="Screenshot showing how to create a diagnostic setting for logging requests." lightbox="media/shared-key-authorization-prevent/create-diagnostic-setting-logs.png":::
You can create a diagnostic setting for each type of Azure Storage resource in your storage account. After you create the diagnostic setting, requests to the storage account are subsequently logged according to that setting. For more information, see [Create diagnostic setting to collect resource logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md).
-For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs (preview)](../blobs/monitor-blob-storage-reference.md#resource-logs-preview).
+For a reference of fields available in Azure Storage logs in Azure Monitor, see [Resource logs](../blobs/monitor-blob-storage-reference.md#resource-logs-preview).
#### Query logs for requests made with Shared Key or SAS
After you have analyzed how requests to your storage account are being authorize
When you are confident that you can safely reject requests that are authorized with Shared Key, you can set the **AllowSharedKeyAccess** property for the storage account to **false**.
-The **AllowSharedKeyAccess** property is not set by default and does not return a value until you explicitly set it. The storage account permits requests that are authorized with Shared Key when the property value is **null** or when it is **true**.
- > [!WARNING] > If any clients are currently accessing data in your storage account with Shared Key, then Microsoft recommends that you migrate those clients to Azure AD before disallowing Shared Key access to the storage account.
+### Permissions for allowing or disallowing Shared Key access
+
+To set the **AllowSharedKeyAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** or **Microsoft.Storage/storageAccounts/\*** action. Built-in roles with this action include:
+
+- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role
+- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role
+- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role
+
+These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However, they include the **Microsoft.Storage/storageAccounts/listkeys/action**, which grants access to the account access keys. With this permission, a user can use the account access keys to access all data in a storage account.
+
+Role assignments must be scoped to the level of the storage account or higher to permit a user to allow or disallow Shared Key access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
+
+Be careful to restrict assignment of these roles only to those who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
+
+> [!NOTE]
+> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage storage accounts. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
+
+### Disable Shared Key authorization
+
+Using an account that has the necessary permissions, disable Shared Key authorization by using the Azure portal, Azure PowerShell, or the Azure CLI.
+ # [Azure portal](#tab/portal) To disallow Shared Key authorization for a storage account in the Azure portal, follow these steps:
To disallow Shared Key authorization for a storage account in the Azure portal,
1. Locate the **Configuration** setting under **Settings**. 1. Set **Allow storage account key access** to **Disabled**.
- :::image type="content" source="media/shared-key-authorization-prevent/shared-key-access-portal.png" alt-text="Screenshot showing how to disallow Shared Key access for account":::
+ :::image type="content" source="media/shared-key-authorization-prevent/shared-key-access-portal.png" alt-text="Screenshot showing how to disallow Shared Key access for a storage account." lightbox="media/shared-key-authorization-prevent/shared-key-access-portal.png":::
# [PowerShell](#tab/azure-powershell)
az storage account update \
-After you disallow Shared Key authorization, making a request to the storage account with Shared Key authorization will fail with error code 403 (Forbidden). Azure Storage returns error indicating that key-based authorization is not permitted on the storage account.
+After you disallow Shared Key authorization, making a request to the storage account with Shared Key authorization will fail with error code 403 (Forbidden). Azure Storage returns an error indicating that key-based authorization is not permitted on the storage account.
The **AllowSharedKeyAccess** property is supported for storage accounts that use the Azure Resource Manager deployment model only. For information about which storage accounts use the Azure Resource Manager deployment model, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).
-### Verify that Shared Key access is not allowed
+## Verify that Shared Key access is not allowed
To verify that Shared Key authorization is no longer permitted, you can attempt to call a data operation with the account access key. The following example attempts to create a container using the access key. This call will fail when Shared Key authorization is disallowed for the storage account. Remember to replace the placeholder values in brackets with your own values:
az storage container create \
--auth-mode key ```
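A similar check with Azure PowerShell might look like the following sketch (the account name, key, and container name are placeholders; the container name is only illustrative). The call should fail with a 403 (Forbidden) error once Shared Key access is disallowed:

```azurepowershell-interactive
# build a context that explicitly uses the account access key (Shared Key)
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# attempt a data operation with Shared Key authorization; expected to be rejected
New-AzStorageContainer -Name "sample-container" -Context $ctx
```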
-### Check the Shared Key access setting for multiple accounts
-
-To check the Shared Key access setting across a set of storage accounts with optimal performance, you can use the Azure Resource Graph Explorer in the Azure portal. To learn more about using the Resource Graph Explorer, see [Quickstart: Run your first Resource Graph query using Azure Resource Graph Explorer](../../governance/resource-graph/first-query-portal.md).
-
-Running the following query in the Resource Graph Explorer returns a list of storage accounts and displays the Shared Key access setting for each account:
-
-```kusto
-resources
-| where type =~ 'Microsoft.Storage/storageAccounts'
-| extend allowSharedKeyAccess = parse_json(properties).allowSharedKeyAccess
-| project subscriptionId, resourceGroup, name, allowSharedKeyAccess
-```
-
-## Permissions for allowing or disallowing Shared Key access
-
-To set the **AllowSharedKeyAccess** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** or **Microsoft.Storage/storageAccounts/\*** action. Built-in roles with this action include:
--- The Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role-- The Azure Resource Manager [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role-- The [Storage Account Contributor](../../role-based-access-control/built-in-roles.md#storage-account-contributor) role-
-These roles do not provide access to data in a storage account via Azure Active Directory (Azure AD). However, they include the **Microsoft.Storage/storageAccounts/listkeys/action**, which grants access to the account access keys. With this permission, a user can use the account access keys to access all data in a storage account.
-
-Role assignments must be scoped to the level of the storage account or higher to permit a user to allow or disallow Shared Key access for the storage account. For more information about role scope, see [Understand scope for Azure RBAC](../../role-based-access-control/scope-overview.md).
-
-Be careful to restrict assignment of these roles only to those who require the ability to create a storage account or update its properties. Use the principle of least privilege to ensure that users have the fewest permissions that they need to accomplish their tasks. For more information about managing access with Azure RBAC, see [Best practices for Azure RBAC](../../role-based-access-control/best-practices.md).
- > [!NOTE]
-> The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage storage accounts. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
-
-## Understand how disallowing Shared Key affects SAS tokens
-
-When Shared Key access is disallowed for the storage account, Azure Storage handles SAS tokens based on the type of SAS and the service that is targeted by the request. The following table shows how each type of SAS is authorized and how Azure Storage will handle that SAS when the **AllowSharedKeyAccess** property for the storage account is **false**.
-
-| Type of SAS | Type of authorization | Behavior when AllowSharedKeyAccess is false |
-|-|-|-|
-| User delegation SAS (Blob storage only) | Azure AD | Request is permitted. Microsoft recommends using a user delegation SAS when possible for superior security. |
-| Service SAS | Shared Key | Request is denied for all Azure Storage services. |
-| Account SAS | Shared Key | Request is denied for all Azure Storage services. |
-
-Azure metrics and logging in Azure Monitor do not distinguish between different types of shared access signatures. The **SAS** filter in Azure Metrics Explorer and the **SAS** field in Azure Storage logging in Azure Monitor both report requests that are authorized with any type of SAS. However, different types of shared access signatures are authorized differently, and behave differently when Shared Key access is disallowed:
--- A service SAS token or an account SAS token is authorized with Shared Key and will not be permitted on a request to Blob storage when the **AllowSharedKeyAccess** property is set to **false**.-- A user delegation SAS is authorized with Azure AD and will be permitted on a request to Blob storage when the **AllowSharedKeyAccess** property is set to **false**.-
-When you are evaluating traffic to your storage account, keep in mind that metrics and logs as described in [Detect the type of authorization used by client applications](#detect-the-type-of-authorization-used-by-client-applications) may include requests made with a user delegation SAS.
+> Anonymous requests are not authorized and will proceed if you have configured the storage account and container for anonymous public read access. For more information, see [Configure anonymous public read access for containers and blobs](../blobs/anonymous-read-access-configure.md).
-For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](storage-sas-overview.md).
+## Monitor the Azure Policy for compliance
-## Consider compatibility with other Azure tools and services
+After disallowing Shared Key access on the desired storage accounts, continue to [monitor the policy you created earlier](#monitor-compliance-with-the-policy) for ongoing compliance. Based on the monitoring results, take the appropriate action as needed, including changing the scope of the policy, disallowing Shared Key access on more accounts, or allowing it for accounts where more time is needed for remediation.
-A number of Azure services use Shared Key authorization to communicate with Azure Storage. If you disallow Shared Key authorization for a storage account, these services will not be able to access data in that account, and your applications may be adversely affected.
+## Update the Azure Policy to prevent allowing Shared Key access
-Some Azure tools offer the option to use Azure AD authorization to access Azure Storage. The following table lists some popular Azure tools and notes whether they can use Azure AD to authorize requests to Azure Storage.
+To begin enforcing [the Azure Policy assignment you previously created](#configure-the-azure-policy-for-shared-key-access-in-audit-mode) for policy **Storage accounts should prevent shared key access**, change the **Effect** of the policy assignment to **Deny** to prevent authorized users from allowing Shared Key access on storage accounts. To change the effect of the policy, perform the following steps:
-| Azure tool | Azure AD authorization to Azure Storage |
-|-|-|
-| Azure portal | Supported. For information about authorizing with your Azure AD account from the Azure portal, see [Choose how to authorize access to blob data in the Azure portal](../blobs/authorize-data-operations-portal.md). |
-| AzCopy | Supported for Blob storage. For information about authorizing AzCopy operations, see [Choose how you'll provide authorization credentials](storage-use-azcopy-v10.md#choose-how-youll-provide-authorization-credentials) in the AzCopy documentation. |
-| Azure Storage Explorer | Supported for Blob storage, Queue storage, Table storage and Azure Data Lake Storage Gen2. Azure AD access to File storage is not supported. Make sure to select the correct Azure AD tenant. For more information, see [Get started with Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows#sign-in-to-azure) |
-| Azure PowerShell | Supported. For information about how to authorize PowerShell commands for blob or queue operations with Azure AD, see [Run PowerShell commands with Azure AD credentials to access blob data](../blobs/authorize-data-operations-powershell.md) or [Run PowerShell commands with Azure AD credentials to access queue data](../queues/authorize-data-operations-powershell.md). |
-| Azure CLI | Supported. For information about how to authorize Azure CLI commands with Azure AD for access to blob and queue data, see [Run Azure CLI commands with Azure AD credentials to access blob or queue data](../blobs/authorize-data-operations-cli.md). |
-| Azure IoT Hub | Supported. For more information, see [IoT Hub support for virtual networks](../../iot-hub/virtual-network-support.md). |
-| Azure Cloud Shell | Azure Cloud Shell is an integrated shell in the Azure portal. Azure Cloud Shell hosts files for persistence in an Azure file share in a storage account. These files will become inaccessible if Shared Key authorization is disallowed for that storage account. For more information, see [Connect your Microsoft Azure Files storage](../../cloud-shell/overview.md#connect-your-microsoft-azure-files-storage). <br /><br /> To run commands in Azure Cloud Shell to manage storage accounts for which Shared Key access is disallowed, first make sure that you have been granted the necessary permissions to these accounts via Azure RBAC. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md). |
-
-## Disallow Shared Key authorization to use Azure AD Conditional Access
-
-To protect an Azure Storage account with Azure AD [Conditional Access](../../active-directory/conditional-access/overview.md) policies, you must disallow Shared Key authorization for the storage account. Follow the steps described in [Detect the type of authorization used by client applications](#detect-the-type-of-authorization-used-by-client-applications) to analyze the potential impact of this change for existing storage accounts before disallowing Shared Key authorization.
+1. On the Azure Policy dashboard, locate and select the policy assignment [you previously created](#configure-the-azure-policy-for-shared-key-access-in-audit-mode).
-## Transition Azure Files and Table storage workloads
+1. Select **Edit assignment**.
+1. Go to the **Parameters** tab.
+1. Uncheck the **Only show parameters that need input or review** checkbox.
+1. In the **Effect** drop-down, change **Audit** to **Deny**, and then select **Review + save**.
+1. On the **Review + save** tab, review your changes, then select **Save**.
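If you prefer to script this change, the following Azure PowerShell sketch updates the existing assignment (the assignment name and scope are placeholders, and the `effect` parameter name is assumed to match the built-in definition):

```azurepowershell-interactive
# switch the existing assignment's effect from Audit to Deny (placeholder values)
Set-AzPolicyAssignment -Name 'audit-shared-key-access' `
    -Scope '/subscriptions/<subscription-id>' `
    -PolicyParameterObject @{ effect = 'Deny' }
```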
-Azure Storage supports Azure AD authorization for requests to Blob and Queue storage only. If you disallow authorization with Shared Key for a storage account, requests to Azure Files or Table storage that use Shared Key authorization will fail. Because the Azure portal always uses Shared Key authorization to access file and table data, if you disallow authorization with Shared Key for the storage account, you will not be able to access file or table data in the Azure portal.
-
-Microsoft recommends that you either migrate any Azure Files or Table storage data to a separate storage account before you disallow access to the account via Shared Key, or that you do not apply this setting to storage accounts that support Azure Files or Table storage workloads.
-
-Disallowing Shared Key access for a storage account does not affect SMB connections to Azure Files.
+> [!NOTE]
+> It might take up to 30 minutes for your policy change to take effect.
## Next steps
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Synapse workspaces use default storage containers for:
Identify the following information about your storage: - The ADLS Gen2 account to use for your workspace. This document calls it `storage1`. `storage1` is considered the "primary" storage account for your workspace.-- The container inside `workspace1` that your Synapse workspace will use by default. This document calls it `container1`.
+- The container inside `storage1` that your Synapse workspace will use by default. This document calls it `container1`.
- Select **Access control (IAM)**.
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
To learn more about how to manage workspace libraries, see the following article
- [Manage workspace packages](./apache-spark-manage-workspace-packages.md)
-> [!NOTE]
-> If you enabled [Data exfiltration protection](../security/workspace-data-exfiltration-protection.md), you should upload all your dependencies as workspace libraries.
- ## Pool packages In some cases, you might want to standardize the packages that are used on an Apache Spark pool. This standardization can be useful if the same packages are commonly installed by multiple people on your team.
To learn more about these capabilities, see [Manage Spark pool packages](./apach
> > - If the package you are installing is large or takes a long time to install, this fact affects the Spark instance start up time. > - Altering the PySpark, Python, Scala/Java, .NET, or Spark version is not supported.
-> - Installing packages from PyPI is not supported within DEP-enabled workspaces.
+
+### Manage dependencies for DEP-enabled Synapse Spark pools
+
+> [!NOTE]
+>
+> - Installing packages from a public repository is not supported within [DEP-enabled workspaces](../security/workspace-data-exfiltration-protection.md). Instead, upload all your dependencies as workspace libraries and install them on your Spark pool.
+>
+Follow the steps below if you have trouble identifying the required dependencies:
+
+- **Step 1: Run the following script to set up a local Python environment that matches the Synapse Spark environment**
+The setup script requires [Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml), which lists the libraries shipped in the default Python environment in Synapse Spark.
+
+ ```bash
+ # one-time Synapse Python environment setup
+ # download the package list for the default Synapse Spark Python environment
+ wget https://raw.githubusercontent.com/Azure-Samples/Synapse/main/Spark/Python/Synapse-Python38-CPU.yml
+ # install Miniforge (download Miniforge3-Linux-x86_64.sh from the conda-forge/miniforge releases first)
+ sudo bash Miniforge3-Linux-x86_64.sh -b -p /usr/lib/miniforge3
+ export PATH="/usr/lib/miniforge3/bin:$PATH"
+ sudo apt-get -yq install gcc g++
+ # recreate the default Synapse Spark Python environment locally and activate it
+ conda env create -n synapse-env -f Synapse-Python38-CPU.yml
+ source activate synapse-env
+ ```
+
+- **Step 2: Run the following script to identify the required dependencies**
+The snippet below takes a requirements file that lists all the packages and versions you intend to install in a Spark 3.1 or Spark 3.2 pool. It prints the names of the *new* wheel files and dependencies needed for your input library requirements; only dependencies that are not already present in the Spark pool by default are listed.
+
+ ```bash
+ # list the wheels needed for your input libraries;
+ # only *new* dependencies that are not already part of the
+ # built-in Synapse environment are reported
+ pip install -r <input-user-req.txt> > pip_output.txt
+ cat pip_output.txt | grep "Using cached *"
+ ```
+ ## Session-scoped packages
virtual-desktop Apply Windows License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/apply-windows-license.md
Title: Apply Windows license to session host virtual machines - Azure
description: Describes how to apply the Windows license for Azure Virtual Desktop VMs. Previously updated : 11/11/2022 Last updated : 11/14/2022
You can apply an Azure Virtual Desktop license to your VMs with the following me
- You can create a host pool and its session host virtual machines using the [GitHub Azure Resource Manager template](https://github.com/Azure/RDS-Templates/tree/master/ARM-wvd-templates). Creating VMs with this method automatically applies the license. - You can manually apply a license to an existing session host virtual machine. To apply the license this way, first follow the instructions in [Create a host pool with PowerShell or the Azure CLI](./create-host-pools-powershell.md) to create a host pool and associated VMs, then return to this article to learn how to apply the license.
-## Apply a Windows license to a session host VM
+## Apply a Windows license to a Windows client session host VM
+
+>[!NOTE]
+>The directions in this section apply to Windows client VMs, not Windows Server VMs.
Before you start, make sure you've [installed and configured the latest version of Azure PowerShell](/powershell/azure/).
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
You can create a virtual machine in multiple ways:
>[!NOTE] >If you're deploying a virtual machine using Windows 7 as the host OS, the creation and deployment process will be a little different. For more details, see [Deploy a Windows 7 virtual machine on Azure Virtual Desktop](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md).
-After you've created your session host virtual machines, [apply a Windows license to a session host VM](./apply-windows-license.md#apply-a-windows-license-to-a-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license.
+After you've created your session host virtual machines, [apply a Windows license to a session host VM](./apply-windows-license.md#apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license.
## Prepare the virtual machines for Azure Virtual Desktop agent installations
virtual-desktop Troubleshoot Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-monitor.md
Title: Troubleshoot Monitor Azure Virtual Desktop - Azure
description: How to troubleshoot issues with Azure Monitor for Azure Virtual Desktop. Previously updated : 08/12/2022 Last updated : 11/14/2022
This article presents known issues and solutions for common problems in Azure Monitor for Azure Virtual Desktop. >[!IMPORTANT]
->[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Monitor currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to Azure Monitor by August 31, 2024.
+>[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Monitor currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to Azure Monitor by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Monitor Agent. Until then, continue to use the Log Analytics Agent.
## Issues with configuration and setup
virtual-desktop Create Host Pools Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019.md
You can create a virtual machine in multiple ways:
> > Azure Virtual Desktop extended support for Windows 7 session host VMs ends on January 10, 2023. To see which operating systems are supported, review [Operating systems and licenses](../prerequisites.md#operating-systems-and-licenses).
-After you've created your session host virtual machines, [apply a Windows license to a session host VM](../apply-windows-license.md#apply-a-windows-license-to-a-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license.
+After you've created your session host virtual machines, [apply a Windows license to a session host VM](../apply-windows-license.md#apply-a-windows-license-to-a-windows-client-session-host-vm) to run your Windows or Windows Server virtual machines without paying for another license.
## Prepare the virtual machines for Azure Virtual Desktop agent installations
virtual-machines Disk Encryption Isolated Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-isolated-network.md
Any network security group settings that are applied must still allow the endpoi
## Azure Disk Encryption with Azure AD (previous version)
-If using [Azure Disk Encryption with Azure AD (previous version)](disk-encryption-overview-aad.md), the [Azure Active Directory Library](../../active-directory/azuread-dev/active-directory-authentication-libraries.md) will need to be installed manually for all distros (in addition to the packages appropriate for the distro, as [listed above](#package-management)).
+If using [Azure Disk Encryption with Azure AD (previous version)](disk-encryption-overview-aad.md), the [Microsoft Authentication Library](../../active-directory/develop/msal-overview.md) will need to be installed manually for all distros (in addition to the packages appropriate for the distro, as [listed above](#package-management)).
When encryption is being enabled with [Azure AD credentials](disk-encryption-linux-aad.md), the target VM must allow connectivity to both Azure Active Directory endpoints and Key Vault endpoints. Current Azure Active Directory authentication endpoints are maintained in sections 56 and 59 of the [Microsoft 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges) documentation. Key Vault instructions are provided in the documentation on how to [Access Azure Key Vault behind a firewall](../../key-vault/general/access-behind-firewall.md).
virtual-machines Maintenance Configurations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-powershell.md
Creating a Maintenance Configurations lets you decide when to apply platform updates to various Azure resources. This topic covers the Azure PowerShell options for Dedicated Hosts and Isolated VMs. For more about benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
-If you are looking for information about Maintenance Configurations for scale sets, see [Maintenance Control for virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+If you are looking for information about Maintenance Configurations for scale sets, see [Maintenance Control for Virtual Machine Scale Sets](virtual-machine-scale-sets-maintenance-control.md).
> [!IMPORTANT] > There are different **scopes** which support certain machine types and schedules, so please ensure you are selecting the right scope for your virtual machine.
-
+ ## Enable the PowerShell module
-Make sure `PowerShellGet` is up to date.
+Make sure `PowerShellGet` is up to date.
-```azurepowershell-interactive
-Install-Module -Name PowerShellGet -Repository PSGallery -Force
-```
+```azurepowershell-interactive
+Install-Module -Name PowerShellGet -Repository PSGallery -Force
+```
-Install the `Az.Maintenance` PowerShell module.
+Install the `Az.Maintenance` PowerShell module.
-```azurepowershell-interactive
+```azurepowershell-interactive
Install-Module -Name Az.Maintenance
-```
+```
+
+Check that you are running the latest version of the `Az.Maintenance` PowerShell module (version 1.2.0):
+
+```azurepowershell-interactive
+Get-Module -ListAvailable -Name Az.Maintenance
+```
+
+To ensure that you are running the appropriate version of `Az.Maintenance`, import it explicitly:
+
+```azurepowershell-interactive
+Import-Module -Name Az.Maintenance -RequiredVersion 1.2.0
+```
If you are installing locally, make sure you open your PowerShell prompt as an administrator. You may also be asked to confirm that you want to install from an *untrusted repository*. Type `Y` or select **Yes to All** to install the module. - ## Create a maintenance configuration Create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
New-AzResourceGroup `
-Name myMaintenanceRG ```
-Use [New-AzMaintenanceConfiguration](/powershell/module/az.maintenance/new-azmaintenanceconfiguration) to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
+You can declare a scheduled window when Azure will recurrently apply the updates on your resources. Once you create a scheduled window, you no longer have to apply the updates manually. Maintenance **recurrence** can be expressed as daily, weekly, or monthly. Some examples are:
+
+- **daily**- RecurEvery "Day" **or** "3Days"
+- **weekly**- RecurEvery "3Weeks" **or** "Week Saturday,Sunday"
+- **monthly**- RecurEvery "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
+
+### Host
+
+This example creates a maintenance configuration named *myConfig* scoped to **host** with a scheduled window of 5 hours on the fourth Monday of every month. It is important to note that the **duration** of the schedule for this scope should be at least two hours long. To begin, define all the parameters needed for `New-AzMaintenanceConfiguration`:
```azurepowershell-interactive
-$config = New-AzMaintenanceConfiguration `
- -ResourceGroup myMaintenanceRG `
- -Name myConfig `
- -MaintenanceScope host `
- -Location eastus
+$RGName = "myMaintenanceRG"
+$configName = "myConfig"
+$scope = "Host"
+$location = "eastus"
+$timeZone = "Pacific Standard Time"
+$duration = "05:00"
+$startDateTime = "2022-11-01 00:00"
+$recurEvery = "Month Fourth Monday"
```
-Using `-MaintenanceScope host` ensures that the maintenance configuration is used for controlling updates to the host.
+After you have defined the parameters, you can now use the `New-AzMaintenanceConfiguration` cmdlet to create your configuration.
-If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
+```azurepowershell-interactive
+New-AzMaintenanceConfiguration `
+ -ResourceGroup $RGName `
+ -Name $configName `
+ -MaintenanceScope $scope `
+ -Location $location `
+ -StartDateTime $startDateTime `
+ -TimeZone $timeZone `
+ -Duration $duration `
+ -RecurEvery $recurEvery
+```
+
+Using `$scope = "Host"` ensures that the maintenance configuration is used for controlling updates on host machines. Make sure you create a configuration for the specific scope of machines you are targeting. To read more about scopes, see [maintenance configuration scopes](maintenance-configurations.md#scopes).
-You can query for available maintenance configurations using [Get-AzMaintenanceConfiguration](/powershell/module/az.maintenance/get-azmaintenanceconfiguration).
+### OS Image
+
+In this example, we will create a maintenance configuration named *myConfig* scoped to **osimage** with a scheduled window of 8 hours every 5 days. It is important to note that the **duration** of the schedule for this scope should be at least 5 hours long. Another key difference to note is that this scope allows a maximum of 7 days for schedule recurrence.
```azurepowershell-interactive
-Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id
+$RGName = "myMaintenanceRG"
+$configName = "myConfig"
+$scope = "osimage"
+$location = "eastus"
+$timeZone = "Pacific Standard Time"
+$duration = "08:00"
+$startDateTime = "2022-11-01 00:00"
+$recurEvery = "5days"
```
-### Create a maintenance configuration with scheduled window
-
-You can also declare a scheduled window when Azure will apply the updates on your resources. This example creates a maintenance configuration named myConfig with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window you no longer have to apply the updates manually.
+After you have defined the parameters, you can now use the `New-AzMaintenanceConfiguration` cmdlet to create your configuration.
```azurepowershell-interactive
-$config = New-AzMaintenanceConfiguration `
+New-AzMaintenanceConfiguration `
-ResourceGroup $RGName `
- -Name $MaintenanceConfig `
- -MaintenanceScope Host `
+ -Name $configName `
+ -MaintenanceScope $scope `
-Location $location `
- -StartDateTime "2020-10-01 00:00" `
- -TimeZone "Pacific Standard Time" `
- -Duration "05:00" `
- -RecurEvery "Month Fourth Monday"
+ -StartDateTime $startDateTime `
+ -TimeZone $timeZone `
+ -Duration $duration `
+ -RecurEvery $recurEvery
+```
+
+### Guest
+
+Our most recent addition to the maintenance configuration offering is the **InGuestPatch** scope. This example shows how to create a maintenance configuration for the guest scope by using PowerShell. To learn more about this scope, see [Guest](maintenance-configurations.md#guest).
+
+```azurepowershell-interactive
+$RGName = "myMaintenanceRG"
+$configName = "myConfig"
+$scope = "InGuestPatch"
+$location = "eastus"
+$timeZone = "Pacific Standard Time"
+$duration = "04:00"
+$startDateTime = "2022-11-01 00:00"
+$recurEvery = "Week Saturday, Sunday"
+$WindowsParameterClassificationToInclude = "FeaturePack","ServicePack";
+$WindowParameterKbNumberToInclude = "KB123456","KB123466";
+$WindowParameterKbNumberToExclude = "KB123456","KB123466";
+$RebootOption = "IfRequired";
+$LinuxParameterClassificationToInclude = "Other";
+$LinuxParameterPackageNameMaskToInclude = "apt","httpd";
+$LinuxParameterPackageNameMaskToExclude = "ppt","userpk";
+ ```
-> [!IMPORTANT]
-> Maintenance **duration** must be *2 hours* or longer.
-Maintenance **recurrence** can be expressed as daily, weekly or monthly. Some examples are:
-
+After you have defined the parameters, you can now use the `New-AzMaintenanceConfiguration` cmdlet to create your configuration.
+
+```azurepowershell-interactive
+New-AzMaintenanceConfiguration `
+ -ResourceGroup $RGName `
+ -Name $configName `
+ -MaintenanceScope $scope `
+ -Location $location `
+ -StartDateTime $startDateTime `
+ -TimeZone $timeZone `
+ -Duration $duration `
+ -RecurEvery $recurEvery `
+ -WindowParameterClassificationToInclude $WindowsParameterClassificationToInclude `
+ -WindowParameterKbNumberToInclude $WindowParameterKbNumberToInclude `
+ -WindowParameterKbNumberToExclude $WindowParameterKbNumberToExclude `
+ -InstallPatchRebootSetting $RebootOption `
+ -LinuxParameterPackageNameMaskToInclude $LinuxParameterPackageNameMaskToInclude `
+ -LinuxParameterClassificationToInclude $LinuxParameterClassificationToInclude `
+ -LinuxParameterPackageNameMaskToExclude $LinuxParameterPackageNameMaskToExclude `
+ -ExtensionProperty @{"InGuestPatchMode"="User"}
+```
+
+If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
+
+You can check if you have successfully created the maintenance configurations by using [Get-AzMaintenanceConfiguration](/powershell/module/az.maintenance/get-azmaintenanceconfiguration).
+
+```azurepowershell-interactive
+Get-AzMaintenanceConfiguration | Format-Table -Property Name,Id
+```
## Assign the configuration
-Use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment) to assign the configuration to your isolated VM or Azure Dedicated Host.
+After you create your configuration, you can also assign machines to it by using PowerShell. To do so, use [New-AzConfigurationAssignment](/powershell/module/az.maintenance/new-azconfigurationassignment).
### Isolated VM
-Apply the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName`, and the resource group of the VM for `-ResourceGroupName`.
+Apply the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName`, and the resource group of the VM for `-ResourceGroupName`.
```azurepowershell-interactive New-AzConfigurationAssignment `
- -ResourceGroupName myResourceGroup `
- -Location eastus `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute `
- -ConfigurationAssignmentName $config.Name `
- -MaintenanceConfigurationId $config.Id
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myVM" `
+ -ResourceType "VirtualMachines" `
+ -ProviderName "Microsoft.Compute" `
+ -ConfigurationAssignmentName "configName" `
+ -MaintenanceConfigurationId "configID"
``` ### Dedicated host
-To apply a configuration to a dedicated host, you also need to include `-ResourceType hosts`, `-ResourceParentName` with the name of the host group, and `-ResourceParentType hostGroups`.
-
+To apply a configuration to a dedicated host, you also need to include `-ResourceType hosts`, `-ResourceParentName` with the name of the host group, and `-ResourceParentType hostGroups`.
```azurepowershell-interactive New-AzConfigurationAssignment `
- -ResourceGroupName myResourceGroup `
- -Location eastus `
- -ResourceName myHost `
- -ResourceType hosts `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myHost" `
+ -ResourceType "hosts" `
-ResourceParentName myHostGroup ` -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute `
- -ConfigurationAssignmentName $config.Name `
- -MaintenanceConfigurationId $config.Id
+ -ProviderName "Microsoft.Compute" `
+ -ConfigurationAssignmentName "configName" `
+ -MaintenanceConfigurationId "configID"
+```
+
+### Virtual Machine Scale Sets
+
+```azurepowershell-interactive
+New-AzConfigurationAssignment `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myVMSS" `
+ -ResourceType "VirtualMachineScaleSets" `
+ -ProviderName "Microsoft.Compute" `
+ -ConfigurationAssignmentName "configName" `
+ -MaintenanceConfigurationId "configID"
``` ## Check for pending updates
Check for pending updates for an isolated VM. In this example, the output is for
```azurepowershell-interactive Get-AzMaintenanceUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute | Format-Table
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myVM" `
+ -ResourceType "VirtualMachines" `
+ -ProviderName "Microsoft.Compute" | Format-Table
``` - ### Dedicated host
-To check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability. Replace the values for the resources with your own.
+Check for pending updates for a dedicated host. In this example, the output is formatted as a table for readability.
```azurepowershell-interactive Get-AzMaintenanceUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute | Format-Table
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myHost" `
+ -ResourceType "hosts" `
+ -ResourceParentName "myHostGroup" `
+ -ResourceParentType "hostGroups" `
+ -ProviderName "Microsoft.Compute" | Format-Table
```
+### Virtual Machine Scale Sets
+
+```azurepowershell-interactive
+Get-AzMaintenanceUpdate `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myVMSS" `
+ -ResourceType "VirtualMachineScaleSets" `
+ -ProviderName "Microsoft.Compute" | Format-Table
+```
## Apply updates
-Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates. Apply update calls can take upto 2 hours to complete.
+Use [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) to apply pending updates. Apply update calls can take up to 2 hours to complete. This cmdlet will only work for *Host* and *OSImage* scopes. It will NOT work for *Guest* scope.
### Isolated VM
Create a request to apply updates to an isolated VM.
```azurepowershell-interactive New-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myVM" `
+ -ResourceType "VirtualMachines" `
+ -ProviderName "Microsoft.Compute"
``` On success, this command will return a `PSApplyUpdate` object. You can use the Name attribute in the `Get-AzApplyUpdate` command to check the update status. See [Check update status](#check-update-status).
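For example, a minimal sketch (the resource names are placeholders) that captures the returned object and then passes its `Name` to `Get-AzApplyUpdate`:

```azurepowershell-interactive
# Capture the PSApplyUpdate object returned by New-AzApplyUpdate
$applyUpdate = New-AzApplyUpdate `
   -ResourceGroupName "myResourceGroup" `
   -ResourceName "myVM" `
   -ResourceType "VirtualMachines" `
   -ProviderName "Microsoft.Compute"

# Use its Name attribute to check the status of this specific update
Get-AzApplyUpdate `
   -ResourceGroupName "myResourceGroup" `
   -ResourceName "myVM" `
   -ResourceType "VirtualMachines" `
   -ProviderName "Microsoft.Compute" `
   -ApplyUpdateName $applyUpdate.Name
```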
Apply updates to a dedicated host.
```azurepowershell-interactive New-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myHost" `
+ -ResourceType "hosts" `
+ -ResourceParentName "myHostGroup" `
+ -ResourceParentType "hostGroups" `
-ProviderName Microsoft.Compute ```
+### Virtual Machine Scale Sets
+
+```azurepowershell-interactive
+New-AzApplyUpdate `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myVMSS" `
+ -ResourceType "VirtualMachineScaleSets" `
+ -ProviderName "Microsoft.Compute"
+```
+ ## Check update status
-Use [Get-AzApplyUpdate](/powershell/module/az.maintenance/get-azapplyupdate) to check on the status of an update. The commands shown below show the status of the latest update by using `default` for the `-ApplyUpdateName` parameter. You can substitute the name of the update (returned by the [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) command) to get the status of a specific update.
+
+Use [Get-AzApplyUpdate](/powershell/module/az.maintenance/get-azapplyupdate) to check on the status of an update. The following commands show the status of the latest update by using `default` for the `-ApplyUpdateName` parameter. You can substitute the name of the update (returned by the [New-AzApplyUpdate](/powershell/module/az.maintenance/new-azapplyupdate) command) to get the status of a specific update. This cmdlet will only work for *Host* and *OSImage* scopes. It will NOT work for *Guest* scope.
```text Status : Completed
ute/virtualMachines/DXT-test-04-iso/providers/Microsoft.Maintenance/applyUpdates
Name : default Type : Microsoft.Maintenance/applyUpdates ```+ LastUpdateTime is the time when the update was completed, whether it was initiated by you or by the platform when the self-maintenance window wasn't used. If an update has never been applied through maintenance configurations, the default value is shown. ### Isolated VM
Check for updates to a specific virtual machine.
```azurepowershell-interactive Get-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myVM `
- -ResourceType VirtualMachines `
- -ProviderName Microsoft.Compute `
- -ApplyUpdateName default
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myVM" `
+ -ResourceType "VirtualMachines" `
+ -ProviderName "Microsoft.Compute" `
+ -ApplyUpdateName "applyUpdateName"
``` ### Dedicated host
Check for updates to a dedicated host.
```azurepowershell-interactive Get-AzApplyUpdate `
- -ResourceGroupName myResourceGroup `
- -ResourceName myHost `
- -ResourceType hosts `
- -ResourceParentName myHostGroup `
- -ResourceParentType hostGroups `
- -ProviderName Microsoft.Compute `
- -ApplyUpdateName myUpdateName
+ -ResourceGroupName "myResourceGroup" `
+ -ResourceName "myHost" `
+ -ResourceType "hosts" `
+ -ResourceParentName "myHostGroup" `
+ -ResourceParentType "hostGroups" `
+ -ProviderName "Microsoft.Compute" `
+ -ApplyUpdateName "applyUpdateName"
```
-## Remove a maintenance configuration
+### Virtual Machine Scale Sets
+
+```azurepowershell-interactive
+Get-AzApplyUpdate `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myVMSS" `
+ -ResourceType "VirtualMachineScaleSets" `
+ -ProviderName "Microsoft.Compute" `
+ -ApplyUpdateName "applyUpdateName"
+```
+
+## Delete a maintenance configuration
Use [Remove-AzMaintenanceConfiguration](/powershell/module/az.maintenance/remove-azmaintenanceconfiguration) to delete a maintenance configuration. ```azurepowershell-interactive Remove-AzMaintenanceConfiguration `
- -ResourceGroupName myResourceGroup `
- -Name $config.Name
+ -ResourceGroupName "myResourceGroup" `
+ -Name "configName"
``` ## Next steps+ To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Move Region Maintenance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-region-maintenance-configuration.md
After moving the configurations, compare configurations and resources in the new
## Clean up source resources
-After the move, consider deleting the moved maintenance configurations in the source region, [PowerShell](../virtual-machines/maintenance-configurations-powershell.md#remove-a-maintenance-configuration), or [CLI](../virtual-machines/maintenance-configurations-cli.md#delete-a-maintenance-configuration).
+After the move, consider deleting the moved maintenance configurations in the source region by using [PowerShell](../virtual-machines/maintenance-configurations-powershell.md#delete-a-maintenance-configuration) or the [CLI](../virtual-machines/maintenance-configurations-cli.md#delete-a-maintenance-configuration).
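For example, a minimal PowerShell sketch, with hypothetical resource group and configuration names, that removes a configuration left behind in the source region:

```azurepowershell-interactive
# Replace these hypothetical names with the source-region resource group and configuration
Remove-AzMaintenanceConfiguration `
   -ResourceGroupName "mySourceResourceGroup" `
   -Name "mySourceConfigName"
```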
## Next steps
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
Previously updated : 09/13/2021 Last updated : 11/14/2022
This article shows you how to move a VM to a different [VM size](sizes.md).
-After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. This can happen if the new size is not available on the hardware cluster that is currently hosting the VM.
+After you create a virtual machine (VM), you can scale the VM up or down by changing the VM size. In some cases, you must deallocate the VM first. This can happen if the new size isn't available on the hardware cluster that is currently hosting the VM.
If your VM uses Premium Storage, make sure that you choose an **s** version of the size to get Premium Storage support. For example, choose Standard_E4**s**_v3 instead of Standard_E4_v3.
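As a quick way to check which **s** sizes a region offers, the following sketch uses `Get-AzVMSize`; the region and the name filter are examples only:

```azurepowershell-interactive
# List the E-series v3 sizes in East US and keep only the Premium Storage capable "s" variants
Get-AzVMSize -Location "eastus" |
    Where-Object { $_.Name -like "Standard_E*s_v3" } |
    Sort-Object Name |
    Format-Table Name, NumberOfCores, MemoryInMB
```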
If your VM uses Premium Storage, make sure that you choose an **s** version of t
1. Pick a new size from the list of available sizes and then select **Resize**.
-If the virtual machine is currently running, changing its size will cause it to be restarted.
+If the virtual machine is currently running, changing its size will cause it to restart.
If your VM is still running and you don't see the size you want in the list, stopping the virtual machine may reveal more sizes.
To resize a VM, you need the latest [Azure CLI](/cli/azure/install-az-cli2) inst
--size Standard_DS3_v2 ```
- The VM restarts during this process. After the restart, your existing OS and data disks are remapped. Anything on the temporary disk is lost.
+ The VM restarts during this process. After the restart, your existing OS and data disks are kept. Anything on the temporary disk is lost.
-3. If the desired VM size is not listed, you need to first deallocate the VM with [az vm deallocate](/cli/azure/vm). This process allows the VM to then be resized to any size available that the region supports and then started. The following steps deallocate, resize, and then start the VM named `myVM` in the resource group named `myResourceGroup`:
+3. If the desired VM size isn't listed, you need to first deallocate the VM with [az vm deallocate](/cli/azure/vm). This process allows the VM to then be resized to any size available that the region supports and then started. The following steps deallocate, resize, and then start the VM named `myVM` in the resource group named `myResourceGroup`:
```azurecli-interactive # Variables will make this easier. Replace the values with your own.
List the VM sizes that are available in the region where the VM is hosted.
Get-AzVMSize -ResourceGroupName $resourceGroup -VMName $vmName ```
-If the size you want is listed, run the following commands to resize the VM. If the desired size is not listed, go on to step 3.
+If the size you want is listed, run the following commands to resize the VM. If the desired size isn't listed, go on to step 3.
```azurepowershell-interactive $vm = Get-AzVM -ResourceGroupName $resourceGroup -VMName $vmName
$vm.HardwareProfile.VmSize = "<newVMsize>"
Update-AzVM -VM $vm -ResourceGroupName $resourceGroup ```
-If the size you want is not listed, run the following commands to deallocate the VM, resize it, and restart the VM. Replace **\<newVMsize>** with the size you want.
+If the size you want isn't listed, run the following commands to deallocate the VM, resize it, and restart the VM. Replace **\<newVMsize>** with the size you want.
```azurepowershell-interactive Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName
**Use PowerShell to resize a VM in an availability set**
-If the new size for a VM in an availability set is not available on the hardware cluster currently hosting the VM, then all VMs in the availability set will need to be deallocated to resize the VM. You also might need to update the size of other VMs in the availability set after one VM has been resized. To resize a VM in an availability set, perform the following steps.
+If the new size for a VM in an availability set isn't available on the hardware cluster currently hosting the VM, then all VMs in the availability set will need to be deallocated to resize the VM. You also might need to update the size of other VMs in the availability set after one VM has been resized. To resize a VM in an availability set, perform the following steps.
```azurepowershell-interactive $resourceGroup = "myResourceGroup"
Get-AzVMSize `
-VMName $vmName ```
-If the desired size is listed, run the following commands to resize the VM. If it is not listed, go to the next section.
+If the desired size is listed, run the following commands to resize the VM. If it isn't listed, go to the next section.
```azurepowershell-interactive $vm = Get-AzVM `
Update-AzVM `
-ResourceGroupName $resourceGroup ```
-If the size you want is not listed, continue with the following steps to deallocate all VMs in the availability set, resize VMs, and restart them.
+If the size you want isn't listed, continue with the following steps to deallocate all VMs in the availability set, resize VMs, and restart them.
Stop all VMs in the availability set.
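A minimal sketch of this step could look like the following; the resource group and availability set names are placeholders:

```azurepowershell-interactive
# Replace these hypothetical names with your own resource group and availability set
$resourceGroup = "myResourceGroup"
$availabilitySetName = "myAvailabilitySet"

# Deallocate every VM that the availability set references
$availabilitySet = Get-AzAvailabilitySet -ResourceGroupName $resourceGroup -Name $availabilitySetName
foreach ($vmId in $availabilitySet.VirtualMachinesReferences.Id) {
    $vmName = ($vmId -split "/")[-1]
    Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force
}
```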
The only combinations allowed for resizing are:
- VM (with local temp disk) -> VM (with local temp disk); and - VM (with no local temp disk) -> VM (with no local temp disk).
-If interested in a work around, please see [How do I migrate from a VM size with local temp disk to a VM size with no local temp disk?](azure-vms-no-temp-disk.yml#how-do-i-migrate-from-a-vm-size-with-local-temp-disk-to-a-vm-size-with-no-local-temp-disk)
+If you're interested in a workaround, see [How do I migrate from a VM size with local temp disk to a VM size with no local temp disk?](azure-vms-no-temp-disk.yml#how-do-i-migrate-from-a-vm-size-with-local-temp-disk-to-a-vm-size-with-no-local-temp-disk)
## Next steps
-For additional scalability, run multiple VM instances and scale out. For more information, see [Automatically scale machines in a Virtual Machine Scale Set](../virtual-machine-scale-sets/tutorial-autoscale-powershell.md).
+For more scalability, run multiple VM instances and scale out. For more information, see [Automatically scale machines in a Virtual Machine Scale Set](../virtual-machine-scale-sets/tutorial-autoscale-powershell.md).
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| Branch | main | | | S-Username | `<SAP Support user account name>` | | | S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. |
-| `tf_version` | 1.2.8 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) |
+| `tf_version` | 1.3.0 | The Terraform version to use. See [Terraform download](https://www.terraform.io/downloads). |
Save the variables.
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
The table below contains the parameters that define the resource group.
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | - |
-> | `resource_group_name` | Name of the resource group to be created | Optional |
-> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional |
-> | `resource_group_tags` | Tags to be associated to the resource group | Optional |
+> | `resourcegroup_name` | Name of the resource group to be created | Optional |
+> | `resourcegroup_arm_id` | Azure resource identifier for an existing resource group | Optional |
+> | `resourcegroup_tags` | Tags to be associated with the resource group | Optional |
## SAP Virtual Hostname parameters
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
vm-linux Previously updated : 06/26/2021 Last updated : 11/14/2022 # SAP Cloud Appliance Library
-[SAP Cloud Appliance Library](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) offers a quick and easy way to create SAP workloads in Azure. With a few clicks you can set up a fully configured demo environment from an Appliance Template or deploy a standardized system for an SAP product based on default or custom SAP software installation stacks.
+[SAP Cloud Appliance Library](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) offers a quick and easy way to create SAP workloads in Azure. You can set up a fully configured demo environment from an Appliance Template or deploy a standardized system for an SAP product based on default or custom SAP software installation stacks.
This page lists the latest Appliance Templates and, further below, the latest SAP S/4HANA stacks for production-ready deployments.
-For deployment of an appliance template you will need to authenticate with your S-User or P-User. You can create a P-User free of charge via the [SAP Community](https://community.sap.com/).
+To deploy an appliance template, you'll need to authenticate with your S-User or P-User. You can create a P-User free of charge via the [SAP Community](https://community.sap.com/).
-[For details on Azure account creation see the SAP learning video and description](https://www.youtube.com/watch?v=iORePziUMBk&list=PLWV533hWWvDmww3OX9YPhjjS1l1n6o-H2&index=18)
+[For details on Azure account creation, see the SAP learning video and description](https://www.youtube.com/watch?v=iORePziUMBk&list=PLWV533hWWvDmww3OX9YPhjjS1l1n6o-H2&index=18)
You'll also find detailed answers to your questions related to SAP Cloud Appliance Library on Azure in the [SAP CAL FAQ](https://caldocs.hana.ondemand.com/caldocs/help/Azure_FAQs.pdf)
-The online library is continuously updated with Appliances for demo, proof of concept and exploration of new business cases. For the most recent ones select ΓÇ£Create ApplianceΓÇ¥ here from the list ΓÇô or visit [cal.sap.com](https://cal.sap.com/catalog#/applianceTemplates) for further templates.
+The online library is continuously updated with Appliances for demo, proof of concept, and exploration of new business cases. For the most recent ones, select "Create Appliance" from the list here, or visit [cal.sap.com](https://cal.sap.com/catalog#/applianceTemplates) for further templates.
## Deployment of appliances through SAP Cloud Appliance Library
-| Appliance Templates | Link |
-| -- | : |
-| **SAP S/4HANA 2021 FPS02, Fully-Activated Appliance** July 19 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) |
-| **SAP S/4HANA 2021 FPS01, Fully-Activated Appliance** April 26 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) ||
-| **SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition** May 11 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured you can start directly implementing your scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) |
-| **SAP ABAP Platform 1909, Developer Edition** June 21 2021 | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements ΓÇô including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) |
-| **SAP ERP 6.0 EhP 6 for Data Migration to SAP S/4HANA** October 24 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=56825489-df3a-4b6d-999c-329a63ef5e8a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|update password of DDIC 100, SAP* 000 .This system can be used as source system for the "direct transfer" data migration scenarios of the SAP S/4HANA Fully-Activated Appliance. It might also be useful as an "open playground" for SP ERP 6.0 EhP6 scenarios, however, the contained business processes and data structures are not documented explicitly. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56825489-df3a-4b6d-999c-329a63ef5e8a) |
-| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|SAP NetWeaver 7.5 SP15 on SAP ASE | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) |
+| Appliance Template | Date | Description | Creation Link |
+| | - | -- | - |
+| [**SAP S/4HANA 2021 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | July 19 2022 | This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | April 26 2022 | This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Management (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) | May 11 2022 | This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options, the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured, you can start directly implementing your scenarios. | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/7bd4548f-a95b-4ee9-910a-08c74b4f6c37) | June 21 2021 | The SAP ABAP Platform on SAP HANA gives you access to SAP ABAP Platform 1909 Developer Edition on SAP HANA. This solution is pre-configured with many other elements ΓÇô including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and pre-configured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=7bd4548f-a95b-4ee9-910a-08c74b4f6c37&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP ERP 6.0 EhP 6 for Data Migration to SAP S/4HANA**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/56825489-df3a-4b6d-999c-329a63ef5e8a) | October 24 2022 | Update password of DDIC 100, SAP* 000. This system can be used as source system for the "direct transfer" data migration scenarios of the SAP S/4HANA *Fully-Activated Appliance*. It might also be useful as an "open playground" for SP ERP 6.0 EhP6 scenarios, however, the contained business processes and data structures aren't documented explicitly. | [Create Appliance](https://cal.sap.com/registration?sguid=56825489-df3a-4b6d-999c-329a63ef5e8a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 20 2020 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
## Deployment of S/4HANA system for productive usage through SAP Cloud Appliance Library
-You can now also deploy SAP S/4HANA systems with High Availability (HA), non-HA or single server architecture through SAP Cloud Appliance Library. The offering comprises default SAP S/4HANA software stacks including FPS levels as well as an integration into Maintenance Planner to enable creation and installation of custom SAP S/4HANA software stacks.
+You can now also deploy SAP S/4HANA systems with High Availability (HA), non-HA or single server architecture through SAP Cloud Appliance Library. The offering comprises default SAP S/4HANA software stacks including FPS levels and an integration into Maintenance Planner to enable creation and installation of custom SAP S/4HANA software stacks.
The following links highlight the Product stacks that you can quickly deploy on Azure. Just select "Deploy System". | All products | Link |
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
keywords: 'Azure, SQL Server, SAP, AlwaysOn, Always On'
Previously updated : 08/24/2022 Last updated : 11/14/2022
For Azure M-Series VM, the latency writing into the transaction log can be reduc
### Formatting the disks For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There's no need to format the D:\ drive. This drive comes pre-formatted.
-To make sure that the restore or creation of databases isn't initializing the data files by zeroing the content of the files, you should make sure that the user context the SQL Server service is running in has a certain permission. Usually users in the Windows Administrator group have these permissions. If the SQL Server service is run in the user context of non-Windows Administrator user, you need to assign that user the User Right **Perform volume maintenance tasks**. For more information, see [Database instant file initialization](/sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16).
+To make sure that the restore or creation of databases doesn't initialize the data files by zeroing out their content, verify that the user context the SQL Server service runs in has the required permission. Usually, users in the Windows Administrator group have this permission. If the SQL Server service runs in the user context of a non-Windows Administrator user, you need to assign that user the User Right **Perform volume maintenance tasks**. For more information, see [Database instant file initialization](/sql/relational-databases/databases/database-instant-file-initialization).
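To apply the 64 KB NTFS block size mentioned above, a minimal sketch run inside the Windows guest could look like this; the drive letter and volume label are placeholders:

```powershell
# Format an already attached and initialized data disk (here assumed to be drive F)
# with a 64 KB (65536 byte) NTFS allocation unit size
Format-Volume -DriveLetter F `
    -FileSystem NTFS `
    -AllocationUnitSize 65536 `
    -NewFileSystemLabel "SQLData" `
    -Confirm:$false
```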
### Influence of database compression In configurations where I/O bandwidth can become a limiting factor, every measure that reduces IOPS might help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done, applying SQL Server PAGE compression is recommended by both SAP and Microsoft before uploading an existing SAP database to Azure.
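As an illustration only (SAP systems typically drive compression through SAP tools such as report MSSCOMPRESS rather than direct T-SQL), a hypothetical sketch that applies PAGE compression to a single table through the SqlServer PowerShell module:

```powershell
# Requires the SqlServer PowerShell module; server, database, and table names are hypothetical
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "mySqlVm" -Database "PRD" -Query @"
ALTER TABLE [dbo].[MyLargeTable]
REBUILD WITH (DATA_COMPRESSION = PAGE);
"@
```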
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
The current IPv6 for Azure virtual network release has the following limitations
## Pricing
-IPv6 Azure resources and bandwidth are charged at the same rates as IPv4. There are no additional or different charges for IPv6. You can find details about pricing for [public IP addresses](https://azure.microsoft.com/pricing/details/ip-addresses/), [network bandwidth](https://azure.microsoft.com/pricing/details/bandwidth/), or [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
+There is no charge to use Public IPv6 Addresses or Public IPv6 Prefixes. Associated resources and bandwidth are charged at the same rates as IPv4. You can find details about pricing for [public IP addresses](https://azure.microsoft.com/pricing/details/ip-addresses/), [network bandwidth](https://azure.microsoft.com/pricing/details/bandwidth/), or [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/).
## Next steps
virtual-network Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/manage-nat-gateway.md
# Manage NAT gateway
-Learn how to create and remove a NAT gateway resource from a virtual network subnet. A NAT gateway enables outbound connectivity for resources in an Azure Virtual Network. You may wish to change the IP address or prefix your resources use for outbound connectivity to the internet. The public and public IP address prefixes associated with the NAT gateway can be changed after deployment.
+Learn how to create and remove a NAT gateway resource from a virtual network subnet. A NAT gateway enables outbound connectivity for resources in an Azure Virtual Network. You may wish to change the IP address or prefix your resources use for outbound connectivity to the internet. The public IP address and public IP address prefixes associated with the NAT gateway can be changed after deployment.
This article explains how to manage the following aspects of NAT gateway:
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
This quickstart shows you how to use the Azure Virtual Network NAT service. You'
Before you deploy the NAT gateway resource and the other resources, a resource group is required to contain the resources deployed. In the following steps, you'll create a resource group, NAT gateway resource, and a public IP address. You can use one or more public IP address resources, public IP prefixes, or both.
-For information more about public IP prefixes and a NAT gateway, see [Manage NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#add-or-remove-a-public-ip-prefix)
+For information about public IP prefixes and a NAT gateway, see [Manage NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway?tabs=manage-nat-portal#add-or-remove-a-public-ip-prefix).
1. Sign in to the [Azure portal](https://portal.azure.com).
web-application-firewall Ag Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/ag-overview.md
description: This article provides an overview of Web Application Firewall (WAF)
Previously updated : 05/06/2022 Last updated : 11/08/2022
The geomatch operator is now available for custom rules. See [geomatch custom ru
For more information on custom rules, see [Custom Rules for Application Gateway.](custom-waf-rules-overview.md)
-### Bot mitigation
+### Bot protection rule set
-A managed Bot protection rule set can be enabled for your WAF to block or log requests from known malicious IP addresses, alongside the managed ruleset. The IP addresses are sourced from the Microsoft Threat Intelligence feed. Intelligent Security Graph powers Microsoft threat intelligence and is used by multiple services including Microsoft Defender for Cloud.
+You can enable a managed bot protection rule set to take custom actions on requests from all bot categories.
+
+Three bot categories are supported:
+
+- **Bad**
+
+ Bad bots include bots from malicious IP addresses and bots that have falsified their identities. Bad bots with malicious IPs are sourced from the Microsoft Threat Intelligence feed's high confidence IP Indicators of Compromise.
+- **Good**
+
+ Good bots include validated search engines such as Googlebot, bingbot, and other trusted user agents.
+
+- **Unknown**
+
+ Unknown bots are classified via published user agents without additional validation. Examples include market analyzers, feed fetchers, and data collection agents. Unknown bots also include malicious IP addresses that are sourced from the Microsoft Threat Intelligence feed's medium confidence IP Indicators of Compromise.
+
+Bot signatures are managed and dynamically updated by the WAF platform.
++
+You may assign Microsoft_BotManagerRuleSet_1.0 by using the **Assign** option under **Managed Rulesets**:
++
+If Bot protection is enabled, incoming requests that match bot rules are blocked, allowed, or logged based on the configured action. By default, malicious bots are blocked, verified search engine crawlers are allowed, unknown search engine crawlers are blocked, and unknown bots are logged. You can set custom actions to block, allow, or log for different types of bots.
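A minimal Azure PowerShell sketch of assigning the rule set when creating a WAF policy might look like the following; the policy name, resource group, and location are placeholders, and the bot rule set is paired here with a core rule set such as OWASP 3.2:

```azurepowershell-interactive
# Build managed rule set objects for the bot manager rule set and the OWASP core rule set
$botRuleSet   = New-AzApplicationGatewayFirewallPolicyManagedRuleSet -RuleSetType "Microsoft_BotManagerRuleSet" -RuleSetVersion "1.0"
$owaspRuleSet = New-AzApplicationGatewayFirewallPolicyManagedRuleSet -RuleSetType "OWASP" -RuleSetVersion "3.2"
$managedRule  = New-AzApplicationGatewayFirewallPolicyManagedRule -ManagedRuleSet $owaspRuleSet, $botRuleSet

# Create a WAF policy that uses both rule sets (names and location are examples)
New-AzApplicationGatewayFirewallPolicy `
    -Name "myWafPolicy" `
    -ResourceGroupName "myResourceGroup" `
    -Location "eastus" `
    -ManagedRule $managedRule
```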
+
+You can access WAF logs from a storage account, an event hub, or Log Analytics, or you can send the logs to a partner solution.
-If Bot Protection is enabled, incoming requests that match Malicious Bot's client IPs are logged in the Firewall log, see more information below. You may access WAF logs from storage account, event hub, or log analytics.
### WAF modes
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
description: This page provides information on web application firewall CRS rule
Previously updated : 06/21/2022 Last updated : 11/08/2022
CRS 2.2.9 includes 10 rule groups, as shown in the following table. Each group c
|**[crs_42_tight_security](#crs42)**|Protect against path-traversal attacks| |**[crs_45_trojans](#crs45)**|Protect against backdoor trojans|
+### Bot rules
+
+You can enable a managed bot protection rule set to take custom actions on requests from all bot categories.
+
+|Rule group|Description|
+|||
+|**[BadBots](#bot100)**|Protect against bad bots|
+|**[GoodBots](#bot200)**|Identify good bots|
+|**[UnknownBots](#bot300)**|Identify unknown bots|
+ The following rule groups and rules are available when using Web Application Firewall on Application Gateway. # [OWASP 3.2](#tab/owasp32)
The following rule groups and rules are available when using Web Application Fir
|950921|Backdoor access| |950922|Backdoor access|
+# [Bot rules](#tab/bot)
+
+## <a name="bot"></a> Bot Manager rule sets
+
+### <a name="bot100"></a> Bad bots
+|RuleId|Description|
+|||
+|Bot100100|Malicious bots detected by threat intelligence|
+|Bot100200|Malicious bots that have falsified their identity|
+
+### <a name="bot200"></a> Good bots
+|RuleId|Description|
+|||
+|Bot200100|Search engine crawlers|
+|Bot200200|Unverified search engine crawlers|
+
+### <a name="bot300"></a> Unknown bots
+|RuleId|Description|
+|||
+|Bot300100|Unspecified identity|
+|Bot300200|Tools and frameworks for web crawling and attacks|
+|Bot300300|General purpose HTTP clients and SDKs|
+|Bot300400|Service agents|
+|Bot300500|Site health monitoring services|
+|Bot300600|Unknown bots detected by threat intelligence<br />(This rule also includes IP addresses matched to the Tor network.)|
+|Bot300700|Other bots|
+ ## Next steps