Updates from: 07/20/2021 03:07:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-api-connector.md
Title: Add API connectors to user flows (preview)
+ Title: Add API connectors to user flows
description: Configure an API connector to be used in a user flow.
zone_pivot_groups: b2c-policy-type
# Add an API connector to a sign-up user flow
-As a developer or IT administrator, you can use API connectors to integrate your sign-up user flows with REST APIs to customize the sign-up experience and integrate with external systems. At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with [REST API services](api-connectors-overview.md).
+As a developer or IT administrator, you can use API connectors to integrate your sign-up user flows with REST APIs to customize the sign-up experience and integrate with external systems. At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with [REST API services](api-connectors-overview.md) to modify your sign-up experiences.
::: zone pivot="b2c-user-flow"-
-In this scenario, the REST API validates whether email address' domain is fabrikam.com, or fabricam.com. The user-provided display name is greater than five characters. Then returns the job title with a static value.
-
-> [!IMPORTANT]
-> API connectors for sign-up is a public preview feature of Azure AD B2C. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
+You can create an API endpoint using one of our [samples](api-connector-samples.md#api-connector-rest-api-samples).
::: zone-end ::: zone pivot="b2c-custom-policy"
To use an [API connector](api-connectors-overview.md), you first create the API
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Under **Azure services**, select **Azure AD B2C**.
-4. Select **API connectors (Preview)**, and then select **New API connector**.
+4. Select **API connectors**, and then select **New API connector**.
- ![Add a new API connector](./media/add-api-connector/api-connector-new.png)
+ :::image type="content" source="media/add-api-connector/api-connector-new.png" alt-text="Providing the basic configuration like target URL and display name for an API connector during the creation experience.":::
5. Provide a display name for the call. For example, **Validate user information**. 6. Provide the **Endpoint URL** for the API call.
-7. Choose the **Authentication type** and configure the authentication information for calling your API. See the section below for options on securing your API.
+7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
- ![Configure an API connector](./media/add-api-connector/api-connector-config.png)
+ :::image type="content" source="media/add-api-connector/api-connector-config.png" alt-text="Providing authentication configuration for an API connector during the creation experience.":::
8. Select **Save**.
-## Securing the API endpoint
-You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication (preview). In either case, you provide the credentials that Azure AD B2C will use when calling your API endpoint. Your API endpoint then checks the credentials and performs authorization decisions.
-
-### HTTP basic authentication
-HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Azure AD B2C sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then checks these values to determine whether to reject an API call or not.
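As a sketch of what your API receives under this scheme (Python; the credential values are hypothetical), decoding the `Authorization` header looks like this:

```python
import base64

def parse_basic_auth(authorization_header):
    """Decode an HTTP basic `Authorization` header into (username, password)."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme != "Basic":
        raise ValueError("expected the Basic authentication scheme")
    # Azure AD B2C sends base64("username:password").
    decoded = base64.b64decode(encoded).decode("utf-8")
    username, _, password = decoded.partition(":")
    return username, password

header = "Basic " + base64.b64encode(b"b2c-client:s3cret").decode("ascii")
print(parse_basic_auth(header))
```

Your API would compare the decoded pair against the credentials you configured for the connector and reject the call otherwise.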
-
-### HTTPS client certificate authentication (preview)
-
-> [!IMPORTANT]
-> This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Client certificate authentication is a mutual certificate-based authentication method where the client provides a client certificate to the server to prove its identity. In this case, Azure AD B2C will use the certificate that you upload as part of the API connector configuration. This happens as a part of the TLS/SSL handshake. Your API service can then limit access to only services that have proper certificates. The client certificate is a PKCS12 (PFX) X.509 digital certificate. In production environments, it should be signed by a certificate authority.
-
-To create a certificate, you can use [Azure Key Vault](../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
-- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`
-- **Content Type**: `PKCS #12`
-- **Lifetime Action Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`
-- **Key Type**: `RSA`
-- **Key Size**: `2048`
-- **Exportable Private Key**: `Yes` (in order to be able to export the pfx file)
-You can then [export the certificate](../key-vault/certificates/how-to-export-certificate.md). You can alternatively use PowerShell's [New-SelfSignedCertificate cmdlet](../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
-
-After you have a certificate, you can then upload it as part of the API connector configuration. Note, the password is only required for certificate files protected by a password.
-
-Your API must implement the authorization based on sent client certificates in order to protect the API endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can also use Azure API Management to [check client certificate properties](
-../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values using policy expressions.
-
-It's recommended you set reminder alerts for when your certificate will expire. You will need to generate a new certificate and repeat the steps above. Your API service can temporarily continue to accept old and new certificates while the new certificate is deployed. To upload a new certificate to an existing API connector, select the API connector under **API connectors** and click on **Upload new certificate**. The most recently uploaded certificate, which is not expired and is past the start date will automatically be used by Azure Active Directory.
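The selection rule described above (the most recently uploaded certificate that is past its start date and not yet expired) can be sketched as follows; this is an illustration of the rule, not Azure AD B2C's implementation:

```python
from datetime import date

def active_certificate(certificates, today):
    """Pick the most recently uploaded certificate that is past its start
    date and not yet expired, mirroring the selection rule described above."""
    valid = [c for c in certificates if c["start"] <= today <= c["end"]]
    return max(valid, key=lambda c: c["uploaded"], default=None)

certificates = [
    {"name": "old", "start": date(2020, 1, 1), "end": date(2021, 1, 1), "uploaded": 1},
    {"name": "new", "start": date(2020, 12, 1), "end": date(2022, 1, 1), "uploaded": 2},
]
# During the overlap window, the newer certificate wins.
print(active_certificate(certificates, date(2020, 12, 15))["name"])
```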
-
-### API Key
-
-Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
-
-This is not a mechanism that should be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement the authorization in your API.
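A minimal sketch of checking the development-only `code` query parameter on the API side (Python; the key value is hypothetical):

```python
from urllib.parse import parse_qs, urlparse

EXPECTED_CODE = "0123456789"  # hypothetical development-only key

def has_valid_code(url):
    """Check the development-only `code` query parameter on the endpoint URL."""
    query = parse_qs(urlparse(url).query)
    return query.get("code", [None])[0] == EXPECTED_CODE

print(has_valid_code("https://contoso.azurewebsites.net/api/endpoint?code=0123456789"))
```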
- ## The request sent to your API An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.
Content-type: application/json
"country":"United States", "extension_<extensions-app-id>_CustomAttribute1": "custom attribute value", "extension_<extensions-app-id>_CustomAttribute2": "custom attribute value",
+ "step": "<step-name>",
+ "client_id":"93fd07aa-333c-409d-955d-96008fd08dd9",
"ui_locales":"en-US" } ```
Only user properties and custom attributes listed in the **Azure AD B2C** > **Us
Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).
-Additionally, the **UI Locales ('ui_locales')** claim is sent by default in all requests. It provides a user's locale(s) as configured on their device that can be used by the API to return internationalized responses.
-
+Additionally, the following claims are typically sent in all requests:
+- **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses.
+- **Step ('step')** - The step or point on the user flow that the API connector was invoked for. Values include:
+ - `postFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
+ - `postAttributeCollection` - corresponds to "Before creating the user"
+- **Client ID ('client_id')** - The `appId` value of the application that an end-user is authenticating to in a user flow. This is *not* the resource application's `appId` in access tokens.
+- **Email Address ('email')** or [**identities ('identities')**](/graph/api/resources/objectidentity) - these claims can be used by your API to identify the end-user that is authenticating to the application.
+
> [!IMPORTANT] > If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.
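A defensive read of the JSON body can be sketched as follows (Python; claim names as in the examples above):

```python
def get_claim(payload, name, default=None):
    """Read a claim defensively; Azure AD B2C omits claims that have no value."""
    value = payload.get(name)
    return default if value in (None, "") else value

request_body = {"email": "user@fabrikam.com", "ui_locales": "en-US"}
print(get_claim(request_body, "email"))
print(get_claim(request_body, "displayName", default="(not provided)"))
```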
-> [!TIP]
-> [**identities ('identities')**](/graph/api/resources/objectidentity) and the **Email Address ('email')** claims can be used by your API to identify a user before they have an account in your tenant.
- ## Enable the API connector in a user flow Follow these steps to add an API connector to a sign-up user flow.
Follow these steps to add an API connector to a sign-up user flow.
4. Select **User flows**, and then select the user flow you want to add the API connector to. 5. Select **API connectors**, and then select the API endpoints you want to invoke at the following steps in the user flow:
- - **After signing in with an identity provider**
+ - **After federating with an identity provider during sign-up**
- **Before creating the user**
- ![Add APIs to the user flow](./media/add-api-connector/api-connectors-user-flow-select.png)
+ :::image type="content" source="media/add-api-connector/api-connectors-user-flow-select.png" alt-text="Selecting which API connector to use for a step in the user flow like 'Before creating the user'.":::
6. Select **Save**.
-## After signing in with an identity provider
+## After federating with an identity provider during sign-up
An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account.
Content-type: application/json
"displayName": "John Smith", "givenName":"John", "lastName":"Smith",
+ "step": "postFederationSignup",
+ "client_id":"<guid>",
"ui_locales":"en-US" } ```
Content-type: application/json
"country":"United States", "extension_<extensions-app-id>_CustomAttribute1": "custom attribute value", "extension_<extensions-app-id>_CustomAttribute2": "custom attribute value",
+ "step": "postAttributeCollection",
+ "client_id":"93fd07aa-333c-409d-955d-96008fd08dd9",
"ui_locales":"en-US" } ```
-The claims that send to the API depend on the information is collected from the user or is provided by the identity provider.
+The claims that are sent to the API depend on the information that is collected from the user or provided by the identity provider.
### Expected response types from the web API at this step
A continuation response indicates that the user flow should continue to the next
In a continuation response, the API can return claims. If a claim is returned by the API, the claim does the following: -- Overrides any value that has already been assigned to the claim from the attribute collection page.
+- Overrides any value that has already been provided by a user in the attribute collection page.
+
+To write claims to the directory on sign-up that shouldn't be collected from the user, still select the claims under **User attributes** of the user flow. By default, this asks the user for values, but you can use [custom JavaScript or CSS](customize-ui-with-html.md) to hide the input fields from the end user.
See an example of a [continuation response](#example-of-a-continuation-response).
Content-type: application/json
| Parameter | Type | Required | Description | | -- | -- | -- | -- |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
-| \<builtInUserAttribute> | \<attribute-type> | No | Returned values can overwrite values collected from a user. They can also be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`. Returned values can overwrite values collected from a user. They can also be returned in the token if selected as an **Application claim**. |
+| \<builtInUserAttribute> | \<attribute-type> | No | Returned values can overwrite values collected from a user. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`; it is *optional*. Returned values can overwrite values collected from a user. |
### Example of a blocking response
Content-type: application/json
| Parameter | Type | Required | Description | | -- | | -- | -- |
-| version | String | Yes | The version of the API. |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `ShowBlockPage`. | | userMessage | String | Yes | Message to display to the user. | **End-user experience with a blocking response**
-![Example block page](./media/add-api-connector/blocking-page-response.png)
### Example of a validation-error response
Content-type: application/json
**End-user experience with a validation-error response**
-![Example validation page](./media/add-api-connector/validation-error-postal-code.png)
-
+ :::image type="content" source="media/add-api-connector/validation-error-postal-code.png" alt-text="An example image of what the end-user experience looks like after an API returns a validation-error response.":::
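Putting the response contracts above together, here is a sketch of an endpoint for the domain- and display-name-validation scenario (Python; the allowed domains, messages, and static `jobTitle` value are illustrative):

```python
ALLOWED_DOMAINS = {"fabrikam.com", "fabricam.com"}

def handle_before_creating_user(claims):
    """Return an API connector response body for the 'Before creating the user' step."""
    email = claims.get("email") or ""
    domain = email.rpartition("@")[2]
    if domain not in ALLOWED_DOMAINS:
        # Blocking response: stop the sign-up entirely.
        return {
            "version": "1.0.0",
            "action": "ShowBlockPage",
            "userMessage": "You must use a fabrikam.com or fabricam.com email address to sign up.",
        }
    if len(claims.get("displayName") or "") <= 5:
        # Validation-error response: ask the user to correct their input.
        return {
            "version": "1.0.0",
            "status": 400,
            "action": "ValidationError",
            "userMessage": "Please provide a display name longer than five characters.",
        }
    # Continuation response: proceed, overriding jobTitle with a static value.
    return {"version": "1.0.0", "action": "Continue", "jobTitle": "Contoso employee"}
```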
::: zone-end
To return the promo code claim back to the relying party application, add an out
## Best practices and how to troubleshoot + ### Using serverless cloud functions
-Serverless functions, like HTTP triggers in Azure Functions, provide a way create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](code-samples.md#api-connectors), perform validation logic and limit sign-ups to specific email domains. The serverless cloud function can also call and invoke other web APIs, user stores, and other cloud services for more complex scenarios.
+Serverless functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), provide a way to create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](api-connector-samples.md#api-connector-rest-api-samples), perform validation logic and limit sign-ups to specific email domains. The serverless cloud function can also call and invoke other web APIs, data stores, and other cloud services for complex scenarios.
### Best practices Ensure that: * Your API is following the API request and response contracts as outlined above. * The **Endpoint URL** of the API connector points to the correct API endpoint.
-* Your API explicitly checks for null values of received claims.
+* Your API explicitly checks for null values of received claims that it depends on.
+* Your API implements an authentication method outlined in [secure your API Connector](secure-rest-api.md).
* Your API responds as quickly as possible to ensure a fluid user experience.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm." in production. For Azure Functions, it's recommended to use the [Premium plan](../azure-functions/functions-scale.md)
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../azure-functions/functions-scale.md) in production.
+* Ensure high availability of your API.
+* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
+
+
+### Use logging
+
+In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.
+* Monitor for HTTP status codes that aren't HTTP 200 or 400.
+* A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector.
+* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed.
+* Monitor your API for long response times.
+++
+### Using serverless cloud functions
+
+Serverless cloud functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), provide a simple, highly available, and performant way to create API endpoints to use as API connectors.
+
+### Best practices
+Ensure that:
+* Your API explicitly checks for null values of received claims that it depends on.
+* Your API implements an authentication method outlined in [secure your API Connector](secure-rest-api.md).
+* Your API responds as quickly as possible to ensure a fluid user experience.
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../azure-functions/functions-scale.md)
+* Ensure high availability of your API.
+* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
+ ### Use logging
In general, it's helpful to use the logging tools enabled by your web API servic
* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed. * Monitor your API for long response times. + ## Next steps ::: zone pivot="b2c-user-flow" -- Get started with our [samples](code-samples.md#api-connectors).
+- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples).
- [Secure your API Connector](secure-rest-api.md) ::: zone-end
active-directory-b2c Api Connector Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connector-samples.md
+
+ Title: Samples of APIs for modifying your Azure AD B2C user flows | Microsoft Docs
+description: Code samples for modifying user flows with API connectors
+++++ Last updated : 07/16/2021++++++
+# API connector REST API samples
+
+The following tables provide links to code samples for using web APIs in your user flows using [API connectors](api-connectors-overview.md). These samples are primarily designed to be used with built-in user flows.
+
+## Azure Function quickstarts
+| Sample | Description |
+| - | |
+| [.NET Core](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate) | This .NET Core Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
+| [Node.js](https://github.com/Azure-Samples/active-directory-nodejs-external-identities-api-connector-azure-function-validate) | This Node.js Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
+| [Python](https://github.com/Azure-Samples/active-directory-python-external-identities-api-connector-azure-function-validate) | This Python Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
++
+## Automated fraud protection services & CAPTCHA
+| Sample | Description |
+| -- | |
+| [Arkose Labs fraud and abuse protection](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) | This sample shows how to protect your user sign-ups using the Arkose Labs fraud and abuse protection service. |
+| [reCAPTCHA](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-captcha) | This sample shows how to protect your user sign-ups using a reCAPTCHA challenge to prevent automated abuse. |
++
+## Identity verification
+
+| Sample | Description |
+| -- | |
+| [IDology](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-idology-identity-verification) | This sample shows how to verify a user identity as part of your sign-up flows with IDology's service. |
+| [Experian](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-experian-identity-verification) | This sample shows how to verify a user identity as part of your sign-up flows with Experian's service. |
++
+## Other
+
+| Sample | Description |
+| -- | |
+| [Invitation code](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-invitation-code) | This sample demonstrates how to limit sign-up to specific audiences by using invitation codes.|
+| [API connector community samples](https://github.com/azure-ad-b2c/api-connector-samples) | This repository has community maintained samples of scenarios enabled by API connectors.|
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/api-connectors-overview.md
zone_pivot_groups: b2c-policy-type
::: zone pivot="b2c-user-flow"
-> [!IMPORTANT]
-> API connectors for sign-up is a public preview feature of Azure AD B2C. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- ## Overview As a developer or IT administrator, you can use API connectors to integrate your sign-up user flows with REST APIs to customize the sign-up experience and integrate with external systems. For example, with API connectors, you can: - **Validate user input data**. Validate against malformed or invalid user data. For example, you can validate user-provided data against existing data in an external data store or list of permitted values. If invalid, you can ask a user to provide valid data or block the user from continuing the sign-up flow.
+- **Verify user identity**. Use an identity verification service to add an extra level of security to account creation decisions.
- **Integrate with a custom approval workflow**. Connect to a custom approval system for managing and limiting account creation. - **Overwrite user attributes**. Reformat or assign a value to an attribute collected from the user. For example, if a user enters the first name in all lowercase or all uppercase letters, you can format the name with only the first letter capitalized. -- **Verify user identity**. Use an identity verification service to add an extra level of security to account creation decisions. - **Run custom business logic**. You can trigger downstream events in your cloud systems to send push notifications, update corporate databases, manage permissions, audit databases, and perform other custom actions.
-An API connector provides Azure AD B2C with the information needed to call API endpoint by defining the HTTP endpoint URL and authentication for the API call. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to reenter information, or overwrite and append user attributes.
+An API connector provides Azure AD B2C with the information needed to call an API endpoint by defining the HTTP endpoint URL and authentication for the API call. Once you configure an API connector, you can enable it for a specific step in a user flow. When a user reaches that step in the sign-up flow, the API connector is invoked and materializes as an HTTP POST request to your API, sending user information ("claims") as key-value pairs in a JSON body. The API response can affect the execution of the user flow. For example, the API response can block a user from signing up, ask the user to reenter information, or overwrite user attributes.
## Where you can enable an API connector in a user flow There are two places in a user flow where you can enable an API connector: -- After signing in with an identity provider
+- After federating with an identity provider during sign-up
- Before creating the user > [!IMPORTANT] > In both of these cases, the API connectors are invoked during user **sign-up**, not sign-in.
-### After signing in with an identity provider
+### After federating with an identity provider during sign-up
An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account. The following are examples of API connector scenarios you might enable at this step:
If you reference a REST API technical profile directly from a user journey, the
::: zone-end
-## Security considerations
-
-You protect your REST API endpoint so that only authenticated clients can communicate with it. The REST API must use an HTTPS endpoint. Set the authentication type to one of the following authentication methods.
-
-### API Key
-
-API key is a unique identifier used to authenticate a user to access a REST API endpoint. For example, [Azure Functions HTTP trigger](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) includes the `code` as a query parameter in the endpoint URL.
--
-```http
-https://contoso.azurewebsites.net/api/endpoint?code=0123456789
-```
-
-API key authentication shouldn't be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement the authorization in your API.
---
-The API key can be sent a custom HTTP header. For example, the [Azure Functions HTTP trigger](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) uses the `x-functions-key` HTTP header to identify the requester.
-
+## Development of your REST API
-### Client certificate
-
-The client certificate authentication is a mutual certificate-based authentication method where the client provides a client certificate to the server to prove its identity. In this case, Azure AD B2C will use the certificate that you upload as part of the API connector configuration. This behavior happens as a part of the SSL handshake.
-
-Your API service can then limit access to only services that have proper certificates. The client certificate is a PKCS12 (PFX) X.509 digital certificate. In production environments, it should be signed by a certificate authority.
-
-### HTTP basic authentication
-
-The HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Azure AD B2C sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then checks these values to determine whether to reject an API call or not.
--
-### Bearer token
-
-Bearer token authentication is defined in [OAuth2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). In bearer token authentication, Azure AD B2C sends an HTTP request with a token in the authorization header.
-
-```http
-Authorization: Bearer <token>
-```
-
-A bearer token is an opaque string. It can be a JWT access token or any string that the REST API expects Azure AD B2C to send in the authorization header.
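Extracting the token on the API side can be sketched as follows (Python; the token value is hypothetical):

```python
def extract_bearer_token(headers):
    """Return the token from an `Authorization: Bearer <token>` header, or None."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    return token if scheme == "Bearer" and token else None

print(extract_bearer_token({"Authorization": "Bearer abc.def.ghi"}))
```

Your API would then validate the token (for example, verify a JWT's signature and claims) before honoring the request.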
-
-
-## REST API platform
-
-Your REST API can be based on any platform and written in any programing language, as long as it's secure and can send and receive claims in JSON format.
+Your REST API can be developed on any platform and written in any programming language, as long as it's secure and can send and receive claims in JSON format.
The request to your REST API service comes from Azure AD B2C servers. The REST API service must be published to a publicly accessible HTTPS endpoint. The REST API calls will arrive from an Azure data center IP address.
+You can use serverless cloud functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), for ease of development.
-Design your REST API service and its underlying components (such as the database and file system) to be highly available.
+You should design your REST API service and its underlying components (such as the database and file system) to be highly available.
## Next steps -- ::: zone pivot="b2c-user-flow" - Learn how to [add an API connector to a user flow](add-api-connector.md)-- Get started with our [samples](code-samples.md#api-connectors).
+- Learn how to [Secure your API Connector](secure-rest-api.md)
+- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples)
::: zone-end
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
# OAuth 2.0 authorization code flow in Azure Active Directory B2C
-You can use the OAuth 2.0 authorization code grant in apps installed on a device to gain access to protected resources, such as web APIs. By using the Azure Active Directory B2C (Azure AD B2C) implementation of OAuth 2.0, you can add sign-up, sign-in, and other identity management tasks to your single-page, mobile, and desktop apps. This article is language-independent. In the article, we describe how to send and receive HTTP messages without using any open-source libraries. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL).Take a look at the [sample apps that use MSAL](code-samples.md).
+You can use the OAuth 2.0 authorization code grant in apps installed on a device to gain access to protected resources, such as web APIs. By using the Azure Active Directory B2C (Azure AD B2C) implementation of OAuth 2.0, you can add sign-up, sign-in, and other identity management tasks to your single-page, mobile, and desktop apps. This article is language-independent. In the article, we describe how to send and receive HTTP messages without using any open-source libraries. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL). Take a look at the [sample apps that use MSAL](integrate-with-app-code-samples.md).
The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). You can use it for authentication and authorization in most [application types](application-types.md), including web applications, single-page applications, and natively installed applications. You can use the OAuth 2.0 authorization code flow to securely acquire access tokens and refresh tokens for your applications, which can be used to access resources that are secured by an [authorization server](protocols-overview.md). The refresh token allows the client to acquire new access (and refresh) tokens once the access token expires, typically after one hour.
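To make the first leg of the flow concrete, the following Python helper assembles a B2C authorization request URL in the language-independent style this article describes. The tenant, policy, and client values passed in are placeholders, not real endpoints, and this is only a sketch of the request construction, not a full client.

```python
from urllib.parse import urlencode

def build_authorize_url(tenant, policy, client_id, redirect_uri, scope, state):
    """Assemble a B2C authorization-code request URL (step one of the flow)."""
    base = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
            f"{policy}/oauth2/v2.0/authorize")
    params = {
        "client_id": client_id,
        "response_type": "code",   # ask the authorization server for a code
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": scope,
        "state": state,            # round-tripped so the app can detect CSRF
    }
    return f"{base}?{urlencode(params)}"
```

The user agent is sent to this URL; after sign-in, B2C redirects back to `redirect_uri` with `code` (and the echoed `state`) in the query string, which the app then exchanges at the token endpoint.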
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/best-practices.md
Define your application and service architecture, inventory current systems, and
| Architect an end-to-end solution | Include all of your applications' dependencies when planning an Azure AD B2C integration. Consider all services and products that are currently in your environment or that might need to be added to the solution, for example, Azure Functions, customer relationship management (CRM) systems, Azure API Management gateway, and storage services. Take into account the security and scalability for all services. | | Document your users' experiences | Detail all the user journeys your customers can experience in your application. Include every screen and any branching flows they might encounter when interacting with the identity and profile aspects of your application. Include usability, accessibility, and localization in your planning. | | Choose the right authentication protocol | For a breakdown of the different application scenarios and their recommended authentication flows, see [Scenarios and supported authentication flows](../active-directory/develop/authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). |
-| Pilot a proof-of-concept (POC) end-to-end user experience | Start with our [Microsoft code samples](code-samples.md) and [community samples](https://github.com/azure-ad-b2c/samples). |
+| Pilot a proof-of-concept (POC) end-to-end user experience | Start with our [Microsoft code samples](integrate-with-app-code-samples.md) and [community samples](https://github.com/azure-ad-b2c/samples). |
| Create a migration plan |Planning ahead can make migration go more smoothly. Learn more about [user migration](user-migration.md).| | Usability vs. security | Your solution must strike the right balance between application usability and your organization's acceptable level of risk. | | Move on-premises dependencies to the cloud | To help ensure a resilient solution, consider moving existing application dependencies to the cloud. |
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
window.parent.postMessage("signUp", '*');
## Configure a web application
-When a user selects the sign-in button, the [web app](code-samples.md#web-apps-and-apis) generates an authorization request that takes the user to Azure AD B2C sign-in experience. After sign-in is complete, Azure AD B2C returns an ID token, or authorization code, to the configured redirect URI within your application.
+When a user selects the sign-in button, the [web app](integrate-with-app-code-samples.md#web-apps-and-apis) generates an authorization request that takes the user to Azure AD B2C sign-in experience. After sign-in is complete, Azure AD B2C returns an ID token, or authorization code, to the configured redirect URI within your application.
To support embedded login, the iframe **src** property points to the sign-in controller, such as `/account/SignUpSignIn`, which generates the authorization request and redirects the user to Azure AD B2C policy.
See the following related articles:
- [User interface customization](customize-ui.md) - [RelyingParty](relyingparty.md) element reference - [Enable your policy for JavaScript](./javascript-and-page-layout.md)-- [Code samples](code-samples.md)
+- [Code samples](integrate-with-app-code-samples.md)
::: zone-end
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-apple-id.md
The following JSON is an example of a call to the Azure function:
```json { "appleTeamId": "ABC123DEFG",
- "appleKeyId": "URKEYID001",
"appleServiceId": "com.yourcompany.app1", "p8key": "MIGTAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBHkwdwIBAQQg+s07NiAcuGEu8rxsJBG7ttupF6FRe3bXdHxEipuyK82gCgYIKoZIzj0DAQehRANCAAQnR1W/KbbaihTQayXH3tuAXA8Aei7u7Ij5OdRy6clOgBeRBPy1miObKYVx3ki1msjjG2uGqRbrc1LvjLHINWRD" }
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/integrate-with-app-code-samples.md
+
+ Title: Azure Active Directory B2C integrate with app samples | Microsoft Docs
+description: Code samples for integrating Azure AD B2C to mobile, desktop, web, and single-page applications.
+ Last updated: 10/02/2020
+# Azure Active Directory B2C code samples
+
+The following tables provide links to samples for applications including iOS, Android, .NET, and Node.js.
+
+## Mobile and desktop apps
+
+| Sample | Description |
+|--| -- |
+| [ios-swift-native-msal](https://github.com/Azure-Samples/active-directory-b2c-ios-swift-native-msal) | An iOS sample in Swift that authenticates Azure AD B2C users and calls an API using OAuth 2.0 |
+| [android-native-msal](https://github.com/Azure-Samples/ms-identity-android-java#b2cmodefragment-class) | A simple Android app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
+| [ios-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-ios-native-appauth) | A sample that shows how you can use a third-party library to build an iOS application in Objective-C that authenticates Microsoft identity users to our Azure AD B2C identity service. |
+| [android-native-appauth](https://github.com/Azure-Samples/active-directory-b2c-android-native-appauth) | A sample that shows how you can use a third-party library to build an Android application that authenticates Microsoft identity users to our B2C identity service and calls a web API using OAuth 2.0 access tokens. |
+| [dotnet-desktop](https://github.com/Azure-Samples/active-directory-b2c-dotnet-desktop) | A sample that shows how a Windows Desktop .NET (WPF) application can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
+| [xamarin-native](https://github.com/Azure-Samples/active-directory-b2c-xamarin-native) | A simple Xamarin Forms app showcasing how to use MSAL to authenticate users via Azure Active Directory B2C, and access a Web API with the resulting tokens. |
+
+## Web apps and APIs
+
+| Sample | Description |
+|--| -- |
+| [dotnet-webapp-and-webapi](https://github.com/Azure-Samples/active-directory-b2c-dotnet-webapp-and-webapi) | A combined sample for a .NET web application that calls a .NET Web API, both secured using Azure AD B2C. |
+| [dotnetcore-webapp-openidconnect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C) | An ASP.NET Core web application that uses OpenID Connect to sign in users in Azure AD B2C. |
+| [dotnetcore-webapp-msal-api](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C) | An ASP.NET Core web application that can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
+| [openidconnect-nodejs](https://github.com/AzureADQuickStarts/B2C-WebApp-OpenIDConnect-NodeJS) | A Node.js app that provides a quick and easy way to set up a Web application with Express using OpenID Connect. |
+| [javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | A small node.js Web API for Azure AD B2C that shows how to protect your web api and accept B2C access tokens using passport.js. |
+| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/master/README_B2C.md) | Demonstrates how to integrate Azure AD B2C with a Python web application. |
+
+## Single page apps
+
+| Sample | Description |
+|--| -- |
+| [ms-identity-javascript-angular-tutorial](https://github.com/Azure-Samples/ms-identity-javascript-angular-tutorial/tree/main/3-Authorization-II/2-call-api-b2c) | An Angular single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL Angular. This sample uses the authorization code flow with PKCE. |
+| [ms-identity-javascript-react-tutorial](https://github.com/Azure-Samples/ms-identity-javascript-react-tutorial/tree/main/3-Authorization-II/2-call-api-b2c) | A React single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL React. This sample uses the authorization code flow with PKCE. |
+| [ms-identity-b2c-javascript-spa](https://github.com/Azure-Samples/ms-identity-b2c-javascript-spa) | A VanillaJS single page application (SPA) calling a web API. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE. |
+| [javascript-nodejs-management](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management/tree/main/Chapter1) | A VanillaJS single page application (SPA) calling Microsoft Graph to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL.js. This sample uses the authorization code flow with PKCE.|
+
+## Console/Daemon apps
+
+| Sample | Description |
+|--| -- |
+| [javascript-nodejs-management](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management/tree/main/Chapter2) | A Node.js and express console daemon application calling Microsoft Graph with its own identity to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL Node. This sample uses the authorization code flow.|
+| [dotnetcore-b2c-account-management](https://github.com/Azure-Samples/ms-identity-dotnetcore-b2c-account-management) | A .NET Core console application calling Microsoft Graph with its own identity to manage users in a B2C directory. Authentication is done with Azure AD B2C by using MSAL.NET. This sample uses the authorization code flow.|
+
+## SAML test application
+
+| Sample | Description |
+|--| -- |
+| [saml-sp-tester](https://github.com/azure-ad-b2c/saml-sp-tester/tree/master/source-code) | SAML test application to test Azure AD B2C configured to act as SAML identity provider. |
+
+## API connectors
+
+The following tables provide links to code samples for leveraging web APIs in your user flows using [API connectors](api-connectors-overview.md).
+
+### Azure Function quickstarts
+
+| Sample | Description |
+| - | |
+| [.NET Core](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-api-connector-azure-function-validate) | This .NET Core Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
+| [Node.js](https://github.com/Azure-Samples/active-directory-nodejs-external-identities-api-connector-azure-function-validate) | This Node.js Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
+| [Python](https://github.com/Azure-Samples/active-directory-python-external-identities-api-connector-azure-function-validate) | This Python Azure Function sample demonstrates how to limit sign-ups to specific email domains and validate user-provided information. |
++
+### Automated fraud protection services & CAPTCHA
+| Sample | Description |
+| -- | |
+| [Arkose Labs fraud and abuse protection](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) | This sample shows how to protect your user sign-ups using the Arkose Labs fraud and abuse protection service. |
+| [reCAPTCHA](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-captcha) | This sample shows how to protect your user sign-ups using a reCAPTCHA challenge to prevent automated abuse. |
++
+### Identity verification
+
+| Sample | Description |
+| -- | |
+| [IDology](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-idology-identity-verification) | This sample shows how to verify a user identity as part of your sign-up flows by using an API connector to integrate with IDology. |
+| [Experian](https://github.com/Azure-Samples/active-directory-dotnet-external-identities-experian-identity-verification) | This sample shows how to verify a user identity as part of your sign-up flows by using an API connector to integrate with Experian. |
++
+### Other
+
+| Sample | Description |
+| -- | |
+| [Invitation code](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-invitation-code) | This sample demonstrates how to limit sign up to specific audiences by using invitation codes.|
+| [API connector community samples](https://github.com/azure-ad-b2c/api-connector-samples) | This repository has community maintained samples of scenarios enabled by API connectors.|
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/secure-rest-api.md
Title: Secure a Restful service in your Azure AD B2C
+ Title: Secure APIs used as API connectors in Azure AD B2C
-description: Secure your custom REST API claims exchanges in your Azure AD B2C.
+description: Secure your custom RESTful APIs used as API connectors in Azure AD B2C.
zone_pivot_groups: b2c-policy-type
-# Secure your API Connector
-
+# Secure your API used as an API connector in Azure AD B2C
When integrating a REST API within an Azure AD B2C user flow, you must protect your REST API endpoint with authentication. The REST API authentication ensures that only services that have proper credentials, such as Azure AD B2C, can make calls to your endpoint. This article explores how to secure your REST API. + ## Prerequisites Complete the steps in the [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md) guide. +
+You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication. In either case, you provide the credentials that Azure AD B2C will use when calling your API endpoint. Your API endpoint then checks the credentials and performs authorization decisions.
++ ## HTTP basic authentication
-HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Basic authentication works as follows: Azure AD B2C sends an HTTP request with the client credentials in the Authorization header. The credentials are formatted as the base64-encoded string "name:password".
+HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Basic authentication works as follows: Azure AD B2C sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API is then responsible for checking these values and making authorization decisions.
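A minimal sketch of how your API might check that header, assuming the credentials are configured as described in this section (the names and values here are placeholders):

```python
import base64
import hmac

def check_basic_auth(authorization_header: str, expected_user: str, expected_password: str) -> bool:
    """Validate an RFC 2617 Basic Authorization header value."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme != "Basic" or not encoded:
        return False
    try:
        decoded = base64.b64decode(encoded).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    username, _, password = decoded.partition(":")
    # Constant-time comparisons to avoid leaking credential content via timing.
    return (hmac.compare_digest(username, expected_user)
            and hmac.compare_digest(password, expected_password))
```

In practice the expected username and password would come from your API's configuration (for example, app settings), never from source code.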
::: zone pivot="b2c-user-flow" To configure an API Connector with HTTP basic authentication, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Under **Azure services**, select **Azure AD B2C**.
-1. Select **API connectors (Preview)**, and then select the **API Connector** you want to configure.
-1. For the **Authentication type**, select **Basic**.
-1. Provide the **Username**, and **Password** of your REST API endpoint.
-1. Select **Save**.
+2. Under **Azure services**, select **Azure AD B2C**.
+3. Select **API connectors**, and then select the **API Connector** you want to configure.
+4. For the **Authentication type**, select **Basic**.
+5. Provide the **Username**, and **Password** of your REST API endpoint.
+ :::image type="content" source="media/add-api-connector/api-connector-config.png" alt-text="Providing basic authentication configuration for an API connector.":::
+6. Select **Save**.
::: zone-end
The following XML snippet is an example of a RESTful technical profile configure
## HTTPS client certificate authentication
-Client certificate authentication is a mutual certificate-based authentication, where the client, Azure AD B2C, provides its client certificate to the server to prove its identity. This happens as a part of the SSL handshake. Only services that have proper certificates, such as Azure AD B2C, can access your REST API service. The client certificate is an X.509 digital certificate. In production environments, it must be signed by a certificate authority.
+Client certificate authentication is a mutual certificate-based authentication, where the client, Azure AD B2C, provides its client certificate to the server to prove its identity. This happens as part of the SSL handshake. Your API is responsible for validating that the certificate belongs to a valid client, such as Azure AD B2C, and for making authorization decisions. The client certificate is an X.509 digital certificate.
+
+> [!IMPORTANT]
+> In production environments, the certificate must be signed by a certificate authority.
+
+### Create a certificate
-### Prepare a self-signed certificate (optional)
+#### Option 1: Use Azure Key Vault (recommended)
+
+To create a certificate, you can use [Azure Key Vault](../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
+- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`
+- **Content Type**: `PKCS #12`
+- **Lifetime Action Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`
+- **Key Type**: `RSA`
+- **Key Size**: `2048`
+- **Exportable Private Key**: `Yes` (so that you can export the `.pfx` file)
+
+You can then [export the certificate](../key-vault/certificates/how-to-export-certificate.md).
+
+#### Option 2: Prepare a self-signed certificate using a PowerShell module
[!INCLUDE [active-directory-b2c-create-self-signed-certificate](../../includes/active-directory-b2c-create-self-signed-certificate.md)]
Client certificate authentication is a mutual certificate-based authentication,
To configure an API Connector with client certificate authentication, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Under **Azure services**, select **Azure AD B2C**.
-1. Select **API connectors (Preview)**, and then select the **API Connector** you want to configure.
-1. For the **Authentication type**, select **Certificate**.
-1. In the **Upload certificate** box, select your certificate's .pfx file with a private key.
-1. In the **Enter Password** box, type the certificate's password.
-1. Select **Save**.
+2. Under **Azure services**, select **Azure AD B2C**.
+3. Select **API connectors**, and then select the **API Connector** you want to configure.
+4. For the **Authentication type**, select **Certificate**.
+5. In the **Upload certificate** box, select your certificate's .pfx file with a private key.
+6. In the **Enter Password** box, type the certificate's password.
+ :::image type="content" source="media/secure-api-connector/api-connector-upload-cert.png" alt-text="Providing certificate authentication configuration for an API connector.":::
+7. Select **Save**.
+
+### Perform authorization decisions
+Your API must implement authorization based on sent client certificates in order to protect the API endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can alternatively use Azure API Management as a layer in front of any API service to [check client certificate properties](../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values.
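As an illustrative sketch (not the App Service implementation itself), the following Python check pins the forwarded certificate's SHA-1 thumbprint against an allow-list. App Service forwards the client certificate to your code base64-encoded in the `X-ARR-ClientCert` request header; the thumbprint is the SHA-1 hash of the DER-encoded certificate bytes. A production check should also validate the certificate's validity period and issuer.

```python
import base64
import hashlib
import hmac

def is_authorized_client(x_arr_clientcert: str, allowed_thumbprints: set) -> bool:
    """Check a forwarded client certificate against known thumbprints.

    `x_arr_clientcert` is the base64-encoded DER certificate as forwarded by
    App Service; `allowed_thumbprints` holds uppercase hex SHA-1 thumbprints
    (as shown in the Azure portal). Thumbprint pinning only - validity period
    and chain checks are deliberately out of scope for this sketch.
    """
    try:
        der = base64.b64decode(x_arr_clientcert)
    except ValueError:
        return False
    thumbprint = hashlib.sha1(der).hexdigest().upper()
    return any(hmac.compare_digest(thumbprint, t) for t in allowed_thumbprints)
```

Accepting a *set* of thumbprints also makes certificate rollover straightforward: keep both the old and new thumbprints in the set while the new certificate is being deployed.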
+
+### Renewing certificates
+It's recommended that you set reminder alerts for when your certificate will expire. When a certificate is about to expire, generate a new one and repeat the steps above. To "roll" over to the new certificate, your API service can continue to accept both the old and new certificates for a temporary period while the new certificate is deployed.
+
+To upload a new certificate to an existing API connector, select the API connector under **API connectors** and select **Upload new certificate**. The most recently uploaded certificate that is not expired and whose start date has passed is automatically used by Azure AD B2C.
+
+ :::image type="content" source="media/secure-api-connector/api-connector-renew-cert.png" alt-text="Providing a new certificate to an API connector when one already exists.":::
::: zone-end
The following XML snippet is an example of a RESTful technical profile configure
</ClaimsProvider> ``` ++ ## API key authentication +
+Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development by requiring the caller to include a unique key as an HTTP header or HTTP query parameter. For [Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL** of your API connector. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>.
+
+This is not a mechanism that should be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can select 'basic' authentication in the API connector configuration and use temporary values for `username` and `password` that your API can disregard while you implement proper authorization.
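A hedged sketch of how a self-hosted API could mimic this pattern, accepting the key either as a `code` query parameter or an `x-functions-key` header. Note that Azure Functions validates its own function keys at the platform level; this example is only for an API you host and protect yourself, and the key values shown are placeholders.

```python
import hmac
from urllib.parse import parse_qs, urlsplit

def check_api_key(url: str, headers: dict, expected_key: str) -> bool:
    """Accept an API key either as a ?code= query parameter or an
    x-functions-key header, mirroring the Azure Functions key pattern."""
    supplied = headers.get("x-functions-key", "")
    if not supplied:
        # Fall back to the query-string form: ...?code=<key>
        query = parse_qs(urlsplit(url).query)
        supplied = (query.get("code") or [""])[0]
    # Constant-time comparison to avoid leaking the key via timing.
    return hmac.compare_digest(supplied, expected_key)
```

As the surrounding text notes, this check should supplement, never replace, basic or certificate authentication in production.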
+++ API key is a unique identifier used to authenticate a user to access a REST API endpoint. The key is sent in a custom HTTP header. For example, the [Azure Functions HTTP trigger](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) uses the `x-functions-key` HTTP header to identify the requester. ### Add API key policy keys
The following XML snippet is an example of a RESTful technical profile configure
</TechnicalProfiles> </ClaimsProvider> ``` ## Next steps -- Learn more about the [Restful technical profile](restful-technical-profile.md) element in the custom policy reference.
+- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples).
+- Learn more about the [Restful technical profile](restful-technical-profile.md) element in the custom policy reference.
active-directory-b2c Tutorial Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/tutorial-web-app-dotnet.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] > [!NOTE]
-> This tutorial uses an ASP.NET sample web application. For other sample applications (including ASP.NET Core, Node.js, Python, and more), see [Azure Active Directory B2C code samples](code-samples.md).
+> This tutorial uses an ASP.NET sample web application. For other sample applications (including ASP.NET Core, Node.js, Python, and more), see [Azure Active Directory B2C code samples](integrate-with-app-code-samples.md).
## Prerequisites
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article
### Updated articles -- [Azure Active Directory B2C code samples](code-samples.md)
+- [Azure Active Directory B2C code samples](integrate-with-app-code-samples.md)
- [Track user behavior in Azure AD B2C by using Application Insights](analytics-with-application-insights.md) - [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/manage-group-policy.md
# Administer Group Policy in an Azure Active Directory Domain Services managed domain
-Settings for user and computer objects in Azure Active Directory Domain Services (Azure AD DS) are often managed using Group Policy Objects (GPOs). Azure AD DS includes built-in GPOs for the *AADDC Users* and *AADDC Computers* containers. You can customize these built-in GPOs to configure Group Policy as needed for your environment. Members of the *Azure AD DC administrators* group have Group Policy administration privileges in the Azure AD DS domain, and can also create custom GPOs and organizational units (OUs). More more information on what Group Policy is and how it works, see [Group Policy overview][group-policy-overview].
+Settings for user and computer objects in Azure Active Directory Domain Services (Azure AD DS) are often managed using Group Policy Objects (GPOs). Azure AD DS includes built-in GPOs for the *AADDC Users* and *AADDC Computers* containers. You can customize these built-in GPOs to configure Group Policy as needed for your environment. Members of the *Azure AD DC administrators* group have Group Policy administration privileges in the Azure AD DS domain, and can also create custom GPOs and organizational units (OUs). For more information on what Group Policy is and how it works, see [Group Policy overview][group-policy-overview].
In a hybrid environment, group policies configured in an on-premises AD DS environment aren't synchronized to Azure AD DS. To define configuration settings for users or computers in Azure AD DS, edit one of the default GPOs or create a custom GPO.
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-authentication-passwordless.md
The following providers offer FIDO2 security keys of different form factors that
||:--:|::|::|::|:--:|--| | AuthenTrend | ![y] | ![y]| ![y]| ![y]| ![n] | https://authentrend.com/about-us/#pg-35-3 | | Ensurity | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.ensurity.com/contact |
-| Excelsecu | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
-| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://shop.ftsafe.us/pages/microsoft |
-| Gemalto (Thales Group) | ![n] | ![y]| ![y]| ![n]| ![n] | https://safenet.gemalto.com/access-management/authenticators/fido-devices |
+| Excelsecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.excelsecu.com/productdetail/esecufido2secu.html |
+| Feitian | ![y] | ![y]| ![y]| ![y]| ![n] | https://shop.ftsafe.us/pages/microsoft |
| GoTrustID Inc. | ![n] | ![y]| ![y]| ![y]| ![n] | https://www.gotrustid.com/idem-key | | HID | ![n] | ![y]| ![y]| ![n]| ![n] | https://www.hidglobal.com/contact-us | | Hypersecu | ![n] | ![y]| ![n]| ![n]| ![n] | https://www.hypersecu.com/hyperfido | | IDmelon Technologies Inc. | ![y] | ![y]| ![y]| ![y]| ![n] | https://www.idmelon.com/#idmelon | | Kensington | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.kensington.com/solutions/product-category/why-biometrics/ | | KONA I | ![y] | ![n]| ![y]| ![y]| ![n] | https://konai.com/business/security/fido |
-| Nymi | ![y] | ![n]| ![y]| ![y]| ![n] | https://www.nymi.com/product |
+| Nymi | ![y] | ![n]| ![y]| ![n]| ![n] | https://www.nymi.com/product |
| OneSpan Inc. | ![y] | ![n]| ![n]| ![y]| ![n] | https://www.onespan.com/products/fido |
+| Thales Group | ![n] | ![y]| ![y]| ![n]| ![n] | https://cpl.thalesgroup.com/access-management/authenticators/fido-devices |
| Token2 Switzerland | ![y] | ![y]| ![y]| ![n]| ![n] | https://www.token2.swiss/shop/product/token2-t2f2-alu-fido2-u2f-and-totp-security-key | | TrustKey Solutions | ![y] | ![y]| ![n]| ![n]| ![n] | https://www.trustkeysolutions.com/security-keys/ | | VinCSS | ![n] | ![y]| ![n]| ![n]| ![n] | https://passwordless.vincss.net |
active-directory How To Authentication Two Way Sms Unsupported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/how-to-authentication-two-way-sms-unsupported.md
Previously updated : 03/02/2021 Last updated : 07/19/2021
# Two-way SMS unsupported
-Two-way SMS for Azure AD Multi-Factor Authentication (MFA) Server was originally deprecated in 2018, and is no longer supported after February 24, 2021. Administrators should enable another method for users who still use two-way SMS.
+Two-way SMS for Azure AD Multi-Factor Authentication (MFA) Server was originally deprecated in 2018, and no longer supported after February 24, 2021, except for organizations that received a support extension until August 2, 2021. Administrators should enable another method for users who still use two-way SMS.
Email notifications and Azure portal Service Health notifications (portal toasts) were sent to affected admins on December 8, 2020 and January 28, 2021. The alerts went to the Owner, Co-Owner, Admin, and Service Admin RBAC roles tied to the subscriptions. If you've already completed the following steps, no action is necessary.
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-getstarted.md
Previously updated : 07/07/2021 Last updated : 07/19/2021
Risk policies include:
- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies) - [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
-### Convert users from per-user MFA to Conditional Access based MFA
-
-If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication the following PowerShell can assist you in making the conversion to Conditional Access based Azure AD Multi-Factor Authentication.
-
-Run this PowerShell in an ISE window or save as a `.PS1` file to run locally.
-
-```PowerShell
-# Sets the MFA requirement state
-function Set-MfaState {
-
- [CmdletBinding()]
- param(
- [Parameter(ValueFromPipelineByPropertyName=$True)]
- $ObjectId,
- [Parameter(ValueFromPipelineByPropertyName=$True)]
- $UserPrincipalName,
- [ValidateSet("Disabled","Enabled","Enforced")]
- $State
- )
-
- Process {
- Write-Verbose ("Setting MFA state for user '{0}' to '{1}'." -f $ObjectId, $State)
- $Requirements = @()
- if ($State -ne "Disabled") {
- $Requirement =
- [Microsoft.Online.Administration.StrongAuthenticationRequirement]::new()
- $Requirement.RelyingParty = "*"
- $Requirement.State = $State
- $Requirements += $Requirement
- }
-
- Set-MsolUser -ObjectId $ObjectId -UserPrincipalName $UserPrincipalName `
- -StrongAuthenticationRequirements $Requirements
- }
-}
-
-# Disable MFA for all users
-Get-MsolUser -All | Set-MfaState -State Disabled
-```
- ## Plan user session lifetime When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
Others might include:
- Citrix Gateway
- [Citrix Gateway](https://docs.citrix.com/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html#microsoft-azure-mfa-deployment-methods) supports both RADIUS and NPS extension integration, and a SAML integration.
+ [Citrix Gateway](https://docs.citrix.com/en-us/advanced-concepts/implementation-guides/citrix-gateway-microsoft-azure.html#microsoft-azure-mfa-deployment-methods) supports both RADIUS and NPS extension integration, and a SAML integration.
- Cisco VPN - The Cisco VPN supports both RADIUS and [SAML authentication for SSO](../saas-apps/cisco-anyconnect.md).
See [Troubleshooting Azure AD MFA](https://support.microsoft.com/help/2937344/tr
## Next steps [Deploy other identity features](../fundamentals/active-directory-deployment-plans.md)-
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userstates.md
Previously updated : 08/17/2020 Last updated : 07/19/2021
To change the per-user Azure AD Multi-Factor Authentication state for a user, co
After you enable users, notify them via email. Tell the users that a prompt is displayed to ask them to register the next time they sign in. Also, if your organization uses non-browser apps that don't support modern authentication, they need to create app passwords. For more information, see the [Azure AD Multi-Factor Authentication end-user guide](../user-help/multi-factor-authentication-end-user-first-time.md) to help them get started.
-## Change state using PowerShell
-
-To change the user state by using [Azure AD PowerShell](/powershell/azure/), you change the `$st.State` parameter for a user account. There are three possible states for a user account:
-
-* *Enabled*
-* *Enforced*
-* *Disabled*
-
-In general, don't move users directly to the *Enforced* state unless they are already registered for MFA. If you do so, legacy authentication apps stop working because the user hasn't gone through Azure AD Multi-Factor Authentication registration and obtained an [app password](howto-mfa-app-passwords.md). In some cases this behavior may be desired, but it impacts the user experience until the user registers.
-
-To get started, install the *MSOnline* module using [Install-Module](/powershell/module/powershellget/install-module) as follows:
-
-```PowerShell
-Install-Module MSOnline
-```
-
-Next, connect using [Connect-MsolService](/powershell/module/msonline/connect-msolservice):
-
-```PowerShell
-Connect-MsolService
-```
-
-The following example PowerShell script enables MFA for an individual user named *bsimon@contoso.com*:
-
-```PowerShell
-$st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
-$st.RelyingParty = "*"
-$st.State = "Enabled"
-$sta = @($st)
-
-# Change the following UserPrincipalName to the user you wish to change state
-Set-MsolUser -UserPrincipalName bsimon@contoso.com -StrongAuthenticationRequirements $sta
-```
-
-Using PowerShell is a good option when you need to bulk enable users. The following script loops through a list of users and enables MFA on their accounts. Define the set of user accounts in the first line for `$users` as follows:
-
- ```PowerShell
- # Define your list of users to update state in bulk
- $users = "bsimon@contoso.com","jsmith@contoso.com","ljacobson@contoso.com"
-
- foreach ($user in $users)
- {
- $st = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
- $st.RelyingParty = "*"
- $st.State = "Enabled"
- $sta = @($st)
- Set-MsolUser -UserPrincipalName $user -StrongAuthenticationRequirements $sta
- }
- ```
-
-To disable MFA, the following example gets a user with [Get-MsolUser](/powershell/module/msonline/get-msoluser), then removes any *StrongAuthenticationRequirements* set for the defined user using [Set-MsolUser](/powershell/module/msonline/set-msoluser):
-
-```PowerShell
-Get-MsolUser -UserPrincipalName bsimon@contoso.com | Set-MsolUser -StrongAuthenticationRequirements @()
-```
-
-You could also directly disable MFA for a user using [Set-MsolUser](/powershell/module/msonline/set-msoluser) as follows:
-
-```PowerShell
-Set-MsolUser -UserPrincipalName bsimon@contoso.com -StrongAuthenticationRequirements @()
-```
-
-## Convert users from per-user MFA to Conditional Access
-
-The following PowerShell can assist you in making the conversion to Conditional Access based Azure AD Multi-Factor Authentication.
-
-```PowerShell
-# Sets the MFA requirement state
-function Set-MfaState {
-
- [CmdletBinding()]
- param(
- [Parameter(ValueFromPipelineByPropertyName=$True)]
- $ObjectId,
- [Parameter(ValueFromPipelineByPropertyName=$True)]
- $UserPrincipalName,
- [ValidateSet("Disabled","Enabled","Enforced")]
- $State
- )
-
- Process {
- Write-Verbose ("Setting MFA state for user '{0}' to '{1}'." -f $ObjectId, $State)
- $Requirements = @()
- if ($State -ne "Disabled") {
- $Requirement =
- [Microsoft.Online.Administration.StrongAuthenticationRequirement]::new()
- $Requirement.RelyingParty = "*"
- $Requirement.State = $State
- $Requirements += $Requirement
- }
-
- Set-MsolUser -ObjectId $ObjectId -UserPrincipalName $UserPrincipalName `
- -StrongAuthenticationRequirements $Requirements
- }
-}
-
-# Disable MFA for all users
-Get-MsolUser -All | Set-MfaState -State Disabled
-```
-
-> [!NOTE]
-> If MFA is re-enabled on a user and the user doesn't re-register, their MFA state doesn't transition from *Enabled* to *Enforced* in MFA management UI. In this case, the administrator must move the user directly to *Enforced*.
- ## Next steps To configure Azure AD Multi-Factor Authentication settings, see [Configure Azure AD Multi-Factor Authentication settings](howto-mfa-mfasettings.md).
active-directory Howto Password Smart Lockout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-smart-lockout.md
When the smart lockout threshold is triggered, you will get the following messag
*Your account is temporarily locked to prevent unauthorized use. Try again later, and if you still have trouble, contact your admin.*
-When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has (*threshold_limit * datacenter_count*) number of bad attempts if the user hits each datacenter before a lockout occurs. Additionally, due to each datacenter tracking lockout independently, a user can be locked out of one datacenter, but not another.
+When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more than your defined lockout threshold number of attempts to cause a lockout. A user has a maximum of (*threshold_limit * datacenter_count*) bad attempts before being completely locked out.
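The worst-case bound described above can be sketched as a quick calculation (Python is used here only for illustration; the datacenter count below is a made-up example value, not something Azure AD exposes):

```python
# Illustrative only: worst-case number of failed sign-in attempts before a
# user is locked out everywhere, given that each Azure AD datacenter tracks
# lockout independently.
def max_bad_attempts(threshold_limit: int, datacenter_count: int) -> int:
    """Worst case: the user exhausts the lockout threshold at every datacenter."""
    return threshold_limit * datacenter_count

# With a lockout threshold of 10 and, say, 3 datacenters handling requests,
# a lockout everywhere could take up to 30 bad attempts.
print(max_bad_attempts(10, 3))  # 30
```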
## Next steps
active-directory Howto Get Appsource Certified https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/azuread-dev/howto-get-appsource-certified.md
A *multi-tenant application* is an application that accepts sign-ins from users
To enable multi-tenancy on your application, follow these steps: 1. Set `Multi-Tenanted` property to `Yes` on your application registration's information in the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps). By default, applications created in the Azure portal are configured as *[single-tenant](#single-tenant-applications)*. 1. Update your code to send requests to the `common` endpoint. To do this, update the endpoint from `https://login.microsoftonline.com/{yourtenant}` to `https://login.microsoftonline.com/common`.
-1. For some platforms, like ASP .NET, you need also to update your code to accept multiple issuers.
+1. For some platforms, like ASP.NET, you need also to update your code to accept multiple issuers.
For more information about multi-tenancy, see [How to sign in any Azure Active Directory (Azure AD) user using the multi-tenant application pattern](../develop/howto-convert-app-to-be-multi-tenant.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
For more information about multi-tenancy, see [How to sign in any Azure Active D
A *single-tenant application* is an application that only accepts sign-ins from users of a defined Azure AD instance. External users (including work or school accounts from other organizations, or personal accounts) can sign in to a single-tenant application after adding each user as a guest account to the Azure AD instance that the application is registered.
-You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](../../active-directory-b2c/code-samples.md). When using B2B, users can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
+You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](../../active-directory-b2c/integrate-with-app-code-samples.md). When using B2B, you can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
Single-tenant applications can enable the *Contact Me* experience, but if you want to enable the single-click/free trial experience that AppSource recommends, enable multi-tenancy on your application instead.
active-directory Concept Conditional Access Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-session.md
Conditional Access App Control enables user app access and sessions to be monito
- Prevent data exfiltration: You can block the download, cut, copy, and print of sensitive documents on, for example, unmanaged devices. - Protect on download: Instead of blocking the download of sensitive documents, you can require documents to be labeled and protected with Azure Information Protection. This action ensures the document is protected and user access is restricted in a potentially risky session. - Prevent upload of unlabeled files: Before a sensitive file is uploaded, distributed, and used by others, itΓÇÖs important to make sure that the file has the right label and protection. You can ensure that unlabeled files with sensitive content are blocked from being uploaded until the user classifies the content.-- Monitor user sessions for compliance: Risky users are monitored when they sign into apps and their actions are logged from within the session. You can investigate and analyze user behavior to understand where, and under what conditions, session policies should be applied in the future.-- Block access: You can granularly block access for specific apps and users depending on several risk factors. For example, you can block them if they are using client certificates as a form of device management.
+- Monitor user sessions for compliance (Preview): Risky users are monitored when they sign into apps and their actions are logged from within the session. You can investigate and analyze user behavior to understand where, and under what conditions, session policies should be applied in the future.
+- Block access (Preview): You can granularly block access for specific apps and users depending on several risk factors. For example, you can block them if they are using client certificates as a form of device management.
- Block custom activities: Some apps have unique scenarios that carry risk, for example, sending messages with sensitive content in apps like Microsoft Teams or Slack. In these kinds of scenarios, you can scan messages for sensitive content and block them in real time. For more information, see the article [Deploy Conditional Access App Control for featured apps](/cloud-app-security/proxy-deployment-aad).
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/access-tokens.md
Some claims are used to help Azure AD secure tokens in case of reuse. These are
|--|--|-| | `typ` | String - always "JWT" | Indicates that the token is a JWT.| | `alg` | String | Indicates the algorithm that was used to sign the token, for example, "RS256" |
-| `kid` | String | Specifies the thumbprint for the public key that's used to sign this token. Emitted in both v1.0 and v2.0 access tokens. |
+| `kid` | String | Specifies the thumbprint for the public key that can be used to validate this token's signature. Emitted in both v1.0 and v2.0 access tokens. |
| `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 access tokens for compatibility purposes. | ### Payload claims
https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration
This metadata document: * Is a JSON object containing several useful pieces of information, such as the location of the various endpoints required for doing OpenID Connect authentication.
-* Includes a `jwks_uri`, which gives the location of the set of public keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. The JWK format is described in [RFC 7517](https://tools.ietf.org/html/rfc7517). Your app can use the `kid` claim in the JWT header to select which public key in this document has been used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
+* Includes a `jwks_uri`, which gives the location of the set of public keys that correspond to the private keys used to sign tokens. The JSON Web Key (JWK) located at the `jwks_uri` contains all of the public key information in use at that particular moment in time. The JWK format is described in [RFC 7517](https://tools.ietf.org/html/rfc7517). Your app can use the `kid` claim in the JWT header to select the public key, from this document, which corresponds to the private key that has been used to sign a particular token. It can then do signature validation using the correct public key and the indicated algorithm.
> [!NOTE] > We recommend using the `kid` claim to validate your token. Though v1.0 tokens contain both the `x5t` and `kid` claims, v2.0 tokens contain only the `kid` claim.
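The key-selection step described above can be sketched as follows (Python is used only for illustration; the JWKS content below is fabricated, and a real application would fetch the document from the `jwks_uri` and hand the selected key to a JWT library for the actual signature check):

```python
# Hypothetical JWKS content for illustration only -- real keys come from
# the document served at the `jwks_uri` in the metadata document.
jwks = {
    "keys": [
        {"kty": "RSA", "kid": "key-1", "n": "<modulus-1>", "e": "AQAB"},
        {"kty": "RSA", "kid": "key-2", "n": "<modulus-2>", "e": "AQAB"},
    ]
}

def find_signing_key(jwks, kid):
    """Return the JWK whose `kid` matches the token header, or None."""
    return next((key for key in jwks["keys"] if key["kid"] == kid), None)

# The `kid` value would be read from the JWT header of the token being validated.
selected = find_signing_key(jwks, "key-2")
print(selected["kid"])  # key-2
```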
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/id-tokens.md
The following article will be beneficial before going through this article:
## Claims in an ID token
-ID tokens are [JSON web tokens (JWT)](https://jwt.io/introduction/). These ID tokens consist of a header, payload, and signature. The header and signature are used to verify the authenticity of the token, while the payload contains the information about the user requested by your client. The v1.0 and v2.0 ID tokens have differences in the information they carry. The version is based on the endpoint from where it was requested. While existing applications likely use the Azure AD endpoint (v1.0), new applications should use the "Microsoft identity platform" endpoint(v2.0).
+ID tokens are [JSON web tokens (JWT)](https://wikipedia.org/wiki/JSON_Web_Token). These ID tokens consist of a header, payload, and signature. The header and signature are used to verify the authenticity of the token, while the payload contains the information about the user requested by your client. The v1.0 and v2.0 ID tokens have differences in the information they carry. The version is based on the endpoint from where it was requested. While existing applications likely use the Azure AD endpoint (v1.0), new applications should use the "Microsoft identity platform" endpoint(v2.0).
* v1.0: Azure AD endpoint: `https://login.microsoftonline.com/common/oauth2/authorize` * v2.0: Microsoft identity Platform endpoint: `https://login.microsoftonline.com/common/oauth2/v2.0/authorize`
The table below shows header claims present in ID tokens.
|--|--|-| |`typ` | String - always "JWT" | Indicates that the token is a JWT token.| |`alg` | String | Indicates the algorithm that was used to sign the token. Example: "RS256" |
-|`kid` | String | Thumbprint for the public key used to verify this token. Emitted in both v1.0 and v2.0 `id_tokens`. |
-|`x5t` | String | The same (in use and value) as `kid`. However, this is a legacy claim emitted only in v1.0 `id_tokens` for compatibility purposes. |
+| `kid` | String | Specifies the thumbprint for the public key that can be used to validate this token's signature. Emitted in both v1.0 and v2.0 ID tokens. |
+| `x5t` | String | Functions the same (in use and value) as `kid`. `x5t` is a legacy claim emitted only in v1.0 ID tokens for compatibility purposes. |
### Payload claims
The table below shows the claims that are in most ID tokens by default (except w
When identifying a user (say, looking them up in a database, or deciding what permissions they have), it's critical to use information that will remain constant and unique across time. Legacy applications sometimes use fields like the email address, a phone number, or the UPN. All of these can change over time, and can also be reused over time. For example, when an employee changes their name, or an employee is given an email address that matches that of a previous, no longer present employee. Therefore, it is **critical** that your application not use human-readable data to identify a user - human-readable generally means someone will read it, and want to change it. Instead, use the claims provided by the OIDC standard, or the extension claims provided by Microsoft - the `sub` and `oid` claims.
-To correctly store information per-user, use `sub` or `oid` alone (which as GUIDs are unique), with `tid` used for routing or sharding if needed. If you need to share data across services, `oid`+`tid` is best as all apps get the same `oid` and `tid` claims for a given user. The `sub` claim in the Microsoft identity platform is "pair-wise" - it is unique based on a combination of the token recipient, tenant, and user. Therefore, two apps that request ID tokens for a given user will receive different `sub` claims, but the same `oid` claims for that user.
+To correctly store information per-user, use `sub` or `oid` alone (which as GUIDs are unique), with `tid` used for routing or sharding if needed. If you need to share data across services, `oid`+`tid` is best as all apps get the same `oid` and `tid` claims for a given user acting in a given tenant. The `sub` claim in the Microsoft identity platform is "pair-wise" - it is unique based on a combination of the token recipient, tenant, and user. Therefore, two apps that request ID tokens for a given user will receive different `sub` claims, but the same `oid` claims for that user.
>[!NOTE] > Do not use the `idp` claim to store information about a user in an attempt to correlate users across tenants. It will not function, as the `oid` and `sub` claims for a user change across tenants, by design, to ensure that applications cannot track users across tenants. >
-> Guest scenarios, where a user is homed in one tenant, and authenticates in another, should treat the user as if they are a brand new user to the service. Your documents and privileges in the Contoso tenant should not apply in the Fabrikam tenant. This is important to prevent accidental data leakage across tenants.
+> Guest scenarios, where a user is homed in one tenant, and authenticates in another, should treat the user as if they are a brand new user to the service. Your documents and privileges in the Contoso tenant should not apply in the Fabrikam tenant. This is important to prevent accidental data leakage across tenants, and enforcement of data lifecycles. Evicting a guest from a tenant should also remove their access to the data they created in that tenant.
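A minimal sketch of the storage guidance above (all claim values below are fabricated): key per-user application data on the immutable `oid` scoped by `tid`, never on mutable, human-readable claims like UPN or email:

```python
# Per-user storage keyed on (tid, oid): stable across renames, and a guest
# acting in another tenant gets a different key, keeping tenant data isolated.
user_store = {}

def user_key(claims):
    """Stable identity: object ID scoped to the tenant it was issued in."""
    return (claims["tid"], claims["oid"])

claims = {
    "oid": "00000000-0000-0000-66f3-3332eca7ea81",  # fabricated GUID
    "tid": "9188040d-6c67-4c5b-b112-36a304b66dad",  # fabricated GUID
    "upn": "alice@contoso.com",  # mutable -- do not use as the key
}
user_store[user_key(claims)] = {"display_name": "Alice"}

# Even if the UPN later changes, the record is still found:
claims["upn"] = "alice.smith@contoso.com"
print(user_store[user_key(claims)]["display_name"])  # Alice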
### Groups overage claim To ensure that the token size doesn't exceed HTTP header size limits, Azure AD limits the number of object IDs that it includes in the `groups` claim. If a user is member of more groups than the overage limit (150 for SAML tokens, 200 for JWT tokens), then Azure AD does not emit the groups claim in the token. Instead, it includes an overage claim in the token that indicates to the application to query the Microsoft Graph API to retrieve the user's group membership.
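When an overage occurs, Azure AD emits `_claim_names`/`_claim_sources` entries instead of the `groups` claim. A rough sketch of how an app might detect this (Python for illustration; the token payloads below are fabricated, and the actual Graph call is out of scope here):

```python
# Decide where group membership comes from: the token itself, or a follow-up
# Microsoft Graph query when the overage claim is present.
def groups_from_token(payload):
    """Return groups from the token, or None when an overage occurred."""
    if "groups" in payload:
        return payload["groups"]
    if "groups" in payload.get("_claim_names", {}):
        return None  # overage: caller must query Microsoft Graph instead
    return []

overage_token = {
    "_claim_names": {"groups": "src1"},
    "_claim_sources": {"src1": {"endpoint": "<graph-membership-endpoint>"}},
}
print(groups_from_token(overage_token))           # None -> query Graph
print(groups_from_token({"groups": ["g1", "g2"]}))  # groups read from the token
```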
To manually validate the token, see the steps details in [validating an access t
## Next steps
+* Review the [OpenID Connect](v2-protocols-oidc.md) flow, which defines the protocols that emit an ID token.
* Learn about [access tokens](access-tokens.md) * Customize the JWT claims in your ID token using [optional claims](active-directory-optional-claims.md).
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
The following code shows how to add an in-memory well partitioned token cache to
.Build(); // Add an in-memory token cache. Other options available: see below
- app.UseInMemoryTokenCaches();
+ app.AddInMemoryTokenCaches();
} return clientapp; }
The following code shows how to add an in-memory well partitioned token cache to
```CSharp // Add an in-memory token cache
- app.UseInMemoryTokenCaches();
+ app.AddInMemoryTokenCaches();
``` #### Distributed in memory token cache ```CSharp // In memory distributed token cache
- app.UseDistributedTokenCaches(services =>
+ app.AddDistributedTokenCaches(services =>
{ // In net462/net472, requires to reference Microsoft.Extensions.Caching.Memory services.AddDistributedMemoryCache();
The following code shows how to add an in-memory well partitioned token cache to
```CSharp // SQL Server token cache
- app.UseDistributedTokenCaches(services =>
+ app.AddDistributedTokenCaches(services =>
{ services.AddDistributedSqlServerCache(options => {
The following code shows how to add an in-memory well partitioned token cache to
```CSharp // Redis token cache
- app.UseDistributedTokenCaches(services =>
+ app.AddDistributedTokenCaches(services =>
{ // Requires to reference Microsoft.Extensions.Caching.StackExchangeRedis services.AddStackExchangeRedisCache(options =>
The following code shows how to add an in-memory well partitioned token cache to
```CSharp // Cosmos DB token cache
- app.UseDistributedTokenCaches(services =>
+ app.AddDistributedTokenCaches(services =>
{ // Requires to reference Microsoft.Extensions.Caching.Cosmos (preview) services.AddCosmosCache((CosmosCacheOptions cacheOptions) =>
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/security-best-practices-for-app-registration.md
+
+ Title: Best practices for Azure AD application registration configuration - Microsoft identity platform
+description: Learn about a set of best practices and general guidance on Azure AD application registration configuration.
++++++++ Last updated : 07/8/2021+++++
+# Azure AD application registration security best practices
+
+An Azure Active Directory (Azure AD) application registration is a critical part of your business application. Any misconfiguration or lapse in hygiene of your application can result in downtime or compromise.
+
+It's important to understand that your application registration has a wider impact than the business application because of its surface area. Depending on the permissions added to your application, a compromised app can have an organization-wide effect.
+Since an application registration is essential to getting your users logged in, any downtime to it can affect your business or some critical service that your business depends upon. So, it's important to allocate time and resources to ensure your application registration always stays in a healthy state. We recommend that you conduct a periodic security and health assessment of your applications, much like a Security Threat Model assessment for your code. For a broader perspective on security for organizations, check the [security development lifecycle](https://www.microsoft.com/securityengineering/sdl) (SDL).
+
+This article describes security best practices for the following application registration properties.
+
+- Redirect URI
+- Implicit grant flow for access token
+- Credentials
+- AppId URI
+- Application ownership
+- Checklist
+
+## Redirect URI configuration
+
+It's important to keep Redirect URIs of your application up to date. A lapse in the ownership of one of the redirect URIs can lead to an application compromise. Ensure that all DNS records are updated and monitored periodically for changes. Along with maintaining ownership of all URIs, don't use wildcard reply URLs or insecure URI schemes such as http or URN.
+
+![redirect Uri](media/active-directory-application-registration-best-practices/redirect-uri.png)
+
+### Redirect URI summary
+
+| Do | Don't |
+| - | -- |
+| Maintain ownership of all URIs | Use wildcards |
+| Keep DNS up to date | Use URN scheme |
+| Keep the list small | -- |
+| Trim any unnecessary URIs | -- |
+| Update URLs from Http to Https scheme | -- |
+
+## Implicit flow token configuration
+
+Scenarios that require **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit grant flow misuse. If you configured your application registration to get Access tokens using implicit flow, but don't actively use it, we recommend you turn off the setting to protect from misuse.
+
+![access tokens used for implicit flows](media/active-directory-application-registration-best-practices/implict-grant-flow.png)
+
+### Implicit grant flow summary
+
+| Do | Don't |
+| | - |
+| Understand if [implicit flow is required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant) | Use implicit flow unless [explicitly required](./v2-oauth2-implicit-grant-flow.md#suitable-scenarios-for-the-oauth2-implicit-grant) |
+| Separate app registration for (valid) implicit flow scenarios | -- |
+| Turn off unused implicit flow | -- |
+
+## Credential configuration
+
+Credentials are a vital part of an application registration when your application is used as a confidential client. If your app registration is used only as a Public Client App (allows users to sign in using a public endpoint), ensure that you don't have any credentials on your application object. Review the credentials used in your applications for freshness of use and their expiration. An unused credential on an application can result in a security breach.
+While it's convenient to use password secrets as a credential, we strongly recommend that you use x509 certificates as the only credential type for getting tokens for your application. Monitor your production pipelines to ensure credentials of any kind are never committed into code repositories. If using Azure, we strongly recommend using Managed Identity so application credentials are automatically managed. Refer to the [managed identities documentation](../managed-identities-azure-resources/overview.md) for more details. [Credential Scanner](/azure/security/develop/security-code-analysis-overview#credential-scanner) is a static analysis tool that you can use to detect credentials (and other sensitive content) in your source code and build output.
+
+![certificates and secrets on Azure portal](media/active-directory-application-registration-best-practices/credentials.png)
+
+| Do | Don't |
+| - | |
+| Use [certificate credentials](./active-directory-certificate-credentials.md) | Use Password credentials |
+| Use Key Vault with [Managed identities](../managed-identities-azure-resources/overview.md) | Share credentials across apps |
+| Rollover frequently | Have many credentials on one app |
+| -- | Let stale credentials hang around |
+| -- | Commit credentials in code |
+
+## AppId URI configuration
+
+Certain applications can expose resources (via WebAPI) and as such need to define an AppId URI that uniquely identifies the resource in a tenant. We recommend using either of the following URI schemes, api or https, and setting the AppId URI in the following formats to avoid URI collisions in your organization.
+
+**Valid AppId URI formats:**
+
+- api://_{appId}_
+- api://_{tenantId}/{appId}_
+- api://_{tenantId}/{string}_
+- https://_{verifiedCustomerDomain}/{string}_
+- https://_{string}.{verifiedCustomerDomain}_
+- https://_{string}.{verifiedCustomerDomain}/{string}_
+
+![application id uri](media/active-directory-application-registration-best-practices/app-id-uri.png)
+
+### AppId URI summary
+
+| Do | Don't |
+| -- | - |
+| Avoid collisions by using valid URI formats. | Use wildcard AppId URI |
+| Use verified domain in Line of Business (LoB) apps | Malformed URI |
+| Inventory your AppId URIs | -- |
+
+## App ownership configuration
+
+Ensure app ownership is kept to a minimal set of people within the organization. It's recommended to run through the owners list once every few months to ensure owners are still part of the organization and their charter accounts for ownership of the application registration. Check out [Azure AD access reviews](../governance/access-reviews-overview.md) for more details.
+
+![users provisioning service - owners](media/active-directory-application-registration-best-practices/app-ownership.png)
+
+### App ownership summary
+
+| Do | Don't |
+| - | -- |
+| Keep it small | -- |
+| Monitor owners list | -- |
+
+## Checklist
+
+App developers can use the _Checklist_ available in the Azure portal to ensure their app registration meets a high quality bar and provides guidance to integrate securely. The integration assistant highlights best practices and recommendations that help avoid common oversights when integrating with the Microsoft identity platform.
+
+![Integration assistant checklist on Azure portal](media/active-directory-application-registration-best-practices/checklist.png)
+
+### Checklist summary
+
+| Do | Don't |
+| -- | -- |
+| Use checklist to get scenario-based recommendation | -- |
+| Deep link into app registration blades | -- |
++
+## Next steps
+For more information on Auth code flow, see the [OAuth 2.0 authorization code flow](./v2-oauth2-auth-code-flow.md).
active-directory V2 Howto Get Appsource Certified https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-howto-get-appsource-certified.md
For more information about multi-tenancy, see [How to sign in any Azure Active D
A *single-tenant application* is an application that only accepts sign-ins from users of a defined Azure AD instance. External users (including work or school accounts from other organizations, or personal accounts) can sign in to a single-tenant application after adding each user as a guest account to the Azure AD instance that the application is registered.
-You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](../../active-directory-b2c/code-samples.md). When using B2B, users can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
+You can add users as guest accounts to Azure AD through the [Azure AD B2B collaboration](../external-identities/what-is-b2b.md) and you can do this [programmatically](../../active-directory-b2c/integrate-with-app-code-samples.md). When using B2B, users can create a self-service portal that does not require an invitation to sign in. For more info, see [Self-service portal for Azure AD B2B collaboration sign-up](../external-identities/self-service-portal.md).
Single-tenant applications can enable the *Contact Me* experience, but if you want to enable the single-click/free trial experience that AppSource recommends, enable multi-tenancy on your application instead.
active-directory V2 Oauth Ropc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth-ropc.md
Previously updated : 06/25/2021 Last updated : 07/16/2021
The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password
> [!WARNING] > Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks which are not present in other flows. You should only use this flow when other more secure flows can't be used. + > [!IMPORTANT] > > * The Microsoft identity platform only supports ROPC for Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint.
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `username` | Required | The user's email address. | | `password` | Required | The user's password. | | `scope` | Recommended | A space-separated list of [scopes](v2-permissions-and-consent.md), or permissions, that the app requires. In an interactive flow, the admin or the user must consent to these scopes ahead of time. |
-| `client_secret`| Sometimes required | If your app is a public client, then the `client_secret` or `client_assertion` cannot be included. If the app is a confidential client, then it must be included. |
+| `client_secret`| Sometimes required | If your app is a public client, then the `client_secret` or `client_assertion` cannot be included. If the app is a confidential client, then it must be included.|
| `client_assertion` | Sometimes required | A different form of `client_secret`, generated using a certificate. See [certificate credentials](active-directory-certificate-credentials.md) for more details. |
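As an illustration of the parameter table above, the ROPC request body can be assembled as a URL-encoded form. The sketch below uses hypothetical tenant, client, and credential values; a real application would POST this body to the token endpoint with a `Content-Type` of `application/x-www-form-urlencoded`:

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only; never hard-code real credentials.
tenant = "contoso.onmicrosoft.com"
token_url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

body = urlencode({
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "grant_type": "password",  # the ROPC grant type
    "username": "user@contoso.com",
    "password": "example-password",
    "scope": "user.read openid offline_access",
})
```

Note that a public client omits `client_secret` entirely, per the parameter table above.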
+> [!WARNING]
+> Because this flow is not recommended for use, the official SDKs do not support it for confidential clients (those that use a secret or assertion). You may find that the SDK you wish to use does not allow you to add a secret while using ROPC.
+ ### Successful authentication response The following example shows a successful token response:
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
If you attempt to use the authorization code flow and see this error:
Then, visit your app registration and update the redirect URI for your app to type `spa`.
-Applications cannot use a `spa` redirect URI with non-SPA flows, for example native applications or client credential flows. To ensure security, Azure AD will return an error if you attempt to use use a `spa` redirect URI in these scenarios, e.g. from a native app that doesn't send an`Origin` header.
+Applications cannot use a `spa` redirect URI with non-SPA flows, for example native applications or client credential flows. To ensure security and best practices, the Microsoft identity platform will return an error if you attempt to use a `spa` redirect URI without an `Origin` header. Similarly, the Microsoft identity platform also prevents the use of client credentials (in the OBO flow, client credentials flow, and auth code flow) in the presence of an `Origin` header, to ensure that secrets are not used from within the browser.
## Request an authorization code
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> | | `login_hint` | optional | Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps will use this parameter during re-authentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. |
-| `domain_hint` | optional | If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience - for example, sending them to their federated identity provider. Often apps will use this parameter during re-authentication, by extracting the `tid` from a previous sign-in. If the `tid` claim value is `9188040d-6c67-4c5b-b112-36a304b66dad`, you should use `domain_hint=consumers`. Otherwise, use `domain_hint=organizations`. |
+| `domain_hint` | optional | If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience - for example, sending them to their federated identity provider. Often apps will use this parameter during re-authentication, by extracting the `tid` from a previous sign-in. |
| `code_challenge` | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - both public and confidential clients - and required by the Microsoft identity platform for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md). | | `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. The Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md).|
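The `code_challenge` derivation described above is straightforward to implement. This sketch follows the PKCE RFC: the verifier is a high-entropy URL-safe string, and the `S256` challenge is the base64url-encoded SHA-256 digest of the verifier, with padding stripped:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # Code verifier: 43-128 characters of unreserved URL-safe characters
    # (RFC 7636, section 4.1).
    verifier = secrets.token_urlsafe(64)[:128]
    # S256 challenge: base64url(SHA-256(verifier)) with '=' padding removed.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
```

The app sends `challenge` (with `code_challenge_method=S256`) in the authorization request, then presents `verifier` when redeeming the authorization code.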
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
Previously updated : 06/25/2021 Last updated : 07/16/2021
The OAuth 2.0 On-Behalf-Of flow (OBO) serves the use case where an application invokes a service/web API, which in turn needs to call another service/web API. The idea is to propagate the delegated user identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform, on behalf of the user.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+The OBO flow only works for user principals at this time. A service principal cannot request an app-only token, send it to an API, and have that API exchange it for another token that represents the original service principal. Additionally, the OBO flow is focused on acting on another party's behalf, known as a delegated scenario - this means that it uses only delegated *scopes*, not application *roles*, to reason about permissions. *Roles* remain attached to the principal (the user) in the flow, never to the application operating on the user's behalf.
+
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead. For more info about which clients can perform OBO calls, see [limitations](#client-limitations).
active-directory Azureadjoin Plan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/azureadjoin-plan.md
If you use AD FS, see [Verify and manage single sign-on with AD FS](/previous-ve
Users get SSO from Azure AD joined devices if the device has access to a domain controller.
+> [!NOTE]
+> Azure AD joined devices can seamlessly provide access to both on-premises and cloud applications. For more information, see [How SSO to on-premises resources works on Azure AD joined devices](azuread-join-sso.md).
+ **Recommendation:** Deploy [Azure AD App proxy](../app-proxy/application-proxy.md) to enable secure access for these applications. ### On-premises network shares
You can use this implementation to [require managed devices for cloud app access
> [Join your work device to your organization's network](../user-help/user-help-join-device-on-network.md) <!--Image references-->
-[1]: ./media/azureadjoin-plan/12.png
+[1]: ./media/azureadjoin-plan/12.png
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
Previously updated : 12/02/2020 Last updated : 07/19/2021
Here are the settings defined in the Group.Unified SettingsTemplate. Unless othe
| <ul><li>EnableGroupCreation<li>Type: Boolean<li>Default: True |The flag indicating whether Microsoft 365 group creation is allowed in the directory by non-admin users. This setting does not require an Azure Active Directory Premium P1 license.| | <ul><li>GroupCreationAllowedGroupId<li>Type: String<li>Default: "" |GUID of the security group for which the members are allowed to create Microsoft 365 groups even when EnableGroupCreation == false. | | <ul><li>UsageGuidelinesUrl<li>Type: String<li>Default: "" |A link to the Group Usage Guidelines. |
-| <ul><li>ClassificationDescriptions<li>Type: String<li>Default: "" | A comma-delimited list of classification descriptions. The value of ClassificationDescriptions is only valid in this format:<br>$setting["ClassificationDescriptions"] ="Classification:Description,Classification:Description"<br>where Classification matches an entry in the ClassificationList.<br>This setting does not apply when EnableMIPLabels == True.|
+| <ul><li>ClassificationDescriptions<li>Type: String<li>Default: "" | A comma-delimited list of classification descriptions. The value of ClassificationDescriptions is only valid in this format:<br>$setting["ClassificationDescriptions"] ="Classification:Description,Classification:Description"<br>where Classification matches an entry in the ClassificationList.<br>This setting does not apply when EnableMIPLabels == True.<br>The character limit for the ClassificationDescriptions property is 300, and commas can't be escaped.|
| <ul><li>DefaultClassification<li>Type: String<li>Default: "" | The classification that is to be used as the default classification for a group if none was specified.<br>This setting does not apply when EnableMIPLabels == True.| | <ul><li>PrefixSuffixNamingRequirement<li>Type: String<li>Default: "" | String of a maximum length of 64 characters that defines the naming convention configured for Microsoft 365 groups. For more information, see [Enforce a naming policy for Microsoft 365 groups](groups-naming-policy.md). | | <ul><li>CustomBlockedWordsList<li>Type: String<li>Default: "" | Comma-separated string of phrases that users will not be permitted to use in group names or aliases. For more information, see [Enforce a naming policy for Microsoft 365 groups](groups-naming-policy.md). |
active-directory Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/api-connectors-overview.md
An API connector provides Azure Active Directory with the information needed to
There are two places in a user flow where you can enable an API connector: -- After signing in with an identity provider
+- After federating with an identity provider during sign-up
- Before creating the user > [!IMPORTANT] > In both of these cases, the API connectors are invoked during user **sign-up**, not sign-in.
-### After signing in with an identity provider
+### After federating with an identity provider during sign-up
An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account. The following are examples of API connector scenarios you might enable at this step: - Use the email or federated identity that the user provided to look up claims in an existing system. Return these claims from the existing system, pre-fill the attribute collection page, and make them available to return in the token.-- Implement an allow or block list based on social identity.
+- Implement an allowlist or blocklist based on social identity.
### Before creating the user
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
To use an [API connector](api-connectors-overview.md), you first create the API
3. In the left menu, select **External Identities**. 4. Select **All API connectors**, and then select **New API connector**.
- ![Add a new API connector](./media/self-service-sign-up-add-api-connector/api-connector-new.png)
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-new.png" alt-text="Providing the basic configuration like target URL and display name for an API connector during the creation experience.":::
5. Provide a display name for the call. For example, **Check approval status**. 6. Provide the **Endpoint URL** for the API call.
-7. Choose the **Authentication type** and configure the authentication information for calling your API. See the section below for options on securing your API.
+7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](self-service-sign-up-secure-api-connector.md).
- ![Configure an API connector](./media/self-service-sign-up-add-api-connector/api-connector-config.png)
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-config.png" alt-text="Providing authentication configuration for an API connector during the creation experience.":::
8. Select **Save**.
-## Securing the API endpoint
-You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication (preview). In either case, you provide the credentials that Azure Active Directory will use when calling your API endpoint. Your API endpoint then checks the credentials and performs authorization decisions.
-
-### HTTP basic authentication
-HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Azure Active Directory sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API then checks these values to determine whether to reject an API call or not.
-
-### HTTPS client certificate authentication (preview)
-
-> [!IMPORTANT]
-> This functionality is in preview and is provided without a service-level agreement. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-Client certificate authentication is a mutual certificate-based authentication method where the client provides a client certificate to the server to prove its identity. In this case, Azure Active Directory will use the certificate that you upload as part of the API connector configuration. This happens as a part of the SSL handshake. Your API service can then limit access to only services that have proper certificates. The client certificate is an PKCS12 (PFX) X.509 digital certificate. In production environments, it should be signed by a certificate authority.
-
-To create a certificate, you can use [Azure Key Vault](../../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
-- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`-- **Content Type**: `PKCS #12`-- **Lifetime Acton Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`-- **Key Type**: `RSA`-- **Key Size**: `2048`-- **Exportable Private Key**: `Yes` (in order to be able to export pfx file)-
-You can then [export the certificate](../../key-vault/certificates/how-to-export-certificate.md). You can alternatively use PowerShell's [New-SelfSignedCertificate cmdlet](../../active-directory-b2c/secure-rest-api.md#prepare-a-self-signed-certificate-optional) to generate a self-signed certificate.
-
-After you have a certificate, you can then upload it as part of the API connector configuration. Note that password is only required for certificate files protected by a password.
-
-Your API must implement the authorization based on sent client certificates in order to protect the API endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can also use Azure API Management to protect your API and [check client certificate properties](
-../../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values using policy expressions.
-
-It's recommended you set reminder alerts for when your certificate will expire. You will need to generate a new certificate and repeat the steps above. Your API service can temporarily continue to accept old and new certificates while the new certificate is deployed. To upload a new certificate to an existing API connector, select the API connector under **All API connectors** and click on **Upload new certificate**. The most recently uploaded certificate which is not expired and is past the start date will automatically be used by Azure Active Directory.
-
-### API Key
-Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` as a query parameter in the **Endpoint URL**. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>).
-
-This is not a mechanism that should be used alone in production. Therefore, configuration for basic or certificate authentication is always required. If you do not wish to implement any authentication method (not recommended) for development purposes, you can choose basic authentication and use temporary values for `username` and `password` that your API can disregard while you implement the authorization in your API.
- ## The request sent to your API An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.
Only user properties and custom attributes listed in the **Azure Active Director
Custom attributes exist in the **extension_\<extensions-app-id>_AttributeName** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [define custom attributes for self-service sign-up flows](user-flow-add-custom-attributes.md).
-Additionally, the **UI Locales ('ui_locales')** claim is sent by default in all requests. It provides a user's locale(s) as configured on their device that can be used by the API to return internationalized responses.
+Additionally, the following claims are typically sent in all requests:
+- **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses.
+<!--
+ - `postFederationSignup` - corresponds to "After federating with an identity provider during sign-up"
+ - `postAttributeCollection` - corresponds to "Before creating the user"
+- **Client ID ('client_id')** - The `appId` value of the application that an end-user is authenticating to in a user flow. This is *not* the resource application's `appId` in access tokens. -->
+- **Email Address ('email')** or [**identities ('identities')**](/graph/api/resources/objectidentity) - these claims can be used by your API to identify the end-user that is authenticating to the application.
> [!IMPORTANT] > If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.
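Because absent claims are omitted from the request body entirely, defensive lookups matter. The following sketch (the function name and fallback behavior are illustrative, not part of the contract) shows one way an API might read these claims:

```python
def identify_user(claims: dict):
    """Defensively read claims from an API connector request body.

    Claims without values are omitted from the request entirely, so every
    lookup must tolerate absence. Claim names follow the contract above.
    """
    # 'email' or 'identities' can identify the end-user who is authenticating.
    email = claims.get("email") or ""
    identities = claims.get("identities", [])
    # 'ui_locales' may be used to localize any userMessage you return.
    locales = claims.get("ui_locales", "en")
    if not email and not identities:
        return None, locales  # caller decides how to respond
    return email or identities[0], locales
```

A caller would pass the parsed JSON body, for example `identify_user({"email": "user@contoso.com"})`, and branch on a `None` result rather than assuming the claim is present.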
-> [!TIP]
-> [**identities ('identities')**](/graph/api/resources/objectidentity) and the **Email Address ('email')** claims can be used by your API to identify a user before they have an account in your tenant. The 'identities' claim is sent when a user authenticates with an identity provider such as Google or Facebook. 'email' is always sent.
- ## Enable the API connector in a user flow Follow these steps to add an API connector to a self-service sign-up user flow.
Follow these steps to add an API connector to a self-service sign-up user flow.
4. Select **User flows**, and then select the user flow you want to add the API connector to. 5. Select **API connectors**, and then select the API endpoints you want to invoke at the following steps in the user flow:
- - **After signing in with an identity provider**
+ - **After federating with an identity provider during sign-up**
- **Before creating the user**
- ![Add APIs to the user flow](./media/self-service-sign-up-add-api-connector/api-connectors-user-flow-select.png)
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connectors-user-flow-select.png" alt-text="Selecting which API connector to use for a step in the user flow like 'Before creating the user'.":::
6. Select **Save**.
-## After signing in with an identity provider
+## After federating with an identity provider during sign-up
-An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes. This step is not invoked if a user is registering with a local account.
+An API connector at this step in the sign-up process is invoked immediately after the user authenticates with an identity provider (like Google, Facebook, & Azure AD). This step precedes the ***attribute collection page***, which is the form presented to the user to collect user attributes.
### Example request sent to the API at this step ```http
Content-type: application/json
| Parameter | Type | Required | Description | | -- | -- | -- | -- |
-| version | String | Yes | The version of the API. |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. | | \<builtInUserAttribute> | \<attribute-type> | No | Values can be stored in the directory if they selected as a **Claim to receive** in the API connector configuration and **User attributes** for a user flow. Values can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The returned claim does not need to contain `_<extensions-app-id>_`. Returned values can overwrite values collected from a user. They can also be returned in the token if configured as part of the application. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim doesn't need to contain `_<extensions-app-id>_`; it's *optional*. Returned values can overwrite values collected from a user. |
### Example of a blocking response
Content-type: application/json
| Parameter | Type | Required | Description | | -- | | -- | -- |
-| version | String | Yes | The version of the API. |
+| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `ShowBlockPage` | | userMessage | String | Yes | Message to display to the user. | **End-user experience with a blocking response**
-![Example block page](./media/api-connectors-overview/blocking-page-response.png)
### Example of a validation-error response
Content-type: application/json
| -- | - | -- | -- | | version | String | Yes | The version of your API. | | action | String | Yes | Value must be `ValidationError`. |
-| status | Integer | Yes | Must be value `400` for a ValidationError response. |
+| status | Integer / String | Yes | Must be value `400`, or `"400"` for a ValidationError response. |
| userMessage | String | Yes | Message to display to the user. | > [!NOTE]
Content-type: application/json
**End-user experience with a validation-error response**
-![Example validation page](./media/api-connectors-overview/validation-error-postal-code.png)
- ## Best practices and how to troubleshoot ### Using serverless cloud functions
-Serverless functions, like HTTP triggers in Azure Functions, provide a simple way create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts), perform validation logic and limit sign-ups to specific email domains. The serverless cloud function can also call and invoke other web APIs, user stores, and other cloud services for more complex scenarios.
+
+Serverless functions, like [HTTP triggers in Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md), provide a way to create API endpoints to use with the API connector. You can use the serverless cloud function to, [for example](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts), perform validation logic and limit sign-ups to specific email domains. The serverless cloud function can also call other web APIs, data stores, and other cloud services for complex scenarios.
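As a sketch of the email-domain validation scenario mentioned above (the allowlist and function name are hypothetical), a handler for the "Before creating the user" step might build responses matching the contracts described earlier:

```python
ALLOWED_DOMAINS = {"contoso.com", "fabrikam.com"}  # hypothetical allowlist

def handle_signup(claims: dict) -> dict:
    """Return an API connector response body for a sign-up request."""
    email = claims.get("email") or ""
    domain = email.rsplit("@", 1)[-1].lower() if "@" in email else ""
    if domain not in ALLOWED_DOMAINS:
        # Validation-error response; the HTTP status code must also be 400.
        return {
            "version": "1.0.0",
            "action": "ValidationError",
            "status": 400,
            "userMessage": "Please sign up with a company email address.",
        }
    # Continuation response: let the sign-up proceed.
    return {"version": "1.0.0", "action": "Continue"}
```

The serverless host would serialize this dictionary as the JSON response body; remember to return HTTP 400 alongside the `ValidationError` action.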
### Best practices Ensure that: * Your API is following the API request and response contracts as outlined above. * The **Endpoint URL** of the API connector points to the correct API endpoint.
-* Your API explicitly checks for null values of received claims.
+* Your API explicitly checks for null values of received claims that it depends on.
+* Your API implements an authentication method outlined in [secure your API Connector](self-service-sign-up-secure-api-connector.md).
* Your API responds as quickly as possible to ensure a fluid user experience.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm." in production. For Azure Functions, its recommended to use the [Premium plan](../../azure-functions/functions-scale.md)
-
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md).
+* Ensure high availability of your API.
+* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
+* Your endpoints must comply with the Azure AD TLS and cipher security requirements. For more information, see [TLS and cipher suite requirements](../../active-directory-b2c/https-cipher-tls-requirements.md).
+
### Use logging
+
In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.
* Monitor for HTTP status codes that aren't HTTP 200 or 400.
* A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector.
-* Use more aggressive levels of logging (e.g. "trace" or "debug") in development if needed.
-* Monitor your API for long response times.
+* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed.
+* Monitor your API for long response times.
## Next steps - Learn how to [add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)
active-directory Self Service Sign Up Add Approvals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md
Now you'll add the API connectors to a self-service sign-up user flow with these
4. Select **User flows**, and then select the user flow you want to enable the API connector for. 5. Select **API connectors**, and then select the API endpoints you want to invoke at the following steps in the user flow:
- - **After signing in with an identity provider**: Select your approval status API connector, for example _Check approval status_.
+ - **After federating with an identity provider during sign-up**: Select your approval status API connector, for example _Check approval status_.
- **Before creating the user**: Select your approval request API connector, for example _Request approval_. ![Add APIs to the user flow](./media/self-service-sign-up-add-approvals/api-connectors-user-flow-api.png)
active-directory Self Service Sign Up Secure Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/self-service-sign-up-secure-api-connector.md
+
+ Title: Secure APIs used as API connectors in Azure AD self-service sign-up user flows
+description: Secure your custom RESTful APIs used as API connectors in self-service sign-up user flows.
+ Last updated : 07/16/2021
+# Secure your API used as an API connector in Azure AD External Identities self-service sign-up user flows
+
+When integrating a REST API within an Azure AD External Identities self-service sign-up user flow, you must protect your REST API endpoint with authentication. The REST API authentication ensures that only services with proper credentials, such as Azure AD, can make calls to your endpoint. This article describes how to secure your REST API.
+
+## Prerequisites
+Complete the steps in the [Walkthrough: Add an API connector to a sign-up user flow](self-service-sign-up-add-api-connector.md) guide.
+
+You can protect your API endpoint by using either HTTP basic authentication or HTTPS client certificate authentication. In either case, you provide the credentials that Azure AD will use when calling your API endpoint. Your API endpoint then checks the credentials and makes authorization decisions.
++
+## HTTP basic authentication
+
+HTTP basic authentication is defined in [RFC 2617](https://tools.ietf.org/html/rfc2617). Basic authentication works as follows: Azure AD sends an HTTP request with the client credentials (`username` and `password`) in the `Authorization` header. The credentials are formatted as the base64-encoded string `username:password`. Your API is then responsible for checking these values and making any further authorization decisions.
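A minimal sketch of that server-side check in Python; the credentials (`apiuser`/`s3cret`) are illustrative placeholders, not defaults from any product:

```python
import base64

# Hypothetical credentials, matching what you configure on the API connector
EXPECTED_USER = "apiuser"
EXPECTED_PASSWORD = "s3cret"

def check_basic_auth(authorization_header: str) -> bool:
    """Validate an HTTP Basic Authorization header of the form 'Basic <base64(user:pass)>'."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme.lower() != "basic" or not encoded:
        return False
    try:
        decoded = base64.b64decode(encoded).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return False
    username, _, password = decoded.partition(":")
    return username == EXPECTED_USER and password == EXPECTED_PASSWORD

# Azure AD would send a header like: Authorization: Basic YXBpdXNlcjpzM2NyZXQ=
header = "Basic " + base64.b64encode(b"apiuser:s3cret").decode("ascii")
print(check_basic_auth(header))  # True
```

In practice the expected credentials would come from your API's configuration or secret store rather than being hard-coded.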
+
+To configure an API Connector with HTTP basic authentication, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Under **Azure services**, select **Azure AD**.
+3. Select **API connectors**, and then select the **API Connector** you want to configure.
+4. For the **Authentication type**, select **Basic**.
+5. Provide the **Username**, and **Password** of your REST API endpoint.
+ :::image type="content" source="media/secure-api-connector/api-connector-config.png" alt-text="Providing basic authentication configuration for an API connector.":::
+6. Select **Save**.
+
+## HTTPS client certificate authentication
+
+Client certificate authentication is a mutual certificate-based authentication, where the client, Azure AD, provides its client certificate to the server to prove its identity. This happens as part of the TLS handshake. Your API is responsible for validating that the certificate belongs to a valid client, such as Azure AD, and for making authorization decisions. The client certificate is an X.509 digital certificate.
+
+> [!IMPORTANT]
+> In production environments, the certificate must be signed by a certificate authority.
+
+### Create a certificate
+
+#### Option 1: Use Azure Key Vault (recommended)
+
+To create a certificate, you can use [Azure Key Vault](../../key-vault/certificates/create-certificate.md), which has options for self-signed certificates and integrations with certificate issuer providers for signed certificates. Recommended settings include:
+- **Subject**: `CN=<yourapiname>.<tenantname>.onmicrosoft.com`
+- **Content Type**: `PKCS #12`
+- **Lifetime Action Type**: `Email all contacts at a given percentage lifetime` or `Email all contacts a given number of days before expiry`
+- **Key Type**: `RSA`
+- **Key Size**: `2048`
+- **Exportable Private Key**: `Yes` (so that you can export the `.pfx` file)
+
+You can then [export the certificate](../../key-vault/certificates/how-to-export-certificate.md).
+
+#### Option 2: Prepare a self-signed certificate using a PowerShell module
++
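The detailed steps for this option aren't reproduced here. As a sketch, a self-signed development certificate can be prepared with the built-in PKI cmdlets; the subject, lifetime, file name, and password below are placeholder values:

```powershell
# Create a self-signed certificate in the current user's store (placeholder subject and lifetime)
$cert = New-SelfSignedCertificate `
    -Subject "CN=yourapiname.yourtenant.onmicrosoft.com" `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -KeyAlgorithm RSA -KeyLength 2048 `
    -KeyExportPolicy Exportable `
    -NotAfter (Get-Date).AddMonths(12)

# Export the certificate with its private key as a .pfx file for upload to the API connector
$pwd = ConvertTo-SecureString -String "<your-password>" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath .\apiconnector.pfx -Password $pwd
```

Remember that self-signed certificates are suitable for development only; in production the certificate must be signed by a certificate authority, as noted above.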
+### Configure your API Connector
+
+To configure an API Connector with client certificate authentication, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Under **Azure services**, select **Azure AD**.
+3. Select **API connectors**, and then select the **API Connector** you want to configure.
+4. For the **Authentication type**, select **Certificate**.
+5. In the **Upload certificate** box, select your certificate's .pfx file with a private key.
+6. In the **Enter Password** box, type the certificate's password.
+ :::image type="content" source="media/secure-api-connector/api-connector-upload-cert.png" alt-text="Providing certificate authentication configuration for an API connector.":::
+7. Select **Save**.
+
+### Perform authorization decisions
+Your API must implement authorization based on the client certificate it receives in order to protect its endpoints. For Azure App Service and Azure Functions, see [configure TLS mutual authentication](../../app-service/app-service-web-configure-tls-mutual-auth.md) to learn how to enable and *validate the certificate from your API code*. You can alternatively use Azure API Management as a layer in front of any API service to [check client certificate properties](../../api-management/api-management-howto-mutual-certificates-for-clients.md) against desired values.
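For illustration only: App Service forwards the client certificate to your code in the `X-ARR-ClientCert` header as a base64-encoded DER blob, so one simple check is to compare its SHA-1 thumbprint against the certificates you expect. The thumbprint in the allow list below is a placeholder:

```python
import base64
import hashlib

# Hypothetical allow list: thumbprints of the certificates you uploaded to the API connector.
# During rotation, both the old and the new thumbprint can be listed here.
ALLOWED_THUMBPRINTS = {"0123456789ABCDEF0123456789ABCDEF01234567"}

def validate_client_certificate(header_value: str) -> bool:
    """Validate the base64-encoded DER certificate forwarded in the X-ARR-ClientCert header."""
    try:
        der_bytes = base64.b64decode(header_value)
    except ValueError:
        return False
    # A certificate thumbprint is the SHA-1 hash of the DER-encoded certificate
    thumbprint = hashlib.sha1(der_bytes).hexdigest().upper()
    return thumbprint in ALLOWED_THUMBPRINTS
```

A thumbprint check pins specific certificates; a fuller implementation would also verify the certificate chain and validity period, for example with a library such as `cryptography`.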
+
+### Renewing certificates
+It's recommended that you set reminder alerts for when your certificate will expire. You'll need to generate a new certificate and repeat the steps above before the certificate in use expires. To "roll" over to a new certificate, your API service can continue to accept both the old and the new certificate for a transition period while the new certificate is deployed.
+
+To upload a new certificate to an existing API connector, select the API connector under **API connectors** and select **Upload new certificate**. The most recently uploaded certificate that isn't expired and whose start date has passed is automatically used by Azure AD.
+
+ :::image type="content" source="media/secure-api-connector/api-connector-renew-cert.png" alt-text="Providing a new certificate to an API connector when one already exists.":::
+
+## API key authentication
+
+Some services use an "API key" mechanism to obfuscate access to your HTTP endpoints during development by requiring the caller to include a unique key as an HTTP header or HTTP query parameter. For [Azure Functions](../../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys), you can accomplish this by including the `code` query parameter in the **Endpoint URL** of your API connector. For example, `https://contoso.azurewebsites.net/api/endpoint`<b>`?code=0123456789`</b>.
+
+This mechanism shouldn't be used alone in production, so configuration for basic or certificate authentication is always required. If you don't wish to implement any authentication method (not recommended) for development purposes, you can select 'basic' authentication in the API connector configuration and use temporary values for `username` and `password` that your API can disregard while you implement proper authorization.
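As a sketch of the API-side key check (the key value matches the example URL above and is not a real secret), a constant-time comparison avoids leaking information through timing differences:

```python
import hmac

# Hypothetical key matching the ?code= value in the example endpoint URL
API_KEY = "0123456789"

def check_api_key(query_params: dict) -> bool:
    """Compare the supplied 'code' query parameter against the expected key in constant time."""
    supplied = query_params.get("code", "")
    return hmac.compare_digest(supplied, API_KEY)

print(check_api_key({"code": "0123456789"}))  # True
print(check_api_key({"code": "wrong"}))       # False
```

Because an API key alone isn't sufficient for production, a check like this would sit alongside the basic or certificate authentication configured above.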
+
+## Next steps
+- Get started with our [quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-deployment-plans.md
From any of the plan pages, use your browser's Print to PDF capability to create
| [Devices](../devices/plan-device-deployment.md) | This article helps you evaluate the methods to integrate your device with Azure AD, choose the implementation plan, and provides key links to supported device management tools. |
-## Deploy hybrid scenarios
+## Deploy hybrid scenarios
| Capability | Description| | -| -|
-| [AD FS to cloud user authentication](/hybrid/migrate-from-federation-to-cloud-authentication.md)| Learn to migrate your user authentication from federation to cloud authentication with either pass through authentication or password hash sync.
+| [AD FS to cloud user authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)| Learn to migrate your user authentication from federation to cloud authentication with either pass through authentication or password hash sync.
| [Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) |Employees today want to be productive at any place, at any time, and from any device. They need to access SaaS apps in the cloud and corporate apps on-premises. Azure AD Application proxy enables this robust access without costly and complex virtual private networks (VPNs) or demilitarized zones (DMZs). | | [Seamless SSO](../hybrid/how-to-connect-sso-quick-start.md)| Azure Active Directory Seamless Single Sign-On (Azure AD Seamless SSO) automatically signs users in when they are on their corporate devices connected to your corporate network. With this feature, users won't need to type in their passwords to sign in to Azure AD and usually won't need to enter their usernames. This feature provides authorized users with easy access to your cloud-based applications without needing any extra on-premises components. |
active-directory Resilient External Processes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilient-external-processes.md
Identity experience framework (IEF) policies allow you to call an external syste
- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and cripple your application. For example, using CAPTCHA in your sign in, sign up flow can help. -- Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs either after signing in with an identity provider or before creating the user. Since the user flows are already extensively tested, itΓÇÖs likely that you donΓÇÖt have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
+- Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs either after federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale.
- Azure AD RESTFul API [technical profiles](../../active-directory-b2c/restful-technical-profile.md) don't provide any caching behavior. Instead, RESTFul API profile implements a retry logic and a timeout that is built into the policy.
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Previously updated : 7/13/2021 Last updated : 7/19/2021
Affected environments are:
- Azure Commercial Cloud - Office 365 GCC and WW
-For guidance to remove deprecating protocols dependencies, please refer to [EEnable support for TLS 1.2 in your environment, in preparation for upcoming Azure AD TLS 1.0/1.1 deprecation](https://docs.microsoft.com/troubleshoot/azure/active-directory/enable-support-tls-environment).
+For guidance on removing dependencies on deprecated protocols, please refer to [Enable support for TLS 1.2 in your environment, in preparation for upcoming Azure AD TLS 1.0/1.1 deprecation](/troubleshoot/azure/active-directory/enable-support-tls-environment).
Affected environments are:
- Azure Commercial Cloud - Office 365 GCC and WW
-Users, services, and applications that interact with Azure Active Directory and Microsoft Graph, should use TLS 1.2 and modern cipher suites to maintain a secure connection to Azure Active Directory for Azure, Office 365, and Microsoft 365 services. For additional guidance, refer to [Enable support for TLS 1.2 in your environment, in preparation for upcoming deprecation of Azure AD TLS 1.0/1.1](https://docs.microsoft.com/troubleshoot/azure/active-directory/enable-support-tls-environment).
+Users, services, and applications that interact with Azure Active Directory and Microsoft Graph, should use TLS 1.2 and modern cipher suites to maintain a secure connection to Azure Active Directory for Azure, Office 365, and Microsoft 365 services. For additional guidance, refer to [Enable support for TLS 1.2 in your environment, in preparation for upcoming deprecation of Azure AD TLS 1.0/1.1](/troubleshoot/azure/active-directory/enable-support-tls-environment).
To learn more about the new App registrations experience, see the [App registrat
We've fixed a known issue whereby users were required to re-register if they were disabled for per-user Multi-Factor Authentication (MFA) and then enabled for MFA through a Conditional Access policy.
-To require users to re-register, you can select the **Required re-register MFA** option from the user's authentication methods in the Azure AD portal. For more information about migrating users from per-user MFA to Conditional Access-based MFA, see [Convert users from per-user MFA to Conditional Access based MFA](../authentication/howto-mfa-getstarted.md#convert-users-from-per-user-mfa-to-conditional-access-based-mfa).
+To require users to re-register, you can select the **Required re-register MFA** option from the user's authentication methods in the Azure AD portal.
active-directory How To Connect Sso Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-quick-start.md
The use of third-party Active Directory Group Policy extensions to roll out the
#### Known browser limitations
-Seamless SSO doesn't work in private browsing mode on Firefox. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Microsoft Edge (legacy) is no longer supported.
+Seamless SSO doesn't work in private browsing mode on Firefox. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode. Microsoft Edge (legacy) is no longer supported. Seamless SSO supports the next version of Microsoft Edge based on Chromium and it works in InPrivate and Guest mode by design. Seamless SSO may require additional configuration to work in InPrivate and Guest mode with versions of Microsoft Edge Chromium and Google Chrome browsers:
+
+**AmbientAuthenticationInPrivateModesEnabled** may need to be configured for InPrivate and/or guest users; see the corresponding documentation: [Microsoft Edge Chromium](/DeployEdge/microsoft-edge-policies#ambientauthenticationinprivatemodesenabled); [Google Chrome](https://chromeenterprise.google/policies/?policy=AmbientAuthenticationInPrivateModesEnabled).
## Step 4: Test the feature
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
# Migrate from federation to cloud authentication
-In this article, you learn how to deploy cloud user authentication with either Azure Active Directory [Password hash synchronization (PHS)](whatis-phs.md) or [Pass-through authentication (PTA)](how-to-connect-pta.md). While we present the use case for moving from [Active Directory Federation Services (AD FS)](whatis-fed.md) to cloud authentication methods, the guidance substantially applies other to on premises systems as well.
+In this article, you learn how to deploy cloud user authentication with either Azure Active Directory [Password hash synchronization (PHS)](whatis-phs.md) or [Pass-through authentication (PTA)](how-to-connect-pta.md). While we present the use case for moving from [Active Directory Federation Services (AD FS)](whatis-fed.md) to cloud authentication methods, the guidance substantially applies to other on-premises systems as well.
Before you continue, we suggest that you review our guide on [choosing the right authentication method](choose-ad-authn.md) and compare methods most suitable for your organization.
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
For more information on Azure AD Multi-Factor Authentication, see [What is Azure
## User experience
-Azure Active Directory Identity Protection will prompt your users to register the next time they sign in interactively and they will have 14 days to complete registration. During this 14-day period, they can bypass registration but at the end of the period they will be required to register before they can complete the sign-in process.
+Azure Active Directory Identity Protection will prompt your users to register the next time they sign in interactively and they will have 14 days to complete registration. During this 14-day period, they can bypass registration if MFA is not required as a condition, but at the end of the period they will be required to register before they can complete the sign-in process.
For an overview of the related user experience, see:
active-directory Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/common-scenarios.md
# Centralize application management with Azure AD
-Passwords, both an IT nightmare and a pain for employees across the world. This is why more and more companies are turning to Azure Active Directory, Microsoft's Identity and Access Management solution for the cloud and all your other resources. Jump from application to application without having to enter a password for each one. Jump from Outlook, to Workday, to ADP as fast as you can open them up, quickly and securely. Then collaborate with partners and even others outside your organization all without having to call IT. What's more, Azure AD helps manage risk by securing the apps you use with things like multi-factor authentication to verify who you are, using continuously adaptive machine learning and security intelligence to detect suspicious sign-ins giving you secure access to the apps you need, wherever you are. It's not only great for users but for IT as well. With just-in-time access reviews and a full scale governance suite, Azure AD helps you stay in compliance and enforce policies too. And get this, you can even automate provisioning user accounts, making access management a breeze. check out some of the common scenarios that customer use Azure Active Directory's application management capabilities for.
+Passwords, both an IT nightmare and a pain for employees across the world. This is why more and more companies are turning to Azure Active Directory, Microsoft's Identity and Access Management solution for the cloud and all your other resources. Jump from application to application without having to enter a password for each one. Jump from Outlook, to Workday, to ADP as fast as you can open them up, quickly and securely. Then collaborate with partners and even others outside your organization all without having to call IT. What's more, Azure AD helps manage risk by securing the apps you use with things like multi-factor authentication to verify who you are, using continuously adaptive machine learning and security intelligence to detect suspicious sign-ins giving you secure access to the apps you need, wherever you are. It's not only great for users but for IT as well. With just-in-time access reviews and a full-scale governance suite, Azure AD helps you stay in compliance and enforce policies too. And get this, you can even automate provisioning user accounts, making access management a breeze. Check out some of the common scenarios that customers use Azure Active Directory's application management capabilities for.
**Common scenarios**
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/delete-application-portal.md
Previously updated : 07/12/2021 Last updated : 07/16/2021
To delete an application from your Azure AD tenant, you need:
>[!IMPORTANT] >Use a non-production environment to test the steps in this quickstart.
+> [!NOTE]
+>To delete an application from Azure AD, a user must be assigned one of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+ ## Delete an application from your Azure AD tenant To delete an application from your Azure AD tenant:
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Users can download an **Intune-managed browser**:
- **For Android devices**, from the [Google play store](https://play.google.com/store/apps/details?id=com.microsoft.intune) -- **For Apple devices**, from the [Apple App Store](https://itunes.apple.com/us/app/microsoft-intune-managed-browser/id943264951?mt=8) or they can download the [My Apps mobile app for iOS ](https://apps.apple.com/us/app/my-apps-azure-active-directory/id824048653)
+- **For Apple devices**, from the [Apple App Store](https://apps.apple.com/us/app/intune-company-portal/id719171358) or they can download the [My Apps mobile app for iOS ](https://appadvice.com/app/my-apps-azure-active-directory/824048653)
**Let users open their apps from a browser extension.**
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
This article provides a list of SDK samples, which demonstrate use of their resp
| .NET | [Deploy an Azure Resource Manager template from a Windows VM using managed identities for Azure resources](https://github.com/Azure-Samples/windowsvm-msi-arm-dotnet) | | .NET Core | [Call Azure services from a Linux VM using managed identities for Azure resources](https://github.com/Azure-Samples/linuxvm-msi-keyvault-arm-dotnet/) | | Go | [Azure identity client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential)
-| Node.js | [Manage resources using managed identities for Azure resources](https://azure.microsoft.com/resources/samples/resources-node-manage-resources-with-msi/) |
+| Node.js | [Manage resources using managed identities for Azure resources](https://github.com/Azure-Samples/resources-node-manage-resources-with-msi) |
| Python | [Use managed identities for Azure resources to authenticate simply from inside a VM](https://azure.microsoft.com/resources/samples/resource-manager-python-manage-resources-with-msi/) | | Ruby | [Manage resources from a VM with managed identities for Azure resources enabled](https://github.com/Azure-Samples/resources-ruby-manage-resources-with-msi/) |
active-directory Qs Configure Sdk Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md
Azure supports multiple programming platforms through a series of [Azure SDKs](h
| SDK | Sample | | | |
-| .NET | [Manage resource from a VM enabled with managed identities for Azure resources enabled](https://azure.microsoft.com/resources/samples/aad-dotnet-manage-resources-from-vm-with-msi/) |
-| Java | [Manage storage from a VM enabled with managed identities for Azure resources](https://azure.microsoft.com/resources/samples/compute-java-manage-resources-from-vm-with-msi-in-aad-group/)|
+| .NET | [Manage resources from a VM with managed identities for Azure resources enabled](https://github.com/Azure-Samples/aad-dotnet-manage-resources-from-vm-with-msi) |
+| Java | [Manage storage from a VM enabled with managed identities for Azure resources](https://github.com/Azure-Samples/compute-java-manage-resources-from-vm-with-msi-in-aad-group)|
| Node.js| [Create a VM with system-assigned managed identity enabled](https://azure.microsoft.com/resources/samples/compute-node-msi-vm/) | | Python | [Create a VM with system-assigned managed identity enabled](https://azure.microsoft.com/resources/samples/compute-python-msi-vm/) | | Ruby | [Create Azure VM with an system-assigned identity enabled](https://github.com/Azure-Samples/compute-ruby-msi-vm/) |
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal:
1. On the **Set up Single Sign-On with SAML** page, enter the following values: a. In the **Sign on URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/login`.
+ `https://<FQDN>/remote/login`.
b. In the **Identifier** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/metadata`.
+ `https://<FQDN>/remote/saml/metadata/`.
c. In the **Reply URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/login`.
+ `https://<FQDN>/remote/saml/login/`.
d. In the **Logout URL** box, enter a URL in the pattern
- `https://<FQDN>/remote/saml/logout`.
+ `https://<FQDN>/remote/saml/logout/`.
> [!NOTE] > These values are just patterns. You need to use the actual **Sign on URL**, **Identifier**, **Reply URL**, and **Logout URL**. Contact [Fortinet support](https://support.fortinet.com) for guidance. You can also refer to the example patterns shown in the Fortinet documentation and the **Basic SAML Configuration** section in the Azure portal.
To complete these steps, you'll need the values you recorded earlier:
```console config user saml edit azure
+ set cert <FortiGate VPN Server Certificate Name>
set entity-id <Entity ID> set single-sign-on-url <Reply URL> set single-logout-url <Logout URL>
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure FortiGate VPN you can enforce Session control, which protects exfiltration and infiltration of your organizationΓÇÖs sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
+Once you configure FortiGate VPN you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad)
active-directory Learnupon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/learnupon-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<companyname>.learnupon.com/saml/consumer` > [!NOTE]
- > The value is not real. Update the value with the actual Reply URL. Contact [LearnUpon Client support team](https://www.learnupon.com/features/support/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The value is not real. Update the value with the actual Reply URL. Contact [LearnUpon Client support team](https://www.learnupon.com/contact/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, locate the **THUMBPRINT** - This will be added to your LearnUpon SAML Settings.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
### Create LearnUpon test user
-In this section, a user called Britta Simon is created in LearnUpon. LearnUpon supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in LearnUpon, a new one is created after authentication. If you need to create an user manually, you need to contact [LearnUpon support team](https://www.learnupon.com/features/support/).
+In this section, a user called Britta Simon is created in LearnUpon. LearnUpon supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in LearnUpon, a new one is created after authentication. If you need to create a user manually, you need to contact the [LearnUpon support team](https://www.learnupon.com/contact/).
## Test SSO
app-service App Service Key Vault References https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-key-vault-references.md
If your vault is configured with [network restrictions](../key-vault/general/ove
1. Make sure the application has outbound networking capabilities configured, as described in [App Service networking features](./networking-features.md) and [Azure Functions networking options](../azure-functions/functions-networking-options.md).
+ Linux applications attempting to use private endpoints additionally require that the app be explicitly configured to have all traffic route through the virtual network. This requirement will be removed in a forthcoming update. To set this, use the following CLI command:
+
+ ```azurecli
+ az webapp config set --subscription <sub> -g <rg> -n <appname> --generic-configurations '{"vnetRouteAllEnabled": true}'
+ ```
+ 2. Make sure that the vault's configuration accounts for the network or subnet through which your app will access it. ### Access vaults with a user-assigned identity
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/overview-diagnostics.md
Title: Diagnostics and solve tool description: Learn how you can troubleshoot issues with your app in Azure App Service with the diagnostics and solve tool in the Azure portal. keywords: app service, azure app service, diagnostics, support, web app, troubleshooting, self-help- Last updated 10/18/2019-
automanage Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/faq.md
- Title: Azure Automanage for virtual machines FAQ
-description: Answers to frequently asked questions about Azure Automanage for virtual machines.
----- Previously updated : 02/22/2021---
-# Frequently asked questions for Azure Automanage for VMs
-
-This article provides answers to some of the most common questions about [Azure Automanage for virtual machines](automanage-virtual-machines.md).
-
-If your Azure issue is not addressed in this article, visit the Azure forums on [MSDN and Stack Overflow](https://azure.microsoft.com/support/forums/). You can post your issue in these forums, or post to [@AzureSupport on Twitter](https://twitter.com/AzureSupport). You can also submit an Azure support request. To submit a support request, on the [Azure support page](https://azure.microsoft.com/support/options/), select **Get support**.
--
-## Azure Automanage for virtual machines
-
-**What are all of the prerequisites required to enable Azure Automanage?**
-
-The following are prerequisites for enabling Azure Automanage:
-- Supported [Windows Server versions](automanage-windows-server.md#supported-windows-server-versions) and [Linux distros](automanage-linux.md#supported-linux-distributions-and-versions)
-- VMs must be in a supported region
-- User must have correct permissions
-- Non-scale set VMs only
-- Automanage does not support Sandbox subscriptions at this time
-
-**What Azure RBAC permission is needed to enable Automanage?**
-
-If you are enabling Automanage on a VM with an existing Automanage Account, you need the Contributor role on the Resource Group where the VM resides.
-
-If you are using a new Automanage Account when enabling, you must have either the Owner role or have Contributor + User Access Administrator role to the subscription.
--
-**What regions are supported?**
-
-The full list of supported regions is available [here](./automanage-virtual-machines.md#supported-regions).
--
-**Which capabilities does Azure Automanage automate?**
-
-Automanage enrolls, configures, and monitors the services listed [here](automanage-virtual-machines.md) throughout the lifecycle of the VM.
-
-**Does Azure Automanage work with Azure Arc-enabled VMs?**
-
-Automanage currently does not support Arc-enabled VMs.
-
-**Can I customize configurations on Azure Automanage?**
-
-Customers can customize settings for specific services, like Azure Backup retention, through configuration preferences. For the full list of settings that can be changed, see our documentation [here](automanage-virtual-machines.md#customizing-an-environment-using-preferences).
--
-**Does Azure Automanage work with both Linux and Windows VMs?**
-
-Yes, see the supported [Windows Server versions](automanage-windows-server.md#supported-windows-server-versions) and [Linux distros](automanage-linux.md#supported-linux-distributions-and-versions).
--
-**Can I selectively apply Automanage on only a set of VMs?**
-
-Automanage can be enabled with point-and-click simplicity on selected new and existing VMs. Automanage can also be disabled at any time.
--
-**Does Azure Automanage support VMs in a Virtual Machine Scale Set?**
-
-No, Azure Automanage does not currently support VMs in a Virtual Machine Scale Set.
--
-**How much does Azure Automanage cost?**
-
-Azure Automanage is available at no additional cost in public preview. Attached Azure resources, such as Azure Backup, will incur cost.
--
-**Can I apply Automanage through Azure policy?**
-
-Yes, we have a built-in policy that will automatically apply Automanage to all VMs within your defined scope. You will also specify the environment configuration (DevTest or Production) along with your Automanage account. Learn more about enabling Automanage through Azure policy [here](virtual-machines-policy-enable.md).
--
-**What is an Automanage account?**
-
-The Automanage Account is an MSI (Managed Service Identity) that provides the security context or the identity under which the automated operations occur.
--
-**When enabling Automanage, does it impact any additional machines besides the machine(s) I selected?**
-
-If your VM is linked to an existing Log Analytics workspace, we will reuse that workspace to apply these solutions: Change Tracking, Inventory, and Update Management. All machines connected to that workspace will have those solutions enabled.
--
-**Can I change the environment of my machine?**
-
-At this time, you will need to disable Automanage for that machine and then re-enable Automanage with the desired environment and preferences.
--
-**If my machine is already configured for a service, like Update Management, will Automanage reconfigure it?**
-No, Automanage will not reconfigure it. We will begin to monitor the resources associated to that service for drift.
--
-**Why does my machine have a Failed status in the Automanage portal?**
-
-If you see the status as *Failed*, you can troubleshoot the deployment in a few different ways:
-* Go to **Resource groups**, select your resource group, click on **Deployments** and see the *Failed* status there along with error details.
-* Go to **Subscriptions**, select your subscription, click on **Deployments** and see the *Failed* status there along with error details.
-* You can also visit the activity log of a machine, which will contain an entry for "Create or Update Configuration Profile Assignments". This may also contain more details on your deployment.
-
-**How can I get troubleshooting support for Automanage?**
-
-You can file a [technical support case ticket](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). For the **Service** option, search for and select *Automanage* under the *Monitoring and Management* section.
--
-## Next steps
-
-Try enabling Automanage for virtual machines in the Azure portal.
-
-> [!div class="nextstepaction"]
-> [Enable Automanage for virtual machines in the Azure portal](quick-create-virtual-machines-portal.md)
automanage Move Automanaged Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/move-automanaged-vms.md
Once you have moved your VMs across regions, you may re-enable Automanage on the
## Next steps * [Learn more about Azure Automanage](./automanage-virtual-machines.md)
-* [View frequently asked questions about Azure Automanage](./faq.md)
+* [View frequently asked questions about Azure Automanage](./faq.yml)
automanage Virtual Machines Custom Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/virtual-machines-custom-preferences.md
Azure Automanage creates default resource groups to store resources in. Check re
Get the most frequently asked questions answered in our FAQ. > [!div class="nextstepaction"]
-> [Frequently Asked Questions](faq.md)
+> [Frequently Asked Questions](faq.yml)
automation Automation Managing Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-managing-data.md
This article contains several topics explaining how data is protected and secured in an Azure Automation environment.
-## TLS 1.2 enforcement for Azure Automation
+## TLS 1.2 for Azure Automation
To ensure the security of data in transit to Azure Automation, we strongly encourage you to configure the use of Transport Layer Security (TLS) 1.2. The following is a list of methods or clients that communicate over HTTPS to the Automation service:
automation Automation Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-network-configuration.md
If your nodes are located in a private network, the port and URLs defined above
If you are using DSC resources that communicate between nodes, such as the [WaitFor* resources](/powershell/scripting/dsc/reference/resources/windows/waitForAllResource), you also need to allow traffic between nodes. See the documentation for each DSC resource to understand these network requirements.
-To understand client requirements for TLS 1.2, see [TLS 1.2 enforcement for Azure Automation](automation-managing-data.md#tls-12-enforcement-for-azure-automation).
+To understand client requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](automation-managing-data.md#tls-12-for-azure-automation).
## Update Management and Change Tracking and Inventory
automation Automation Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-webhooks.md
A webhook allows an external service to start a particular runbook in Azure Auto
![WebhooksOverview](media/automation-webhooks/webhook-overview-image.png)
-To understand client requirements for TLS 1.2 with webhooks, see [TLS 1.2 enforcement for Azure Automation](automation-managing-data.md#tls-12-enforcement-for-azure-automation).
+To understand client requirements for TLS 1.2 with webhooks, see [TLS 1.2 for Azure Automation](automation-managing-data.md#tls-12-for-azure-automation).
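As a rough illustration of how an external service starts a runbook (not the service's official client; the webhook URL below is a hypothetical placeholder), a client POSTs JSON to the webhook URL. A minimal Python sketch using only the standard library:

```python
import json
import urllib.request

# Hypothetical placeholder - use the URL returned when you create the webhook.
webhook_url = "https://example.webhook.azure-automation.example/webhooks?token=abc123"

# Runbook input parameters are carried in the JSON request body.
body = json.dumps({"VMName": "vm1"}).encode("utf-8")

req = urllib.request.Request(
    webhook_url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send the request;
# it is left unsent here so the sketch has no network dependency.
```

Because a webhook authenticates callers via the token embedded in its URL, treat the URL itself as a secret.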
## Webhook properties
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/change-tracking/overview.md
For limits that apply to Change Tracking and Inventory, see [Azure Automation se
Change Tracking and Inventory is supported on all operating systems that meet Log Analytics agent requirements. See [supported operating systems](../../azure-monitor/agents/agents-overview.md#supported-operating-systems) for a list of the Windows and Linux operating system versions that are currently supported by the Log Analytics agent.
-To understand client requirements for TLS 1.2, see [TLS 1.2 enforcement for Azure Automation](../automation-managing-data.md#tls-12-enforcement-for-azure-automation).
+To understand client requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](../automation-managing-data.md#tls-12-for-azure-automation).
### Python requirement
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/update-management/operating-system-requirements.md
The following table lists operating systems not supported by Update Management:
## System requirements
-The following information describes operating system-specific requirements. For additional guidance, see [Network planning](plan-deployment.md#ports). To understand requirements for TLS 1.2, see [TLS 1.2 enforcement for Azure Automation](../automation-managing-data.md#tls-12-enforcement-for-azure-automation).
+The following information describes operating system-specific requirements. For additional guidance, see [Network planning](plan-deployment.md#ports). To understand requirements for TLS 1.2, see [TLS 1.2 for Azure Automation](../automation-managing-data.md#tls-12-for-azure-automation).
### Windows
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
Automation support of service tags allows or denies the traffic for the Automati
**Type:** Plan for change
-Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2.
+Azure Automation fully supports TLS 1.2 and all client calls (through webhooks, DSC nodes, and hybrid worker). TLS 1.1 and TLS 1.0 are still supported for backward compatibility with older clients until customers standardize and fully migrate to TLS 1.2. To learn more, see the [documentation](automation-managing-data.md#tls-12-for-azure-automation).
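To sketch what standardizing on TLS 1.2 looks like from the client side (an illustration using Python's standard `ssl` module, not Automation-specific code), a client can refuse to negotiate anything older than TLS 1.2:

```python
import ssl

# Build a client-side context that will not negotiate below TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Any HTTPS connection opened with this context (for example via
# urllib.request.urlopen(url, context=context)) now requires TLS 1.2+.
```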
## January 2020
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
Last updated 07/13/2021
-# Create SQL Managed iInstance - Azure Arc using Azure Data Studio
+# Create SQL Managed Instance - Azure Arc using Azure Data Studio
This document walks you through the steps for installing Azure SQL Managed Instance - Azure Arc using Azure Data Studio.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/overview.md
When you connect your machine to Azure Arc-enabled servers, it enables the abili
- Monitor your connected machine guest operating system performance, and discover application components to monitor their processes and dependencies with other resources the application communicates using [VM insights](../../azure-monitor/vm/vminsights-overview.md). -- Simplify deployment using other Azure services like Azure Automation [State Configuration](../../automation/automation-dsc-overview.md) and Azure Monitor Log Analytics workspace, using the supported [Azure VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. This includes performing post-deployment configuration or software installation using the Custom Script Extension.
+- Simplify deployment using other Azure services like Azure Monitor Log Analytics workspace, using the supported [Azure VM extensions](manage-vm-extensions.md) for your non-Azure Windows or Linux machine. This includes performing post-deployment configuration or software installation using the Custom Script Extension.
- Use [Update Management](../../automation/update-management/overview.md) in Azure Automation to manage operating system updates for your Windows and Linux servers
The Connected Machine agent sends a regular heartbeat message to the service eve
* Before evaluating or enabling Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
-* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+* Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
You can use the [monitoring](cache-how-to-monitor.md) features of Azure Cache fo
You can monitor the following metrics to help determine if you need to scale. - Redis Server Load
- - Redis is a single threaded process and high Redis server load means that Redis is unable to keep pace with the requests from all the client connections. In such situations, it helps to enable clustering or increase shard count so that client connections get distributed across multiple Redis processes.
+ - Redis server is a single-threaded process. High Redis server load means that the server is unable to keep pace with the requests from all the client connections. In such situations, it helps to enable clustering or increase shard count so that overhead functions, such as TLS encryption/decryption and TLS connection and disconnection handling, are distributed across multiple Redis processes.
+ - For more information, see [Set up clustering](cache-how-to-premium-clustering.md#set-up-clustering).
- Memory Usage
- - High memory usage indicates that your data size is too large for the current cache size and you should consider scaling to a cache size with larger memory.
+ - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory.
- Client connections - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier, or scaling out to enable clustering and increase shard count. Your choice depends on the Redis server load and memory usage. - For more information on connection limits by cache size, see [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
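To see why a larger shard count spreads load, recall that a clustered Redis cache maps every key to one of 16384 hash slots (CRC16 of the key, mod 16384), and the slots are divided among the shards. A small Python sketch of that mapping (an illustration of the published algorithm, not library code; it ignores hash tags):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot (0-16383) a key belongs to; more shards = fewer slots per shard."""
    return crc16_xmodem(key.encode()) % 16384
```

With, say, 4 shards, each shard owns roughly a quarter of the 16384 slots, so connections and TLS work for keys in different slot ranges land on different Redis processes.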
You can monitor the following metrics to help determine if you need to scale.
If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs.
-For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq) to view the complete list of available SKU specifications.
+For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](/azure/azure-cache-for-redis/cache-planning-faq).
## Scale a cache
The following list contains answers to commonly asked questions about Azure Cach
- You can scale from one **Premium** cache pricing tier to another. - You can't scale from a **Basic** cache directly to a **Premium** cache. First, scale from **Basic** to **Standard** in one scaling operation, and then from **Standard** to **Premium** in a later scaling operation. - If you enabled clustering when you created your **Premium** cache, you can [change the cluster size](cache-how-to-premium-clustering.md#cluster-size). If your cache was created without clustering enabled, you can configure clustering at a later time.
-
- For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+
+For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
### After scaling, do I have to change my cache name or access keys?
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation. - When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.-- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling down a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling a **Standard** or **Premium** cache to a smaller size, data can be lost if the data size exceeds the new smaller size. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
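The allkeys-lru behavior referenced above can be sketched with a toy cache (an illustration of the eviction idea only; Redis itself uses an approximated LRU, not an exact one):

```python
from collections import OrderedDict

class ToyLRUCache:
    """Evicts the least recently used key when capacity is exceeded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)    # touching a key makes it most recent
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # drop the least recently used key

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)
        return self._data[key]

cache = ToyLRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" is now most recently used
cache.set("c", 3)  # cache full: "b" (least recently used) is evicted
```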
### Is my custom databases setting affected during scaling?
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure Services in FedRAMP and DoD SRG Audit Scope
description: This article contains tables for Azure Public and Azure Government that illustrate what FedRAMP (Moderate vs. High) and DoD SRG (Impact level 2, 4, 5 or 6) audit scope a given service has reached. Previously updated : 05/17/2021 Last updated : 07/19/2021
This article provides a detailed list of in-scope cloud services across Azure Pu
* Planned 2021 = indicates the service will be reviewed by 3PAO and JAB in 2021. Once the service is authorized, status will be updated ## Azure public services by audit scope
-| _Last Updated: May 2021_ |
+| _Last Updated: July 2021_ |
| Azure Service | DoD CC SRG IL 2 | FedRAMP Moderate | FedRAMP High | Planned 2021 |
|---|:-:|:-:|:-:|:-:|
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure for Education](https://azure.microsoft.com/developer/students/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Azure File Sync](https://azure.microsoft.com/services/storage/files/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Azure Firewall Manager](https://azure.microsoft.com/services/firewall-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Log Analytics](../../azure-monitor/logs/data-platform-logs.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Logic Apps](https://azure.microsoft.com/services/logic-apps/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Media Services](https://azure.microsoft.com/services/media-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [Microsoft 365 Defender](https://docs.microsoft.com/microsoft-365/security/defender/microsoft-365-defender?view=o365-worldwide) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Microsoft Azure Attestation](https://azure.microsoft.com/services/azure-attestation/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Microsoft Azure Peering Service](../../peering-service/about.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Time Series Insights](https://azure.microsoft.com/services/time-series-insights/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
+| [UEBA for Sentinel](https://docs.microsoft.com/azure/sentinel/identify-threats-with-entity-behavior-analytics#what-is-user-and-entity-behavior-analytics-ueba) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Virtual Machines (incl. Reserved Instances)](https://azure.microsoft.com/services/virtual-machines/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | | [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
This article provides a detailed list of in-scope cloud services across Azure Pu
**&ast;&ast;** FedRAMP High certification for Azure Databricks is applicable for limited regions in Azure Commercial. To configure Azure Databricks for FedRAMP High use, please reach out to your Microsoft or Databricks Representative. ## Azure Government services by audit scope
-| _Last Updated: May 2021_ |
+| _Last Updated: July 2021_ |
| Azure Service | DoD CC SRG IL 2 | DoD CC SRG IL 4 | DoD CC SRG IL 5 (Azure Gov)**&ast;** | DoD CC SRG IL 5 (Azure DoD) **&ast;&ast;** | FedRAMP High | DoD CC SRG IL 6 |
|---|:-:|:-:|:-:|:-:|:-:|:-:|
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Network Watcher(Traffic Analytics)](../../network-watcher/traffic-analytics.md) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Planned Maintenance](https://docs.microsoft.com/azure/virtual-machines/maintenance-control-portal) | :heavy_check_mark: | | | | :heavy_check_mark: |
| [Power BI](https://powerbi.microsoft.com/) | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | | [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | :heavy_check_mark: | | | | :heavy_check_mark: |
This article provides a detailed list of in-scope cloud services across Azure Pu
**&ast;** DoD CC SRG IL5 (Azure Gov) column shows DoD CC SRG IL5 certification status of services in Azure Government. For details, please refer to [Azure Government Isolation Guidelines for Impact Level 5](../documentation-government-impact-level-5.md)
-**&ast;&ast;** DoD CC SRG IL5 (Azure DoD) column shows DoD CC SRG IL5 certification status for services in Azure Government DoD regions.
+**&ast;&ast;** DoD CC SRG IL5 (Azure DoD) column shows DoD CC SRG IL5 certification status for services in Azure Government DoD regions.
azure-maps Map Add Image Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-image-layer-android.md
This article shows you how to overlay an image to a fixed set of coordinates. He
## Add an image layer
-The following code overlays an image of a [map of Newark, New Jersey, from 1922](https://www.lib.utexas.edu/maps/historical/newark_nj_1922.jpg) on the map. This image is added to the `drawable` folder of the project. An image layer is created by setting the image and coordinates for the four corners in the format `[Top Left Corner, Top Right Corner, Bottom Right Corner, Bottom Left Corner]`. Adding image layers below the `label` layer is often desirable.
+The following code overlays an image of a map of Newark, New Jersey, from 1922 on the map. This image is added to the `drawable` folder of the project. An image layer is created by setting the image and coordinates for the four corners in the format `[Top Left Corner, Top Right Corner, Bottom Right Corner, Bottom Left Corner]`. Adding image layers below the `label` layer is often desirable.
::: zone pivot="programming-language-java-android"
The following screenshot shows a map with a KML ground overlay overlaid using an
See the following article to learn more about ways to overlay imagery on a map. > [!div class="nextstepaction"]
-> [Add a tile layer](how-to-add-tile-layer-android-map.md)
+> [Add a tile layer](how-to-add-tile-layer-android-map.md)
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/map-add-image-layer.md
The image layer supports the following image formats:
## Add an image layer
-The following code overlays an image of a [map of Newark, New Jersey, from 1922](https://www.lib.utexas.edu/maps/historical/newark_nj_1922.jpg) on the map. An [ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer) is created by passing a URL to an image, and coordinates for the four corners in the format `[Top Left Corner, Top Right Corner, Bottom Right Corner, Bottom Left Corner]`.
+The following code overlays an image of a map of Newark, New Jersey, from 1922 on the map. An [ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer) is created by passing a URL to an image, and coordinates for the four corners in the format `[Top Left Corner, Top Right Corner, Bottom Right Corner, Bottom Left Corner]`.
```javascript //Create an image layer and add it to the map.
Learn more about the classes and methods used in this article:
See the following articles for more code samples to add to your maps: > [!div class="nextstepaction"]
-> [Add a tile layer](./map-add-tile-layer.md)
+> [Add a tile layer](./map-add-tile-layer.md)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-bing-maps-web-app.md
Both Bing and Azure maps support overlaying georeferenced images on the map so t
**Before: Bing Maps**
-When creating a ground overlay in Bing Maps you need to specify the URL to the image to overlay and a bounding box to bind the image to on the map. This example overlays a map image of [Newark New Jersey from 1922](https://www.lib.utexas.edu/maps/historical/newark_nj_1922.jpg) on the map.
+When creating a ground overlay in Bing Maps you need to specify the URL to the image to overlay and a bounding box to bind the image to on the map. This example overlays a map image of Newark New Jersey from 1922 on the map.
```html <!DOCTYPE html>
No resources to be cleaned up.
Learn more about migrating from Bing Maps to Azure Maps. > [!div class="nextstepaction"]
-> [Migrate a web service](migrate-from-bing-maps-web-services.md)
+> [Migrate a web service](migrate-from-bing-maps-web-services.md)
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/migrate-from-google-maps-web-app.md
Both Azure and Google maps support overlaying georeferenced images on the map. G
#### Before: Google Maps
-Specify the URL to the image you want to overlay and a bounding box to bind the image on the map. This example overlays a map image of [Newark New Jersey from 1922](https://www.lib.utexas.edu/maps/historical/newark_nj_1922.jpg) on the map.
+Specify the URL to the image you want to overlay and a bounding box to bind the image on the map. This example overlays a map image of Newark New Jersey from 1922 on the map.
```html <!DOCTYPE html>
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
description: Options for installing the Azure Monitor Agent (AMA) on Azure virtu
Previously updated : 11/17/2020 Last updated : 07/19/2021
Set-AzVMExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publishe
Use the following PowerShell commands to install the Azure Monitor agent on Azure Arc-enabled servers. # [Windows](#tab/PowerShellWindowsArc) ```powershell
-New-AzConnectedMachineExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <virtual-machine-name> -Location <location>
+New-AzConnectedMachineExtension -Name AMAWindows -ExtensionType AzureMonitorWindowsAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
``` # [Linux](#tab/PowerShellLinuxArc) ```powershell
-New-AzConnectedMachineExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <virtual-machine-name> -Location <location>
+New-AzConnectedMachineExtension -Name AMALinux -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -MachineName <arc-server-name> -Location <arc-server-location>
``` ## Azure CLI
Use the following CLI commands to install the Azure Monitor agent on Azure Arc en
# [Windows](#tab/CLIWindowsArc) ```azurecli
-az connectedmachine machine-extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+az connectedmachine extension create --name AzureMonitorWindowsAgent --publisher Microsoft.Azure.Monitor --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
``` # [Linux](#tab/CLILinuxArc) ```azurecli
-az connectedmachine machine-extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id>
+az connectedmachine extension create --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location>
```
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/data-sources-custom-logs.md
Use the following procedure to define a custom log file. Scroll to the end of t
### Step 1. Open the Custom Log Wizard The Custom Log Wizard runs in the Azure portal and allows you to define a new custom log to collect.
-1. In the Azure portal, select **Log Analytics workspaces** > your workspace > **Advanced Settings**.
-2. Click on **Data** > **Custom logs**.
+1. In the Azure portal, select **Log Analytics workspaces** > your workspace > **Settings**.
+2. Click on **Custom logs**.
3. By default, all configuration changes are automatically pushed to all agents. For Linux agents, a configuration file is sent to the Fluentd data collector. 4. Click **Add+** to open the Custom Log Wizard.
In the cases where your data can't be collected with custom logs, consider the f
## Next steps * See [Parse text data in Azure Monitor](../logs/parse-text.md) for methods to parse each imported log entry into multiple properties.
-* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
+* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
azure-monitor Alerts Metric https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/alerts-metric.md
description: Learn how to use Azure portal or CLI to create, view, and manage me
Previously updated : 01/11/2021 Last updated : 07/19/2021 # Create, view, and manage metric alerts using Azure Monitor
The following procedure describes how to create a metric alert rule in Azure por
15. Click **Done** to save the metric alert rule.
-> [!NOTE]
-> Metric alert rules created through portal are created in the same resource group as the target resource.
## View and manage with Azure portal
You can view and manage metric alert rules using the Manage Rules blade under Al
5. In the Edit Rule, click on the **Alert criteria** you want to edit. You can change the metric, threshold condition, and other fields as required. > [!NOTE]
- > You can't edit the **Target resource** and **Alert Rule Name** after the metric alert is created.
+ > You can't edit the **Alert Rule Name** after the metric alert rule is created.
6. Click **Done** to save your edits.
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/ip-addresses.md
You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports | | | | | |
-| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33<br/>51.105.67.161<br/>40.124.64.192<br/>20.44.12.194<br/>20.189.172.0<br/>13.69.106.208<br/>40.78.253.199<br/>40.78.253.198<br/>40.78.243.19 | 443 |
+| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>*.in.applicationinsights.azure.com |40.114.241.141<br/>104.45.136.42<br/>40.84.189.107<br/>168.63.242.221<br/>52.167.221.184<br/>52.169.64.244<br/>40.85.218.175<br/>104.211.92.54<br/>52.175.198.74<br/>51.140.6.23<br/>40.71.12.231<br/>13.69.65.22<br/>13.78.108.165<br/>13.70.72.233<br/>20.44.8.7<br/>13.86.218.248<br/>40.79.138.41<br/>52.231.18.241<br/>13.75.38.7<br/>102.133.155.50<br/>52.162.110.67<br/>191.233.204.248<br/>13.69.66.140<br/>13.77.52.29<br/>51.107.59.180<br/>40.71.12.235<br/>20.44.8.10<br/>40.71.13.169<br/>13.66.141.156<br/>40.71.13.170<br/>13.69.65.23<br/>20.44.17.0<br/>20.36.114.207 <br/>51.116.155.246 <br/>51.107.155.178 <br/>51.140.212.64 <br/>13.86.218.255 <br/>20.37.74.240 <br/>65.52.250.236 <br/>13.69.229.240 <br/>52.236.186.210<br/>52.167.107.65<br/>40.71.12.237<br/>40.78.229.32<br/>40.78.229.33<br/>51.105.67.161<br/>40.124.64.192<br/>20.44.12.194<br/>20.189.172.0<br/>13.69.106.208<br/>40.78.253.199<br/>40.78.253.198<br/>40.78.243.19 | 443 |
| Live Metrics Stream | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com|23.96.28.38<br/>13.92.40.198<br/>40.112.49.101<br/>40.117.80.207<br/>157.55.177.6<br/>104.44.140.84<br/>104.215.81.124<br/>23.100.122.113| 443 | ## Status Monitor
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/resource-logs-categories.md
If you think something is missing, you can open a GitHub comment at the
|BigDataPoolAppsEnded|Big Data Pool Applications Ended|No|
-## Microsoft.Synapse/workspaces/kustoPools
-
-|Category|Category Display Name|Costs To Export|
-||||
-|Command|Command|No|
-|FailedIngestion|Failed ingest operations|No|
-|IngestionBatching|Ingestion batching|No|
-|Query|Query|No|
-|SucceededIngestion|Successful ingest operations|No|
-|TableDetails|Table details|No|
-|TableUsageStatistics|Table usage statistics|No|
-- ## Microsoft.Synapse/workspaces/sqlPools |Category|Category Display Name|Costs To Export|
azure-monitor View Designer Conversion Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/view-designer-conversion-access.md
You may also wish to pin multiple visualizations from the Workbook or the entire
![Pin all](media/view-designer-conversion-access/pin-all.png) -- ## Sharing and Viewing Permissions
-Workbooks have the benefit of either being a private or shared document. By default, saved workbooks will be saved under **My Reports**, meaning that only the creator can view this workbook.
You can share your workbooks by selecting the **Share** icon from the top tool bar while in **Edit Mode**. You will be prompted to move your workbook to **Shared Reports**, which will generate a link that provides direct access to the workbook.
azure-monitor View Designer Conversion Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/view-designer-conversion-overview.md
Once selected, a gallery will be displayed listing out all the saved workbooks a
To start a new workbook, you may select the **Empty** template under **Quick start**, or the **New** icon in the top navigation bar. To view templates or return to saved workbooks, select the item from the gallery or search for the name in the search bar. To save a workbook, you will need to save the report with a specific title, subscription, resource group, and location.
-The workbook will autofill to the same settings as the LA workspace, with the same subscription, resource group, however, users may change these report settings. Workbooks are by default saved to *My Reports*, accessible only by the individual user. They can also be saved directly to shared reports or shared later.
-
-![Workbooks save](media/view-designer-conversion-overview/workbooks-save.png)
+The workbook will autofill to the same settings as the LA workspace, with the same subscription and resource group; however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
## Next steps
azure-monitor Workbooks Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-access-control.md
ibiza Previously updated : 10/23/2019 Last updated : 07/16/2021 # Access control
Access control in workbooks refers to two things:
* Access required to save workbooks
- - Saving private `("My")` workbooks requires no additional privileges. All users can save private workbooks, and only they can see those workbooks.
- - Saving shared workbooks requires write privileges in a resource group to save the workbook. These privileges are usually specified by the [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role, but can also be set via the *Workbooks Contributor* role.
+ - Saving workbooks requires write privileges in a resource group to save the workbook. These privileges are usually specified by the [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role, but can also be set via the *Workbooks Contributor* role.
## Standard roles with workbook-related privileges
Access control in workbooks refers to two things:
[Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) includes general `/write` privileges used by various monitoring tools for saving items (including `workbooks/write` privilege to save shared workbooks). *Workbooks Contributor* adds `workbooks/write` privileges to an object to save shared workbooks.
-No special privileges are required for users to save private workbooks that only they can see.
For custom roles:
-Add `microsoft.insights/workbooks/write` to save shared workbooks. For more details, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
+Add `microsoft.insights/workbooks/write` to save workbooks. For more details, see the [Workbook Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) role.
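As an illustration only, a custom role carrying the `microsoft.insights/workbooks/write` action could be sketched in Bicep; the role name, the `read` action, and the API version are assumptions, not from this article:

```bicep
// Deployed at subscription scope: az deployment sub create ...
targetScope = 'subscription'

// Role definition names must be GUIDs; derive one deterministically
param roleDefName string = guid(subscription().id, 'workbook-saver')

// Custom role that can read and save workbooks
resource workbookSaver 'Microsoft.Authorization/roleDefinitions@2018-01-01-preview' = {
  name: roleDefName
  properties: {
    roleName: 'Workbook Saver (custom)'
    description: 'Can read and save Azure workbooks'
    type: 'CustomRole'
    permissions: [
      {
        actions: [
          'microsoft.insights/workbooks/read'
          'microsoft.insights/workbooks/write'
        ]
        notActions: []
      }
    ]
    assignableScopes: [
      subscription().id
    ]
  }
}
```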
## Next steps
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-automate.md
This template shows how to deploy a simple workbook that displays a 'Hello World
| `workbookType` | The gallery that the workbook will be shown under. Supported values include workbook, `tsg`, Azure Monitor, etc. | | `workbookSourceId` | The ID of the resource instance to which the workbook will be associated. The new workbook will show up related to this resource instance - for example in the resource's table of content under _Workbook_. If you want your workbook to show up in the workbook gallery in Azure Monitor, use the string _Azure Monitor_ instead of a resource ID. | | `workbookId` | The unique guid for this workbook instance. Use _[newGuid()]_ to automatically create a new guid. |
-| `kind` | Used to specify if the created workbook is shared or private. Use value _shared_ for shared workbooks and _user_ for private ones. |
+| `kind` | Used to specify if the created workbook is shared. All new workbooks will use the value _shared_. |
| `location` | The Azure location where the workbook will be created. Use _[resourceGroup().location]_ to create it in the same location as the resource group | | `serializedData` | Contains the content or payload to be used in the workbook. Use the Resource Manager template from the workbooks UI to get the value |
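Putting the parameters from the table together, a minimal Bicep sketch of a workbook resource follows; the serialized content, `category` value, and API version are illustrative assumptions:

```bicep
param workbookDisplayName string = 'Hello World workbook'
param workbookSourceId string = 'Azure Monitor'
param workbookId string = newGuid() // workbook resource names are GUIDs

// serializedData is a JSON string: here, a single text step
var serializedData = '{"version":"Notebook/1.0","items":[{"type":1,"content":{"json":"Hello World!"}}]}'

resource workbook 'Microsoft.Insights/workbooks@2021-03-08' = {
  name: workbookId
  location: resourceGroup().location
  kind: 'shared' // all new workbooks use the value shared
  properties: {
    displayName: workbookDisplayName
    sourceId: workbookSourceId
    category: 'workbook' // gallery the workbook is shown under
    serializedData: serializedData
  }
}
```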
azure-monitor Workbooks Move Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/visualize/workbooks-move-region.md
This article describes how to move Azure Workbook resources to a different Azure
* Ensure that workbooks are supported in the target region.
-* These instructions apply to both shared workbooks (`microsoft.insights/workbooks`) and private workbooks (`microsoft.insights/myworkbooks`) saved in Azure Monitor and on most resource types.
+* These instructions apply to workbooks (`microsoft.insights/workbooks`) saved in Azure Monitor and on most resource types.
However, for workbooks specifically linked to the Application Insights resource type, those workbooks are stored in the Azure region where the Application Insights resource is saved.
azure-netapp-files Configure Ldap Extended Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/configure-ldap-extended-groups.md
na ms.devlang: na Previously updated : 07/15/2021 Last updated : 07/19/2021 # Configure ADDS LDAP with extended groups for NFS volume access
This article explains the considerations and steps for enabling LDAP with extend
* LDAP over TLS must *not* be enabled if you are using Azure Active Directory Domain Services (AADDS).
-* If you enable the LDAP with extended groups feature, LDAP-enabled [Kerberos volumes](configure-kerberos-encryption.md) will not correctly display the file ownership for LDAP users. A file or directory created by an LDAP user will default to `root` as the owner instead of the actual LDAP user. However, the `root` account can manually change the file ownership by using the command `chown <username> <filename>`.
- * You cannot modify the LDAP option setting (enabled or disabled) after you have created the volume. * The following table describes the Time to Live (TTL) settings for the LDAP cache. You need to wait until the cache is refreshed before trying to access a file or directory through a client. Otherwise, an access denied message appears on the client.
azure-resource-manager Deploy To Management Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-management-group.md
Title: Use Bicep to deploy resources to management group description: Describes how to create a Bicep file that deploys resources at the management group scope. Previously updated : 06/01/2021 Last updated : 07/19/2021 # Management group deployments with Bicep files
resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01'
} ```
-To target another management group, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set the `scope` property. Provide the management group name.
+To target another management group, add a [module](modules.md). Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set the `scope` property. Provide the management group name.
```bicep targetScope = 'managementGroup'
param otherManagementGroupName string
// module deployed at management group level but in a different management group module exampleModule 'module.bicep' = {
- name: 'deployToDifferntMG'
+ name: 'deployToDifferentMG'
scope: managementGroup(otherManagementGroupName) } ```
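For illustration, the referenced `module.bicep` might itself contain a management-group-scoped resource, such as a policy definition like the one shown earlier in this article; the definition name and policy rule below are hypothetical:

```bicep
// module.bicep - deployed into whichever management group the module is scoped to
targetScope = 'managementGroup'

resource policyDefinition 'Microsoft.Authorization/policyDefinitions@2019-09-01' = {
  name: 'locationRestriction'
  properties: {
    policyType: 'Custom'
    mode: 'All'
    policyRule: {
      if: {
        field: 'location'
        notIn: [
          'westus'
        ]
      }
      then: {
        effect: 'deny'
      }
    }
  }
}
```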
azure-resource-manager Deploy To Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-subscription.md
Title: Use Bicep to deploy resources to subscription description: Describes how to create a Bicep file that deploys resources to the Azure subscription scope. It shows how to create a resource group. Previously updated : 06/01/2021 Last updated : 07/19/2021 # Subscription deployments with Bicep files
resource exampleResource 'Microsoft.Resources/resourceGroups@2020-10-01' = {
For examples of deploying to the subscription, see [Create resource groups](#create-resource-groups) and [Assign policy definition](#assign-policy-definition).
-To deploy resources to a subscription that is different than the subscription from the operation, add a module. Use the [subscription function](bicep-functions-scope.md#subscription) to set the `scope` property. Provide the `subscriptionId` property to the ID of the subscription you want to deploy to.
+To deploy resources to a subscription that is different than the subscription from the operation, add a [module](modules.md). Use the [subscription function](bicep-functions-scope.md#subscription) to set the `scope` property. Provide the `subscriptionId` property to the ID of the subscription you want to deploy to.
```bicep targetScope = 'subscription'
param otherSubscriptionID string
// module deployed at subscription level but in a different subscription module exampleModule 'module.bicep' = {
- name: 'deployToDifferntSub'
+ name: 'deployToDifferentSub'
scope: subscription(otherSubscriptionID) } ```
azure-resource-manager Deploy To Tenant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-to-tenant.md
Title: Use Bicep to deploy resources to tenant description: Describes how to deploy resources at the tenant scope in a Bicep file. Previously updated : 06/01/2021 Last updated : 07/19/2021 # Tenant deployments with Bicep file
resource mgName_resource 'Microsoft.Management/managementGroups@2020-02-01' = {
### Scope to management group
-To target a management group within the tenant, add a module. Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set its `scope` property. Provide the management group name.
+To target a management group within the tenant, add a [module](modules.md). Use the [managementGroup function](bicep-functions-scope.md#managementgroup) to set its `scope` property. Provide the management group name.
```bicep targetScope = 'tenant'
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 06/04/2021 Last updated : 07/19/2021
To upgrade to the latest version, use:
az bicep upgrade ```
+To validate the install, use:
+
+```azurecli
+az bicep version
+```
+ For more commands, see [Bicep CLI](bicep-cli.md). > [!IMPORTANT]
azure-sql Authentication Azure Ad Only Authentication Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-create-server.md
Previously updated : 06/30/2021 Last updated : 07/19/2021 # Create server with Azure AD-only authentication enabled in Azure SQL
This how-to guide outlines the steps to create an [Azure SQL logical server](log
## Prerequisites
+- Version 2.26.1 or later is needed when using the Azure CLI. For more information on the installation and the latest version, see [Install the Azure CLI](/cli/azure/install-azure-cli).
- [Az 6.1.0](https://www.powershellgallery.com/packages/Az/6.1.0) module or higher is needed when using PowerShell.-- If you're provisioning a managed instance using PowerShell or Rest API, a virtual network and subnet needs to be created before you begin. For more information, see [Create a virtual network for Azure SQL Managed Instance](../managed-instance/virtual-network-subnet-create-arm-template.md).
+- If you're provisioning a managed instance using the Azure CLI, PowerShell, or Rest API, a virtual network and subnet needs to be created before you begin. For more information, see [Create a virtual network for Azure SQL Managed Instance](../managed-instance/virtual-network-subnet-create-arm-template.md).
## Permissions
To change the existing properties after server or managed instance creation, oth
## Azure SQL Database
+# [The Azure CLI](#tab/azure-cli)
+
+The Azure CLI command `az sql server create` is used to provision a new Azure SQL logical server. The below command will provision a new server with Azure AD-only authentication enabled.
+
+The server SQL Administrator login will be automatically created and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this server creation, the SQL Administrator login won't be used.
+
+The server Azure AD admin will be the account you set for `<AzureADAccount>`, and can be used to manage the server.
+
+Replace the following values in the example:
+
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+- `<AzureADAccountSID>`: The Azure AD Object ID for the user
+- `<ResourceGroupName>`: Name of the resource group for your Azure SQL logical server
+- `<ServerName>`: Use a unique Azure SQL logical server name
+
+```azurecli
+az sql server create --enable-ad-only-auth --external-admin-principal-type User --external-admin-name <AzureADAccount> --external-admin-sid <AzureADAccountSID> -g <ResourceGroupName> -n <ServerName>
+```
+
+For more information, see [az sql server create](/cli/azure/sql/server#az_sql_server_create).
+
+To check the server status after creation, use the following command:
+
+```azurecli
+az sql server show --name <ServerName> --resource-group <ResourceGroupName> --expand-ad-admin
+```
+ # [PowerShell](#tab/azure-powershell)
-The PowerShell command `New-AzSqlServer` is used to provision a new Azure SQL logical server. The below command will provision a new logical server with Azure AD-only authentication enabled.
+The PowerShell command `New-AzSqlServer` is used to provision a new Azure SQL logical server. The below command will provision a new server with Azure AD-only authentication enabled.
The server SQL Administrator login will be automatically created and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this server creation, the SQL Administrator login won't be used.
For more information, see [New-AzSqlServer](/powershell/module/az.sql/new-azsqls
The [Servers - Create Or Update](/rest/api/sql/2020-11-01-preview/servers/create-or-update) Rest API can be used to create an Azure SQL logical server with Azure AD-only authentication enabled during provisioning.
-The script below will provision an Azure SQL logical server, set the Azure AD admin as `<AzureADAccount>`, and enable Azure AD-only authentication. The server SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
+The script below will provision an Azure SQL logical server, set the Azure AD admin as `<AzureADAccount>`, and enable Azure AD-only authentication. The server SQL Administrator login will also be created automatically and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
The Azure AD admin, `<AzureADAccount>` can be used to manage the server when the provisioning is complete.
You can also use the following template. Use a [Custom deployment in the Azure p
## Azure SQL Managed Instance
+# [The Azure CLI](#tab/azure-cli)
+
+The Azure CLI command `az sql mi create` is used to provision a new Azure SQL Managed Instance. The below command will provision a new managed instance with Azure AD-only authentication enabled.
+
+> [!NOTE]
+> The script requires a virtual network and subnet be created as a prerequisite.
+
+The managed instance SQL Administrator login will be automatically created and the password will be set to a random password. Since SQL Authentication connectivity is disabled with this provisioning, the SQL Administrator login won't be used.
+
+The Azure AD admin will be the account you set for `<AzureADAccount>`, and can be used to manage the instance when the provisioning is complete.
+
+Replace the following values in the example:
+
+- `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`
+- `<AzureADAccountSID>`: The Azure AD Object ID for the user
+- `<managedinstancename>`: Name of the managed instance you want to create
+- `<ResourceGroupName>`: Name of the resource group for your managed instance. The resource group should also include the virtual network and subnet created
+- The `subnet` parameter needs to be updated with the `<Subscription ID>`, `<ResourceGroupName>`, `<VNetName>`, and `<SubnetName>`. Your subscription ID can be found in the Azure portal
+
+```azurecli
+az sql mi create --enable-ad-only-auth --external-admin-principal-type User --external-admin-name <AzureADAccount> --external-admin-sid <AzureADAccountSID> -g <ResourceGroupName> -n <managedinstancename> --subnet /subscriptions/<Subscription ID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VNetName>/subnets/<SubnetName>
+```
+
+For more information, see [az sql mi create](/cli/azure/sql/mi#az_sql_mi_create).
+ # [PowerShell](#tab/azure-powershell) The PowerShell command `New-AzSqlInstance` is used to provision a new Azure SQL Managed Instance. The below command will provision a new managed instance with Azure AD-only authentication enabled.
Replace the following values in the example:
- `<ResourceGroupName>`: Name of the resource group for your managed instance. The resource group should also include the virtual network and subnet created - `<Location>`: Location of the server, such as `West US`, or `Central US` - `<AzureADAccount>`: Can be an Azure AD user or group. For example, `DummyLogin`-- The `SubnetId` parameter needs to be updated with the `<ResourceGroupName>`, the `Subscription ID`, `<VNetName>`, and `<SubnetName>`. Your subscription ID can be found in the Azure portal
+- The `SubnetId` parameter needs to be updated with the `<Subscription ID>`, `<ResourceGroupName>`, `<VNetName>`, and `<SubnetName>`. Your subscription ID can be found in the Azure portal
```powershell
-New-AzSqlInstance -Name "<managedinstancename>" -ResourceGroupName "<ResourceGroupName>" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication -Location "<Location>" -SubnetId "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VNetName>/subnets/<SubnetName>" -LicenseType LicenseIncluded -StorageSizeInGB 1024 -VCore 16 -Edition "GeneralPurpose" -ComputeGeneration Gen4
+New-AzSqlInstance -Name "<managedinstancename>" -ResourceGroupName "<ResourceGroupName>" -ExternalAdminName "<AzureADAccount>" -EnableActiveDirectoryOnlyAuthentication -Location "<Location>" -SubnetId "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VNetName>/subnets/<SubnetName>" -LicenseType LicenseIncluded -StorageSizeInGB 1024 -VCore 16 -Edition "GeneralPurpose" -ComputeGeneration Gen4
``` For more information, see [New-AzSqlInstance](/powershell/module/az.sql/new-azsqlinstance).
Once the deployment is complete for your managed instance, you may notice that t
## Limitations -- Creating a server or instance using the Azure CLI or Azure portal with Azure AD-only authentication enabled during provisioning is currently not supported.
+- Creating a server or instance using the Azure portal with Azure AD-only authentication enabled during provisioning is currently not supported.
- To reset the server administrator password, Azure AD-only authentication must be disabled. - If Azure AD-only authentication is disabled, you must create a server with a server admin and password when using all APIs.
azure-sql Authentication Azure Ad Only Authentication Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/authentication-azure-ad-only-authentication-tutorial.md
To enable Azure AD-only authentication in the Azure portal, see the steps b
Managing Azure AD-only authentication for SQL Managed Instance in the portal is currently not supported.
-# [Azure CLI](#tab/azure-cli)
+# [The Azure CLI](#tab/azure-cli)
## Enable in SQL Database using Azure CLI
Check whether Azure AD-only authentication is enabled for your server or instanc
Go to your **SQL server** resource in the [Azure portal](https://portal.azure.com/). Select **Azure Active Directory** under the **Settings** menu. Portal support for Azure AD-only authentication is only available for Azure SQL Database.
-# [Azure CLI](#tab/azure-cli)
+# [The Azure CLI](#tab/azure-cli)
These commands can be used to check whether Azure AD-only authentication is enabled for your SQL Database logical server or SQL managed instance. Members of the [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) and [SQL Managed Instance Contributor](../../role-based-access-control/built-in-roles.md#sql-managed-instance-contributor) roles can use these commands to check the status of Azure AD-only authentication, but can't enable or disable the feature.
By disabling the Azure AD-only authentication feature, you allow both SQL authen
Managing Azure AD-only authentication for SQL Managed Instance in the portal is currently not supported.
-# [Azure CLI](#tab/azure-cli)
+# [The Azure CLI](#tab/azure-cli)
## Disable in SQL Database using Azure CLI
azure-sql Automated Backups Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automated-backups-overview.md
--++ Last updated 03/10/2021
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-copy.md
ms.devlang: --++ Last updated 03/10/2021
azure-sql Database Import Export Azure Services Off https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import-export-azure-services-off.md
ms.devlang: --++ Last updated 01/08/2020
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/database-import.md
ms.devlang: --++ Last updated 10/29/2020
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 12/16/2020
azure-sql Long Term Retention Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/long-term-retention-overview.md
ms.devlang: --++ Last updated 07/13/2021
azure-sql Recovery Using Backups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/recovery-using-backups.md
ms.devlang: --++ Last updated 11/13/2020
azure-sql Copy Database To New Server Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/copy-database-to-new-server-powershell.md
ms.devlang: PowerShell --++ Last updated 03/12/2019
azure-sql Import From Bacpac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/import-from-bacpac-powershell.md
ms.devlang: PowerShell --++ Last updated 05/24/2019
azure-sql Restore Database Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/scripts/restore-database-powershell.md
ms.devlang: PowerShell --++ Last updated 03/27/2019
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
ms.devlang: --++ Last updated 07/13/2021
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
Azure allows you to deploy a virtual machine (VM) with an image of SQL Server bu
| Changes | Details | | | |
-| **Security enhancements in the Azure portal** | Once you've enabled [Azure Defender for SQL](/security-center/defender-for-sql-usage), you can view Security Center recommendations in the [SQL virtual machines resource in the Azure portal](manage-sql-vm-portal.md#security-center). |
+| **Security enhancements in the Azure portal** | Once you've enabled [Azure Defender for SQL](/azure/security-center/defender-for-sql-usage), you can view Security Center recommendations in the [SQL virtual machines resource in the Azure portal](manage-sql-vm-portal.md#security-center). |
## May 2021
Azure allows you to deploy a virtual machine (VM) with an image of SQL Server bu
* [Overview of SQL Server on a Linux VM](../linux/sql-server-on-linux-vm-what-is-iaas-overview.md) * [Provision SQL Server on a Linux virtual machine](../linux/sql-vm-create-portal-quickstart.md) * [FAQ (Linux)](../linux/frequently-asked-questions-faq.yml)
-* [SQL Server on Linux documentation](/sql/linux/sql-server-linux-overview)
+* [SQL Server on Linux documentation](/sql/linux/sql-server-linux-overview)
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The following table details these benefits:
| **View disk utilization in portal** | Allows you to view a graphical representation of the disk utilization of your SQL data files in the Azure portal. <br/> Management mode: Full | | **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full| | **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
-| **Security Center Portal integration** | If you've enabled [Azure Defender for SQL](/security-center/defender-for-sql-usage.md), then you can view Security Center recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
+| **Security Center Portal integration** | If you've enabled [Azure Defender for SQL](/azure/security-center/defender-for-sql-usage), then you can view Security Center recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
## Management modes
To install the SQL Server IaaS extension to SQL Server on Azure VMs, see the art
For more information about running SQL Server on Azure Virtual Machines, see the [What is SQL Server on Azure Virtual Machines?](sql-server-on-azure-vm-iaas-what-is-overview.md).
-To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
+To learn more, see [frequently asked questions](frequently-asked-questions-faq.yml).
azure-vmware Configure Site To Site Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-site-to-site-vpn-gateway.md
A virtual hub is a virtual network that is created and used by Virtual WAN. It's
>[!IMPORTANT] >You must first have a private cloud created before you can patch the platform. + [!INCLUDE [request-authorization-key](includes/request-authorization-key.md)] 1. Link Azure VMware Solution and the VPN gateway together in the Virtual WAN hub. You'll use the authorization key and ExpressRoute ID (peer circuit URI) from the previous step.
azure-vmware Fix Deployment Provisioning Failures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/fix-deployment-provisioning-failures.md
To copy the ExpressRoute ID:
1. In the right pane, select the **ExpressRoute** tab. 1. Select the copy icon for **ExpressRoute ID** and save the value to use in your support request. ## Pre-validation failures
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Data disk size | Individual disk size can be up to 32 TB and a maximum of 256 TB
Storage type | Standard HDD, Standard SSD, Premium SSD. Managed disks | Supported. Encrypted disks | Supported.<br/><br/> Azure VMs enabled with Azure Disk Encryption can be backed up (with or without the Azure AD app).<br/><br/> Encrypted VMs can't be recovered at the file/folder level. You must recover the entire VM.<br/><br/> You can enable encryption on VMs that are already protected by Azure Backup.
-Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> (Quota is exceeded and no further change to the approved list is possible until GA) <br><br> Snapshots don't include WA disk snapshots for unsupported subscriptions as WA disk will be excluded.
+Disks with Write Accelerator enabled | Currently, Azure VM with WA disk backup is previewed in all Azure public regions. <br><br> (Quota is exceeded and no further change to the approved list is possible until GA) <br><br> Snapshots don't include WA disk snapshots for unsupported subscriptions as WA disk will be excluded. <br><br>**Important** <br> Virtual machines with WA disks need internet connectivity for a successful backup (even though those disks are excluded from the backup).
Back up & Restore deduplicated VMs/disks | Azure Backup doesn't support deduplication. For more information, see this [article](./backup-support-matrix.md#disk-deduplication-support) <br/> <br/> - Azure Backup doesn't deduplicate across VMs in the Recovery Services vault <br/> <br/> - If there are VMs in deduplication state during restore, the files can't be restored because the vault doesn't understand the format. However, you can successfully perform the full VM restore. Add disk to protected VM | Supported. Resize disk on protected VM | Supported.
cloud-services-extended-support Available Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/available-sizes.md
To retrieve a list of available sizes see [Resource Skus - List](/rest/api/compu
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Certificates And Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/certificates-and-key-vault.md
Key Vault is used to store certificates that are associated to Cloud Services (e
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Cloud Services Model And Package https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/cloud-services-model-and-package.md
Where the variables are defined as follows:
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
cloud-services-extended-support Configure Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/configure-scaling.md
Consider the following information when configuring scaling of your Cloud Servic
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-portal.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
7. Once all fields have been completed, move to the **Review and Create** tab to validate your deployment configuration and create your Cloud Service (extended support). ## Next steps -- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md). - Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-powershell.md
If you are using a Static IP you need to reference it as a Reserved IP in Servic
``` ## Next steps -- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md). - Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
Deployments that utilized the old diagnostics plugins need the settings removed
``` ## Access Control
-The subsciption containing networking resources needs to have [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) access or above for Cloud Services (extended support). For more details on please refer to [RBAC built in roles](../role-based-access-control/built-in-roles.md)
+The subscription containing networking resources needs to have [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) access or above for Cloud Services (extended support). For more details on please refer to [RBAC built in roles](../role-based-access-control/built-in-roles.md)
## Key Vault creation
Key Vault is used to store certificates that are associated to Cloud Services (e
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
+- Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Deploy Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-sdk.md
Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services
m_NrpClient.VirtualNetworks.CreateOrUpdate(resourceGroupName, "ContosoVNet", vnet); ```
-7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic] (https://docs.microsoft.com/azure/virtual-network/public-ip-addresses#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
+7. Create a public IP address and set the DNS label property of the public IP address. Cloud Services (extended support) only supports [Basic](/azure/virtual-network/public-ip-addresses#basic) SKU Public IP addresses. Standard SKU Public IPs do not work with Cloud Services.
If you are using a Static IP you need to reference it as a Reserved IP in Service Configuration (.cscfg) file ```csharp
If you are using a Static IP you need to reference it as a Reserved IP in Servic
``` ## Next steps-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy Cloud Services (extended support) by using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), a [template](deploy-template.md), or [Visual Studio](deploy-visual-studio.md). - Visit the [Samples repository for Cloud Services (extended support)](https://github.com/Azure-Samples/cloud-services-extended-support)
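The Reserved IP reference in the Service Configuration (.cscfg) file mentioned in this entry can be sketched as follows — a minimal fragment, assuming the element names (`NetworkConfiguration`, `AddressAssignments`, `ReservedIPs`) from the classic service configuration schema, with `myReservedIP` as a placeholder for your reserved IP resource name:

```xml
<!-- Sketch only: place inside the <ServiceConfiguration> root of the .cscfg file.
     "myReservedIP" is a hypothetical name; verify element names against the
     service configuration schema reference before use. -->
<NetworkConfiguration>
  <AddressAssignments>
    <ReservedIPs>
      <ReservedIP name="myReservedIP" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
```

The static public IP created in Azure is then matched to this reserved IP name at deployment time, rather than being assigned dynamically.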
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Next steps -- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md). - Visit the [Cloud Services (extended support) samples repository](https://github.com/Azure-Samples/cloud-services-extended-support)
cloud-services-extended-support Enable Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-alerts.md
This article explains how to enable alerts on existing Cloud Service (extended s
6. When you have finished setting up alerts, save the changes and based on the metrics configured you will begin to see the **Alerts** blade populate over time. ## Next steps -- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Enable Rdp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-rdp.md
Once remote desktop is enabled on the roles, you can initiate a connection direc
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Enable Wad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/enable-wad.md
Here is an example of the private configuration XML file
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/extensions.md
To know more about Azure Antimalware, please visit [here](../security/fundamenta
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).-- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
+- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).
cloud-services-extended-support Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/faq.md
- Title: Frequently asked questions for to Azure Cloud Services (extended support)
-description: Frequently asked questions for to Azure Cloud Services (extended support)
----- Previously updated : 10/13/2020---
-# Frequently asked questions for Azure Cloud Services (extended support)
-This article covers frequently asked questions related to Azure Cloud Services (extended support).
-
-## General
-
-### What is the resource name for Cloud Services (classic) & Cloud Services (extended support)?
-- Cloud Services (classic): `microsoft.classiccompute/domainnames`-- Cloud Services (extended support): `microsoft.compute/cloudservices`-
-### What locations are available to deploy Cloud Services (extended support)?
-Cloud Services (extended support) is available in all public cloud regions.
-
-### How does my quota change?
-Customers will need to request quota using the same processes as any other Azure Resource Manager product. Quota in Azure Resource Manager is regional and a separate quota request will be needed for each region.
-
-### Why don't I see a production & staging slot anymore?
-Cloud Services (extended support) does not support the logical concept of a hosted service, which included two slots (Production & Staging). Each deployment is an independent Cloud Service (extended support) deployment. To test and stage a new release of a cloud service, deploy a cloud service (extended support) and tag it as VIP swappable with another cloud service (extended support)
-
-### Why can't I create an empty Cloud Service anymore?
-The concept of hosted service names does not exist anymore, you cannot create an empty Cloud Service (extended support).
-
-### Does Cloud Services (extended support) support Resource Health Check (RHC)?
-No, Cloud Services (extended support) does not support Resource Health Check (RHC).
-
-### How are role instance metrics changing?
-There are no changes in the role instance metrics.
-
-### How are web & worker roles changing?
-There are no changes to the design, architecture or components of web and worker roles.
-
-### How are role instances changing?
-There are no changes to the design, architecture or components of the role instances.
-
-### How will guest os updates change?
- There are no changes to the rollout method. Cloud Services (classic) and Cloud Services (extended support) will get the same updates.
-
-### Does Cloud Services (extended support) support stopped-allocated and stopped-deallocated states?
-
-Cloud Services (extended support) deployment only supports the Stopped- Allocated state which appears as "stopped" in the Azure portal. Stopped- Deallocated state is not supported.
-
-### Do Cloud Services (extended support) deployments support scaling across clusters, availability zones, and regions?
-Cloud Services (extended support) deployments cannot scale across multiple clusters, availability zones and regions.
-
-### How can I get the deployment ID for my Cloud Service (extended support)
-Deployment ID aka Private ID can be accessed using the [CloudServiceInstanceView](/rest/api/compute/cloudservices/getinstanceview#cloudserviceinstanceview) API. It is also available on the Azure portal under the Role and Instances blade of the Cloud Service (extended support)
-
-### Are there any pricing differences between Cloud Services (classic) and Cloud Services (extended support)?
-Cloud Services (extended support) uses Azure Key Vault and Basic (ARM) Public IP addresses. Customers requiring certificates need to use Azure Key Vault for certificate management ([learn more](https://azure.microsoft.com/pricing/details/key-vault/) about Azure Key Vault pricing.) Each Public IP address for Cloud Services (extended support) is charged separately ([learn more](https://azure.microsoft.com/pricing/details/ip-addresses/) about Public IP Address pricing.)
-## Resources
-
-### What resources linked to a Cloud Services (extended support) deployment need to live in the same resource group?
-Load balancers, network security groups and route tables need to live in the same region and resource group.
-
-### What resources linked to a Cloud Services (extended support) deployment need to live in the same region?
-Key Vault, virtual network, public IP addresses, network security groups and route tables need to live in the same region.
-
-### What resources linked to a Cloud Services (extended support) deployment need to live in the same virtual network?
-Public IP addresses, load balancers, network security groups and route tables need to live in the same virtual network.
-
-## Deployment files
-
-### How can I use a template to deploy or manage my deployment?
-Template and parameter files can be passed as a parameter using REST, PowerShell and CLI. They can also be uploaded using the Azure portal.
-
-### Do I need to maintain four files now? (template, parameter, csdef, cscfg)
-Template and parameter files are only used for deployment automation. Like Cloud Services (classic), you can manually create dependent resources first and then a Cloud Services (extended support) deployment using PowerShell, CLI commands or through Portal with existing csdef, cscfg.
-
-### How does my application code change on Cloud Services (extended support)
-There are no changes required for your application code packaged in cspkg. Your existing applications will continue to work as before.
-
-### Does Cloud Services (extended support) allow CTP package format?
-CTP package format is not supported in Cloud Services (extended support). However, it allows an enhanced package size limit of 800 MB
-
-## Migration
-
-### Will Cloud Services (extended support) mitigate the failures due to allocation failures?
-No, Cloud Service (extended support) deployments are tied to a cluster like Cloud Services (classic). Therefore, allocation failures will continue to exist if the cluster is full.
-
-### When do I need to migrate?
-Estimating the time required and complexity of migration depends on a range of variables. Planning is the most effective step to understand the scope of work, blockers and complexity of migration.
-
-## Networking
-
-### Why can't I create a deployment without virtual network?
-Virtual networks are a required resource for any deployment on Azure Resource Manager. Cloud Services (extended support) deployment must live inside a virtual network.
-
-### Why am I now seeing so many networking resources?
-In Azure Resource Manager, components of your Cloud Services (extended support) deployment are exposed as a resource for better visibility and improved control. The same type of resources were used in Cloud Services (classic) however they were just hidden. One example of such a resource is the Public Load Balancer, which is now an explicit 'read only' resource automatically created by the platform
-
-### What restrictions apply for a subnet with respective to Cloud Services (extended support)?
-A subnet containing Cloud Services (extended support) deployments cannot be shared with deployments from other compute products such as Virtual Machines, Virtual Machines Scale Sets, Service Fabric, etc.
-
-### What IP allocation methods are supported on Cloud services (extended support)?
-Cloud Services (extended support) supports dynamic & static IP allocation methods. Static IP addresses are referenced as reserved IPs in the cscfg file.
-
-### Why am I getting charged for IP addresses?
-Customers are billed for IP Address use on Cloud Services (extended support) just as users are billed for IP addresses associated with virtual machines.
-
-### Can the reserved IP be updated after a successful deployment?
-A reserved IP cannot be added, removed or changed during deployment update or upgrade. If the IP addresses needs to be changed, please use a swappable Cloud Service or deploy two Cloud Services with a CName in Azure DNS\Traffic Manager so that the IP can be pointed to either of them.
-
-### Can I use a DNS name with Cloud Services (extended support)?
-Yes. Cloud Services (extended support) can also be given a DNS name. With Azure Resource Manager, the DNS label is an optional property of the public IP address that is assigned to the Cloud Service. The format of the DNS name for Azure Resource Manager based deployments is `<userlabel>.<region>.cloudapp.azure.com`
-
-### Can I update or change the virtual network reference for an existing cloud service (extended support)?
-No. Virtual network reference is mandatory during the creation of a cloud service. For an existing cloud service, the virtual network reference cannot be changed. The virtual network address space itself can be modified using VNet APIs.
-
-## Certificates & Key Vault
-
-### Why do I need to manage my certificates on Cloud Services (extended support)?
-Cloud Services (extended support) has adopted the same process as other compute offerings where certificates reside within customer managed Key Vaults. This enables customers to have complete control over their secrets & certificates.
-
-### Can I use one Key Vault for all my deployments in all regions?
-No. Key Vault is a regional resource and customers need one Key Vault in each region. However, one Key Vault can be used for all deployments within a given region.
-
-### When specifying secrets/certificates to be installed to a Cloud Service, must the KeyVault resource be in the same Azure subscription as the Cloud Service resource?
-Yes. We do not allow cross subscription key vault references in Cloud Services to guard against escalation of privilege attacks through CS-ES. The subscription is not a boundary that CS-ES will cross for references to secrets. The reason we do not allow cross subscription references is as an important final step to prevent malicious users from using CS-ES as a privilege escalation mechanism to access other users secrets. Subscription isn't a security boundary, but defense in depth is a requirement. However, you can use the Key Vault extension to get cross subscription and cross region support for your certificates. Please refer to the documentation [here](./enable-key-vault-virtual-machine.md)
-
-### When specifying secrets/certificates to be installed to a Cloud Service, must the KeyVault resource be in the same region as the Cloud Service resource?
-Yes. The reason that we enforce region boundaries is to prevent users from creating architectures that have cross region dependencies. Regional isolation is a key design principle of cloud based applications. However, you can use the Key Vault extension to get cross subscription and cross region support for your certificates. Please refer to the documentation [here](./enable-key-vault-virtual-machine.md)
-
-## Next steps
-To start using Cloud Services (extended support), see [Deploy a Cloud Service (extended support) using PowerShell](deploy-powershell.md)
cloud-services-extended-support Generate Template Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/generate-template-portal.md
# Generate ARM Template for Cloud Services (extended support) using the Azure portal
-This article explains how to download the ARM template and parameter file from the [Azure portal](https://portal.azure.com) for your Cloud Service. The ARM template and parameter file can be used in deployments via Powershell to create or update a cloud service
+This article explains how to download the ARM template and parameter file from the [Azure portal](https://portal.azure.com) for your Cloud Service. The ARM template and parameter file can be used in deployments via PowerShell to create or update a cloud service
## Get ARM template via portal
This article explains how to download the ARM template and parameter file from t
:::image type="content" source="media/download-template-portal-2.png" alt-text="Image shows the package SAS URI and configuration SAS URI parameters on the Azure portal."::: ## Next steps -- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
- Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md)
cloud-services-extended-support Override Sku https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/override-sku.md
The Azure portal doesn't allow you to use the **allowModelOverride** property to
## Next steps - View the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support).-- View [frequently asked questions](faq.md) for Cloud Services (extended support).
+- View [frequently asked questions](faq.yml) for Cloud Services (extended support).
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
With this change, the Azure Service Manager based deployment model for Cloud Ser
## What does not change - You create the code, define the configurations, and deploy it to Azure. Azure sets up the compute environment, runs your code then monitors and maintains it for you.-- Cloud Services (extended support) also supports two types of roles, [web and worker](../cloud-services/cloud-services-choose-me.md). There are no changes to the design, architecture or components of web and worker roles. -- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there is no change in the their [formats](cloud-services-model-and-package.md).
+- Cloud Services (extended support) also supports two types of roles, [web and worker](../cloud-services/cloud-services-choose-me.md). There are no changes to the design, architecture, or components of web and worker roles.
+- The three components of a cloud service, the service definition (.csdef), the service config (.cscfg), and the service package (.cspkg) are carried forward and there is no change in the [formats](cloud-services-model-and-package.md).
- No changes are required to runtime code as data plane is the same and control plane is only changing. - Azure GuestOS releases and associated updates are aligned with Cloud Services (classic) - Underlying update process with respect to update domains, how upgrade proceeds, rollback and allowed service changes during an update don't change
Minimal changes are required to Service Configuration (.cscfg) and Service Defin
The major differences between Cloud Services (classic) and Cloud Services (extended support) with respect to deployment are: -- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md) which is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. Service Configuration and Service definition file needs to be consistent with the [ARM Template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). This can be achieved either by [manually creating the ARM template](deploy-template.md) or using [PowerShell](deploy-powershell.md), [Portal](deploy-portal.md) and [Visual Studio](deploy-visual-studio.md).
+- Azure Resource Manager deployments use [ARM templates](../azure-resource-manager/templates/overview.md), JavaScript Object Notation (JSON) files that define the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. The service configuration and service definition files need to be consistent with the [ARM template](../azure-resource-manager/templates/overview.md) while deploying Cloud Services (extended support). You can achieve this either by [manually creating the ARM template](deploy-template.md) or by using [PowerShell](deploy-powershell.md), the [Portal](deploy-portal.md), or [Visual Studio](deploy-visual-studio.md).
- Customers must use [Azure Key Vault](../key-vault/general/overview.md) to [manage certificates in Cloud Services (extended support)](certificates-and-key-vault.md). Azure Key Vault lets you securely store and manage application credentials such as secrets, keys and certificates in a central and secure cloud repository. Your applications can authenticate to Key Vault at run time to retrieve credentials.
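As a rough illustration of the declarative model described above, a Cloud Services (extended support) resource in an ARM template follows the standard template shape. This is only a skeleton: the `apiVersion`, resource name, and the empty `properties` object below are placeholders, and the full property schema (package URL, configuration, role and network profiles) is covered in the linked template article.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/cloudServices",
      "apiVersion": "2021-03-01",
      "name": "myCloudService",
      "location": "[resourceGroup().location]",
      "properties": {}
    }
  ]
}
```

The service package (.cspkg) and service configuration (.cscfg) are referenced from `properties`; see [manually creating the ARM template](deploy-template.md) for the complete list.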
Depending on the application, Cloud Services (extended support) may require subs
## Next steps - Review the [deployment prerequisites](deploy-prerequisite.md) for Cloud Services (extended support). - Deploy a Cloud Service (extended support) using the [Azure portal](deploy-portal.md), [PowerShell](deploy-powershell.md), [Template](deploy-template.md) or [Visual Studio](deploy-visual-studio.md).-- Review [frequently asked questions](faq.md) for Cloud Services (extended support).
+- Review [frequently asked questions](faq.yml) for Cloud Services (extended support).
cloud-services-extended-support Support Help https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/support-help.md
Here are suggestions for where you can get help when developing your Azure Cloud
<img alt='Self help content' src='./media/logos/doc-logo.png'> </div>
-For common issues and and workarounds, see [Troubleshoot Azure Cloud Services (extended support) role start failures](role-startup-failure.md) and [Frequently asked questions](faq.md)
+For common issues and workarounds, see [Troubleshoot Azure Cloud Services (extended support) role start failures](role-startup-failure.md) and [Frequently asked questions](faq.yml).
cloud-services-extended-support Swap Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/swap-cloud-service.md
After you swap the deployments, you can stage and test your new release by using
> [!NOTE] > You can't swap between an Azure Cloud Services (classic) deployment and an Azure Cloud Services (extended support) deployment.
-You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. Once the second pair of cloud service is deployed, it canot be made swappable with an existing cloud service in subsquent updates.
+You must make a cloud service swappable with another cloud service when you deploy the second of a pair of cloud services for the first time. After the second cloud service of the pair is deployed, it cannot be made swappable with an existing cloud service in subsequent updates.
You can swap the deployments by using an Azure Resource Manager template (ARM template), the Azure portal, or the REST API.
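For the REST API option, management calls are issued against the standard Azure Resource Manager URL pattern. The following Python sketch only builds that generic URL; the subscription, resource group, and resource names are hypothetical, and you should consult the REST API reference for the exact swap operation and supported API version.

```python
def arm_resource_url(subscription: str, resource_group: str,
                     provider: str, resource_type: str, name: str,
                     api_version: str) -> str:
    """Build the generic Azure Resource Manager resource URL that REST
    management calls (such as a deployment swap) are issued against."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/{provider}/{resource_type}/{name}"
        f"?api-version={api_version}"
    )

# Example with placeholder values: URL for a hypothetical cloud service resource.
url = arm_resource_url("00000000-0000-0000-0000-000000000000", "my-rg",
                       "Microsoft.Compute", "cloudServices", "my-cloud-service",
                       "2021-03-01")
print(url)
```

The same pattern underlies the template and portal options; they construct these calls for you.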
A cloud service swap usually is fast because it's only a configuration change in
## Next steps * Review [deployment prerequisites](deploy-prerequisite.md) for Azure Cloud Services (extended support).
-* Review [frequently asked questions](faq.md) for Azure Cloud Services (extended support).
+* Review [frequently asked questions](faq.yml) for Azure Cloud Services (extended support).
* Deploy an Azure Cloud Services (extended support) cloud service by using one of these options: * [Azure portal](deploy-portal.md) * [PowerShell](deploy-powershell.md)
cognitive-services Speech Service Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-service-vnet-service-endpoint.md
Title: How to use VNet service endpoints with Speech service
+ Title: Use Virtual Network service endpoints with Speech service
-description: Learn how to use Speech service with Virtual Network service endpoints
+description: This article describes how to use Speech service with an Azure Virtual Network service endpoint.
# Use Speech service through a Virtual Network service endpoint
-[Virtual Network](../../virtual-network/virtual-networks-overview.md) (VNet) [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) provides secure and direct connectivity to Azure services over an optimized route over the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service Endpoints enables private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
+[Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) [service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) help to provide secure and direct connectivity to Azure services over an optimized route on the Azure backbone network. Endpoints help you secure your critical Azure service resources to only your virtual networks. Service endpoints enable private IP addresses in the virtual network to reach the endpoint of an Azure service without needing a public IP address on the virtual network.
-This article explains how to set up and use VNet service endpoints with Speech service in Azure Cognitive Services.
+This article explains how to set up and use Virtual Network service endpoints with Speech service in Azure Cognitive Services.
> [!NOTE]
-> Before you proceed, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
+> Before you start, review [how to use virtual networks with Cognitive Services](../cognitive-services-virtual-networks.md).
-This article also describes [how to remove VNet service endpoints later, but still use the Speech resource](#use-a-speech-resource-with-a-custom-domain-name-and-without-allowed-vnets).
+This article also describes [how to remove Virtual Network service endpoints later but still use the Speech resource](#use-a-speech-resource-that-has-a-custom-domain-name-but-that-doesnt-have-allowed-virtual-networks).
-Setting up a Speech resource for the VNet service endpoint scenarios requires performing the following tasks:
-1. [Create Speech resource custom domain name](#create-a-custom-domain-name)
-1. [Configure VNet(s) and the Speech resource networking settings](#configure-vnets-and-the-speech-resource-networking-settings)
-1. [Adjust existing applications and solutions](#adjust-existing-applications-and-solutions)
+To set up a Speech resource for Virtual Network service endpoint scenarios, you need to:
+1. [Create a custom domain name for the Speech resource](#create-a-custom-domain-name).
+1. [Configure virtual networks and networking settings for the Speech resource](#configure-virtual-networks-and-the-speech-resource-networking-settings).
+1. [Adjust existing applications and solutions](#adjust-existing-applications-and-solutions).
> [!NOTE]
-> Setting up and using VNet service endpoints for Speech service is very similar to setting up and using the private endpoints. In this article we reference the correspondent sections of the [article on using private endpoints](speech-services-private-link.md), when the content is equivalent.
+> Setting up and using Virtual Network service endpoints for Speech service is similar to setting up and using private endpoints. In this article, we refer to the corresponding sections of the [article on using private endpoints](speech-services-private-link.md) when the procedures are the same.
[!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints.md)]
-This article describes the usage of the VNet service endpoints with Speech service. Usage of the private endpoints is described [here](speech-services-private-link.md).
+This article describes how to use Virtual Network service endpoints with Speech service. For information about private endpoints, see [Use Speech service through a private endpoint](speech-services-private-link.md).
## Create a custom domain name
-VNet service endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Create a custom domain referring to [this section](speech-services-private-link.md#create-a-custom-domain-name) of the private endpoint article. Note, that all warnings in the section are also applicable to the VNet service endpoint scenario.
+Virtual Network service endpoints require a [custom subdomain name for Cognitive Services](../cognitive-services-custom-subdomains.md). Create a custom domain by following the [guidance](speech-services-private-link.md#create-a-custom-domain-name) in the private endpoint article. All warnings in the section also apply to Virtual Network service endpoints.
-## Configure VNet(s) and the Speech resource networking settings
+## Configure virtual networks and the Speech resource networking settings
-You need to add all Virtual networks that are allowed access via the service endpoint to the Speech resource networking properties.
+You need to add all virtual networks that are allowed access via the service endpoint to the Speech resource networking properties.
> [!NOTE]
-> To access a Speech resource via the VNet service endpoint you need to enable `Microsoft.CognitiveServices` service endpoint type for the required subnet(s) of your VNet. This in effect will route **all** subnet Cognitive Services related traffic via the private backbone network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are configured to allow your VNet. See next Note for the details.
-
-> [!NOTE]
-> If a VNet is not added as allowed to the Speech resource networking properties, it will **not** have access to this Speech resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the VNet. Moreover, if the service endpoint is enabled, but the VNet is not allowed, the Speech resource will be unaccessible for this VNet through a public IP address as well, no matter what the Speech resource other network security settings are. The reason is that enabling `Microsoft.CognitiveServices` endpoint routes **all** Cognitive Services related traffic through the private backbone network, and in this case the VNet should be explicitly allowed to access the resource. This is true not only for Speech but for all other Cognitive Services resources (see the previous Note).
+> To access a Speech resource via the Virtual Network service endpoint, you need to enable the `Microsoft.CognitiveServices` service endpoint type for the required subnets of your virtual network. Doing so will route all subnet traffic related to Cognitive Services through the private backbone network. If you intend to access any other Cognitive Services resources from the same subnet, make sure these resources are configured to allow your virtual network.
+>
+> If a virtual network isn't added as *allowed* in the Speech resource networking properties, it won't have access to the Speech resource via the service endpoint, even if the `Microsoft.CognitiveServices` service endpoint is enabled for the virtual network. And if the service endpoint is enabled but the virtual network isn't allowed, the Speech resource won't be accessible for the virtual network through a public IP address, no matter what the Speech resource's other network security settings are. That's because enabling the `Microsoft.CognitiveServices` endpoint routes all traffic related to Cognitive Services through the private backbone network, and in this case the virtual network should be explicitly allowed to access the resource. This guidance applies for all Cognitive Services resources, not just for Speech resources.
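The interaction between the two settings in the note above can be summarized as a small truth table. This hypothetical helper just restates those rules in code as a mnemonic; it is not an Azure API.

```python
def speech_resource_access(endpoint_enabled: bool, vnet_allowed: bool) -> str:
    """Restate the note above: once the Microsoft.CognitiveServices service
    endpoint is enabled on a subnet, all Cognitive Services traffic from that
    subnet uses the private backbone, so the resource must explicitly allow
    the virtual network."""
    if endpoint_enabled and vnet_allowed:
        return "reachable via the service endpoint"
    if endpoint_enabled and not vnet_allowed:
        # Public IP access is also blocked: traffic is forced onto the
        # backbone, but the resource doesn't allow this virtual network.
        return "blocked"
    # Without the service endpoint, access depends on the resource's other
    # network security settings (firewall rules, allowed networks).
    return "depends on other network settings"

for enabled in (True, False):
    for allowed in (True, False):
        print(enabled, allowed, "->", speech_resource_access(enabled, allowed))
```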
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the required Speech resource.
-1. In the **Resource Management** group on the left pane, select **Networking**.
+1. Select the Speech resource.
+1. In the **Resource Management** group in the left pane, select **Networking**.
1. On the **Firewalls and virtual networks** tab, select **Selected Networks and Private Endpoints**.
-> [!NOTE]
-> To use VNet service endpoints you need to select **Selected Networks and Private Endpoints** network security option. No other options are supported. If your scenario requires **All networks** option, consider using the [private endpoints](speech-services-private-link.md), which support all three network security options.
+ > [!NOTE]
+ > To use Virtual Network service endpoints, you need to select the **Selected Networks and Private Endpoints** network security option. No other options are supported. If your scenario requires the **All networks** option, consider using [private endpoints](speech-services-private-link.md), which support all three network security options.
-5. Select **Add existing virtual network** or **Add new virtual network**, fill in the required parameters, and select **Add** for the existing or **Create** for the new virtual network. Note, that if you add an existing virtual network then the `Microsoft.CognitiveServices` service endpoint will be automatically enabled for the selected subnet(s). This operation can take up to 15 minutes. Also do not forget to consider the Notes in the beginning of this section.
+5. Select **Add existing virtual network** or **Add new virtual network** and provide the required parameters. Select **Add** for an existing virtual network or **Create** for a new one. If you add an existing virtual network, the `Microsoft.CognitiveServices` service endpoint will automatically be enabled for the selected subnets. This operation can take up to 15 minutes. Also, see the note at the beginning of this section.
-### Enabling service endpoint for an existing VNet
+### Enabling service endpoint for an existing virtual network
-As described in the previous section when you add a VNet as allowed for the speech resource the `Microsoft.CognitiveServices` service endpoint is automatically enabled. However, if later you disable it for whatever reason, you need to re-enable it manually to restore the service endpoint access to the Speech resource (as well as other Cognitive Services resources):
+As described in the previous section, when you configure a virtual network as *allowed* for the Speech resource, the `Microsoft.CognitiveServices` service endpoint is automatically enabled. If you later disable it, you need to re-enable it manually to restore the service endpoint access to the Speech resource (and to other Cognitive Services resources):
1. Go to the [Azure portal](https://portal.azure.com/) and sign in to your Azure account.
-1. Select the required VNet.
-1. In the **Settings** group on the left pane, select **Subnets**.
+1. Select the virtual network.
+1. In the **Settings** group in the left pane, select **Subnets**.
1. Select the required subnet.
-1. A new right panel appears. In this panel in the **Service Endpoints** section select `Microsoft.CognitiveServices` from the **Services** drop-down list.
+1. A new panel appears on the right side of the window. In this panel, in the **Service Endpoints** section, select `Microsoft.CognitiveServices` in the **Services** list.
1. Select **Save**. ## Adjust existing applications and solutions
-A Speech resource with a custom domain enabled uses a different way to interact with Speech Services. This is true for a custom-domain-enabled Speech resource both with and without service endpoints configured. Information in this section applies to both scenarios.
+A Speech resource that has a custom domain enabled interacts with the Speech service in a different way. This is true for a custom-domain-enabled Speech resource regardless of whether service endpoints are configured. Information in this section applies to both scenarios.
-### Use a Speech resource with a custom domain name and allowed VNet(s) configured
+### Use a Speech resource that has a custom domain name and allowed virtual networks
-This is the case when **Selected Networks and Private Endpoints** option is selected in networking settings of the Speech resource **AND** at least one VNet is allowed. The usage is equivalent to [using a Speech resource with a custom domain name and a private endpoint enabled](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint).
+In this scenario, the **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource and at least one virtual network is allowed. This scenario is equivalent to [using a Speech resource that has a custom domain name and a private endpoint enabled](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-with-a-private-endpoint).
-### Use a Speech resource with a custom domain name and without allowed VNet(s)
+### Use a Speech resource that has a custom domain name but that doesn't have allowed virtual networks
-This is the case when private endpoints are **not** enabled, and any of the following is true:
+In this scenario, private endpoints aren't enabled and one of these statements is true:
-- **Selected Networks and Private Endpoints** option is selected in networking settings of the Speech resource, but **no** allowed VNet(s) are configured-- **All networks** option is selected in networking settings of the Speech resource
+- The **Selected Networks and Private Endpoints** option is selected in the networking settings of the Speech resource, but no allowed virtual networks are configured.
+- The **All networks** option is selected in the networking settings of the Speech resource.
-The usage is equivalent to [using a Speech resource with a custom domain name and without private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
+This scenario is equivalent to [using a Speech resource that has a custom domain name and that doesn't have private endpoints](speech-services-private-link.md#adjust-an-application-to-use-a-speech-resource-without-private-endpoints).
[!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)]
The usage is equivalent to [using a Speech resource with a custom domain name an
## Learn more * [Use Speech service through a private endpoint](speech-services-private-link.md)
-* [Azure VNet service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md)
+* [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)
* [Azure Private Link](../../private-link/private-link-overview.md) * [Speech SDK](speech-sdk.md) * [Speech-to-text REST API](rest-speech-to-text.md)
-* [Text-to-speech REST API](rest-text-to-speech.md)
+* [Text-to-speech REST API](rest-text-to-speech.md)
cognitive-services Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/big-data/getting-started.md
In this article, we'll perform these steps to get you started:
## Create a Cognitive Services resource
-To use the Big Data Cognitive Services, we must first create a Cognitive Service for our workflow. There are two main types of Cognitive
+To use the Big Data Cognitive Services, you must first create a Cognitive Service for your workflow. There are two main types of Cognitive
### Cloud services
-Cloud-based Cognitive Services is intelligent algorithms hosted in Azure. These services are ready for use without training, you just need an internet connection. You can [create a Cognitive Service in the Azure portal](../cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows) or with the [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows).
+Cloud-based Cognitive Services are intelligent algorithms hosted in Azure. These services are ready for use without training; you just need an internet connection. You can [create a Cognitive Service in the Azure portal](../cognitive-services-apis-create-account.md?tabs=multiservice%2Cwindows) or with the [Azure CLI](../cognitive-services-apis-create-account-cli.md?tabs=windows).
### Containerized services (optional)
Follow [this guide](../cognitive-services-container-support.md?tabs=luis) to cre
## Create an Apache Spark cluster
-[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big-data data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Services. To use the Big Data Cognitive Services, we must first create a cluster. If you already have a Spark cluster, feel free to try an example.
+[Apache Spark&trade;](http://spark.apache.org/) is a distributed computing framework designed for big data processing. Users can work with Apache Spark in Azure with services like Azure Databricks, Azure Synapse Analytics, HDInsight, and Azure Kubernetes Service. To use the Big Data Cognitive Services, you must first create a cluster. If you already have a Spark cluster, feel free to try an example.
### Azure Databricks
To get started on Azure Kubernetes Service, follow these steps:
## Try a sample
-After you set up your Spark cluster and environment, we can run a short sample. This section demonstrates how to use the Big Data for Cognitive Services in Azure Databricks.
+After you set up your Spark cluster and environment, you can run a short sample. This section demonstrates how to use the Cognitive Services for Big Data in Azure Databricks.
-First, we can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
+First, you can create a notebook in Azure Databricks. For other Spark cluster providers, use their notebooks or Spark Submit.
-1. Create a new Databricks notebook, by choosing **New notebook** from the **Azure Databricks** menu.
+1. Create a new Databricks notebook by choosing **New Notebook** from the **Azure Databricks** menu.
<img src="media/new-notebook.png" alt="Create a new notebook" width="50%"/>
display(results.select("text", col("sentiment")[0].getItem("score").alias("senti
- [Short Python Examples](samples-python.md) - [Short Scala Examples](samples-scala.md) - [Recipe: Predictive Maintenance](recipes/anomaly-detection.md)-- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md)
+- [Recipe: Intelligent Art Exploration](recipes/art-explorer.md)
cognitive-services Cognitive Services Development Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/cognitive-services-development-options.md
Cognitive Services are organized into four categories: Decision, Language, Speec
* Cognitive Services Docker containers for secure access. * Tools like Apache Spark, Azure Databricks, Azure Synapse Analytics, and Azure Kubernetes Service for Big Data scenarios.
-Before we jump in, it's important to know that the Cognitive Services is primarily used for two distinct tasks. Based on the task you want to perform, you have different development and deployment options to choose from.
+Before we jump in, it's important to know that the Cognitive Services are primarily used for two distinct tasks. Based on the task you want to perform, you have different development and deployment options to choose from.
* [Development options for prediction and analysis](#development-options-for-prediction-and-analysis) * [Tools to customize and configure models](#tools-to-customize-and-configure-models) ## Development options for prediction and analysis
-The tools that you will use to customize and configure models are different than those that you'll use to call the Cognitive Services. Out of the box, most Cognitive Services allow you to send data and receive insights without any customization. For example:
+The tools that you will use to customize and configure models are different from those that you'll use to call the Cognitive Services. Out of the box, most Cognitive Services allow you to send data and receive insights without any customization. For example:
* You can send an image to the Computer Vision service to detect words and phrases or count the number of people in the frame * You can send an audio file to the Speech service and get transcriptions and translate the speech to text at the same time
Cognitive Services client libraries and REST APIs provide you direct access to y
* **UI**: N/A - Code only * **Subscription(s)**: Azure account + Cognitive Services resources
-If you want to learn more about available client libraries and REST APIs, use our [Cognitive Services overview](index.yml) to pick and service and get started with one of our quickstarts for vision, decision, language, and speech.
+If you want to learn more about available client libraries and REST APIs, use our [Cognitive Services overview](index.yml) to pick a service and get started with one of our quickstarts for vision, decision, language, and speech.
### Cognitive Services for Big Data
-With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Cognitive Services for Big Data supports the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
+With Cognitive Services for Big Data you can embed continuously improving, intelligent models directly into Apache Spark&trade; and SQL computations. These tools liberate developers from low-level networking details, so that they can focus on creating smart, distributed applications. Cognitive Services for Big Data support the following platforms and connectors: Azure Databricks, Azure Synapse, Azure Kubernetes Service, and Data Connectors.
* **Target user(s)**: Data scientists and data engineers
-* **Benefits**: The Azure Cognitive Services for Big Data lets users channel terabytes of data through Cognitive Services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
+* **Benefits**: The Azure Cognitive Services for Big Data let users channel terabytes of data through Cognitive Services using Apache Spark&trade;. It's easy to create large-scale intelligent applications with any datastore.
* **UI**: N/A - Code only * **Subscription(s)**: Azure account + Cognitive Services resources
If you want to learn more about Big Data for Cognitive Services, a good place to
### Azure Logic Apps
-[Azure Logic Apps](../logic-apps/index.yml) share the same workflow designer and connectors as Power Automate but provides more advanced and control including integrations with Visual Studio and DevOps. Power Automate makes it easy to integrate with your cognitive services resources through service-specific connectors that provide a proxy or wrapper around the APIs. These are the same connectors as those available in Power Automate.
+[Azure Logic Apps](../logic-apps/index.yml) share the same workflow designer and connectors as Power Automate but provide more advanced control, including integrations with Visual Studio and DevOps. Power Automate makes it easy to integrate with your Cognitive Services resources through service-specific connectors that provide a proxy or wrapper around the APIs. These are the same connectors as those available in Power Automate.
* **Target user(s)**: Developers, integrators, IT pros, DevOps * **Benefits**: Designer-first (declarative) development model providing advanced options and integration in a low-code solution
If you want to learn more about Big Data for Cognitive Services, a good place to
### Power Automate
-Power automate is a service in the [Power Platform](/power-platform/) that helps you create automated workflows between apps and services without writing code. We offer several connectors to make it easy to interact with your Cognitive Services resource in a Power Automate solution. Power Automate is built on top of Logic Apps.
+Power Automate is a service in the [Power Platform](/power-platform/) that helps you create automated workflows between apps and services without writing code. We offer several connectors to make it easy to interact with your Cognitive Services resource in a Power Automate solution. Power Automate is built on top of Logic Apps.
* **Target user(s)**: Business users (analysts) and SharePoint administrators * **Benefits**: Automate repetitive manual tasks simply by recording mouse clicks, keystrokes, and copy-paste steps from your desktop!
Power automate is a service in the [Power Platform](/power-platform/) that helps
### AI Builder
-[AI Builder](/ai-builder/overview) is a Microsoft Power Platform capability you can use to improve business performance by automating processes and predicting outcomes. AI builder brings the power of AI to your solutions through a point-and-click experience. Many cognitive services such as Form Recognizer, Text Analytics, and Computer Vision have been directly integrated here and you don't need to create your own Cognitive Services.
+[AI Builder](/ai-builder/overview) is a Microsoft Power Platform capability you can use to improve business performance by automating processes and predicting outcomes. AI Builder brings the power of AI to your solutions through a point-and-click experience. Many cognitive services such as Form Recognizer, Text Analytics, and Computer Vision have been directly integrated here and you don't need to create your own Cognitive Services.
* **Target user(s)**: Business users (analysts) and SharePoint administrators * **Benefits**: A turnkey solution that brings the power of AI through a point-and-click experience. No coding or data science skills required.
Power automate is a service in the [Power Platform](/power-platform/) that helps
### Continuous integration and deployment
-You can use Azure DevOps and GitHub actions to manage your deployments. In the [section below](#continuous-integration-and-delivery-with-devops-and-github-actions) that discusses, we have two examples of CI/CD integrations to train and deploy custom models for Speech and the Language Understanding (LUIS) service.
+You can use Azure DevOps and GitHub actions to manage your deployments. In the [section below](#continuous-integration-and-delivery-with-devops-and-github-actions), we have two examples of CI/CD integrations to train and deploy custom models for Speech and the Language Understanding (LUIS) service.
* **Target user(s)**: Developers, data scientists, and data engineers
* **Benefits**: Allows you to continuously adjust, update, and deploy applications and models programmatically. There is significant benefit when regularly using your data to improve and update models for Speech, Vision, Language, and Decision.
As you progress on your journey building an application or workflow with the Cognitive Services, you may find that you need to customize the model to achieve the desired performance. Many of our services allow you to build on top of the pre-built models to meet your specific business needs. For all our customizable services, we provide both a UI-driven experience for walking through the process as well as APIs for code-driven training. For example:
-* You want to train a Custom Speech model to correctly recognize medical terms with a word error rate (WER) below 3%
+* You want to train a Custom Speech model to correctly recognize medical terms with a word error rate (WER) below 3 percent
* You want to build an image classifier with Custom Vision that can tell the difference between coniferous and deciduous trees
* You want to build a custom neural voice with your personal voice data for an improved automated customer experience
-The tools that you will use to train and configure models are different than those that you'll use to call the Cognitive Services. In many cases, Cognitive Services that support customization provide portals and UI tools designed to help you train, evaluate, and deploy models. Let's quickly take a look at a few options:<br><br>
+The tools that you will use to train and configure models are different from those that you'll use to call the Cognitive Services. In many cases, Cognitive Services that support customization provide portals and UI tools designed to help you train, evaluate, and deploy models. Let's quickly take a look at a few options:<br><br>
| Pillar | Service | Customization UI | Quickstart |
|--|--|--|--|
Language Understanding and the Speech service offer continuous integration and c
* [CI/CD for Custom Speech](./speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md)
* [CI/CD for LUIS](./luis/luis-concept-devops-automation.md)
-## On-prem containers
+## On-premises containers
-Many of the Cognitive Services can be deployed in containers for on-prem access and use. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security or other operational reasons. For a complete list of Cognitive Services containers, see [On-prem containers for Cognitive Services](./cognitive-services-container-support.md).
+Many of the Cognitive Services can be deployed in containers for on-premises access and use. Using these containers gives you the flexibility to bring Cognitive Services closer to your data for compliance, security, or other operational reasons. For a complete list of Cognitive Services containers, see [On-premises containers for Cognitive Services](./cognitive-services-container-support.md).
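For a sense of what running a container locally involves, here is a minimal sketch. The image path is illustrative (the sentiment analysis container); substitute your own resource's billing endpoint and key, and check each container's documentation for its memory and CPU requirements:

```bash
# Pull and run a Cognitive Services container locally.
# {ENDPOINT_URI} and {API_KEY} come from your Cognitive Services resource in Azure;
# Eula=accept is required for the container to start.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY}
```

Once running, the container exposes the same REST API on `http://localhost:5000`, so clients can call it without sending data to the cloud.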
## Next steps <!--
cognitive-services Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/diagnostic-logging.md
Previously updated : 06/14/2019
Last updated : 07/19/2021

# Enable diagnostic logging for Azure Cognitive Services
-This guide provides step-by-step instructions to enable diagnostic logging for an Azure Cognitive Service. These logs provide rich, frequent data about the operation of a resource that are used for issue identification and debugging. Before you continue, you must have an Azure account with a subscription to at least one Cognitive Service, such as [Bing Web Search](./bing-web-search/overview.md), [Speech Services](./speech-service/overview.md), or [LUIS](./luis/what-is-luis.md).
+This guide provides step-by-step instructions to enable diagnostic logging for an Azure Cognitive Service. These logs provide rich, frequent data about the operation of a resource that you can use for issue identification and debugging. Before you continue, you must have an Azure account with a subscription to at least one Cognitive Service, such as [Speech Services](./speech-service/overview.md) or [LUIS](./luis/what-is-luis.md).
## Prerequisites
To enable diagnostic logging, you'll need somewhere to store your log data. This
* [Log Analytics](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace) - A flexible log search and analytics tool that allows for analysis of raw logs generated by an Azure resource.

> [!NOTE]
-> Additional configuration options are available. To learn more, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+> * Additional configuration options are available. To learn more, see [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
+> * "Trace" in diagnostic logging is only available for [Custom question answering](/azure/cognitive-services/qnamaker/how-to/get-analytics-knowledge-base?tabs=v2).
## Enable diagnostic log collection
Let's start by enabling diagnostic logging using the Azure portal.
> [!NOTE]
> To enable this feature using PowerShell or the Azure CLI, use the instructions provided in [Collect and consume log data from your Azure resources](../azure-monitor/essentials/platform-logs-overview.md).
-1. Navigate to the Azure portal. Then locate and select a Cognitive Services resource. For example, your subscription to Bing Web Search.
+1. Navigate to the Azure portal. Then locate and select a Cognitive Services resource. For example, your subscription to Speech Services.
2. Next, from the left-hand navigation menu, locate **Monitoring** and select **Diagnostic settings**. This screen contains all previously created diagnostic settings for this resource.
3. If there is a previously created resource that you'd like to use, you can select it now. Otherwise, select **+ Add diagnostic setting**.
4. Enter a name for the setting. Then select **Archive to a storage account** and **Send to Log Analytics**.
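The same setting can also be created programmatically. A minimal Azure CLI sketch follows; the setting name and the `Audit` log category are illustrative (available categories vary by service), and the resource, storage account, and workspace IDs are placeholders:

```azurecli
# Create a diagnostic setting that archives logs to a storage account
# and sends them to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name my-diagnostic-setting \
  --resource $RESOURCE_ID \
  --storage-account $STORAGE_ACCOUNT_ID \
  --workspace $LOG_ANALYTICS_WORKSPACE_ID \
  --logs '[{"category": "Audit", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```

You can list the log categories a resource supports with `az monitor diagnostic-settings categories list --resource $RESOURCE_ID`.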
cognitive-services What Are Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/what-are-cognitive-services.md
Azure Cognitive Services are cloud-based services with REST APIs and client libr
## Categories of Cognitive Services
-The catalog of cognitive services that provide cognitive understanding are categorized into five main pillars:
+The catalog of cognitive services that provide cognitive understanding is categorized into five main pillars:
* Vision
* Speech
* Decision
* Search
-The following sections in this article provides a list of services that are part of these five pillars.
+The following sections in this article provide a list of services that are part of these five pillars.
## Vision APIs
|Service Name|Service Description|
|:--|:--|
-|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data. See [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) to get started with the service|
+|[Anomaly Detector](./anomaly-detector/index.yml "Anomaly Detector") |Anomaly Detector allows you to monitor and detect abnormalities in your time series data. See [Anomaly Detector quickstart](./anomaly-detector/quickstarts/client-libraries.md) to get started with the service.|
|[Content Moderator](./content-moderator/overview.md "Content Moderator")|Content Moderator provides monitoring for possible offensive, undesirable, and risky content. See [Content Moderator quickstart](./content-moderator/client-libraries.md) to get started with the service.|
|[Personalizer](./personalizer/index.yml "Personalizer")|Personalizer allows you to choose the best experience to show to your users, learning from their real-time behavior. See [Personalizer quickstart](./personalizer/quickstart-personalizer-sdk.md) to get started with the service.|
|[Bing Custom Search](/azure/cognitive-services/bing-custom-search "Bing Custom Search")|Bing Custom Search allows you to create tailored search experiences for topics that you care about.|
|[Bing Entity Search](/azure/cognitive-services/bing-entities-search/ "Bing Entity Search")|Bing Entity Search returns information about entities that Bing determines are relevant to a user's query.|
|[Bing Image Search](/azure/cognitive-services/bing-image-search "Bing Image Search")|Bing Image Search returns a display of images determined to be relevant to the user's query.|
-|[Bing Visual Search](/azure/cognitive-services/bing-visual-search "Bing Visual Search")|Bing Visual Search provides returns insights about an image such as visually similar images, shopping sources for products found in the image, and related searches.|
+|[Bing Visual Search](/azure/cognitive-services/bing-visual-search "Bing Visual Search")|Bing Visual Search returns insights about an image such as visually similar images, shopping sources for products found in the image, and related searches.|
|[Bing Local Business Search](/azure/cognitive-services/bing-local-business-search/ "Bing Local Business Search")|Bing Local Business Search API enables your applications to find contact and location information about local businesses based on search queries.|
|[Bing Spell Check](/azure/cognitive-services/bing-spell-check/ "Bing Spell Check")|Bing Spell Check allows you to perform contextual grammar and spell checking.|
Cognitive Services provides several support options to help you move forward wit
* [Create a Cognitive Services account](cognitive-services-apis-create-account.md "Create a Cognitive Services account")
* [What's new in Cognitive Services docs](whats-new-docs.md "What's new in Cognitive Services docs")
-* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
+* [Plan and manage costs for Cognitive Services](plan-manage-costs.md)
confidential-computing How To Fortanix Confidential Computing Manager Node Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/how-to-fortanix-confidential-computing-manager-node-agent.md
# Run an application by using Fortanix Confidential Computing Manager
-Learn how to run your application in Azure confidential computing by using [Fortanix Confidential Computing Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.enclave_manager?tab=Overview) and [Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
+Learn how to run your application in Azure confidential computing by using [Fortanix Confidential Computing Manager](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.em_managed?tab=Overview) and [Node Agent](https://azuremarketplace.microsoft.com/marketplace/apps/fortanix.rte_node_agent) from [Fortanix](https://www.fortanix.com/).
Fortanix is a third-party software vendor that provides products and services that work with the Azure infrastructure. There are other third-party providers that offer similar confidential computing services for Azure.
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* You can't use a [managed identity](container-instances-managed-identity.md) in a container group deployed to a virtual network.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
* Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance.
+* Outbound connection to port 25 is not supported at this time.
* If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource.
* [IPv6 addresses](../virtual-network/ipv6-overview.md) are not supported at this time.
container-registry Container Registry Check Health https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-check-health.md
Title: Check registry health
description: Learn how to run a quick diagnostic command to identify common problems when using an Azure container registry, including local Docker configuration and connectivity to the registry
Previously updated : 07/02/2019
Last updated : 07/14/2021

# Check the health of an Azure container registry
To check access to a registry as well as perform local environment checks, pass
az acr check-health --name myregistry
```
+### Check registry access in a virtual network
+
+To verify DNS settings to route to a private endpoint, pass the virtual network's name or resource ID. The resource ID is required when the virtual network is in a different subscription or resource group than the registry.
+
+```azurecli
+az acr check-health --name myregistry --vnet myvnet
+```
+
## Error reporting

The command logs information to the standard output. If a problem is detected, it provides an error code and description. For more information about the codes and possible solutions, see the [error reference](container-registry-health-error-reference.md).
By default, the command stops whenever it finds an error. You can also run the c
# Check environment only
az acr check-health --ignore-errors
-# Check environment and target registry
-az acr check-health --name myregistry --ignore-errors
+# Check environment and target registry; skip confirmation to pull image
+az acr check-health --name myregistry --ignore-errors --yes
```

Sample output:
Fetch refresh token for registry 'myregistry.azurecr.io' : OK
Fetch access token for registry 'myregistry.azurecr.io' : OK
```

## Next steps

For details about error codes returned by the [az acr check-health][az-acr-check-health] command, see the [Health check error reference](container-registry-health-error-reference.md).
container-registry Container Registry Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-private-link.md
Title: Set up private endpoint with private link
description: Set up a private endpoint on a container registry and enable access over a private link in a local virtual network. Private link access is a feature of the Premium service tier.
Previously updated : 03/31/2021
Last updated : 07/14/2021

# Connect privately to an Azure container registry using Azure Private Link

Limit access to a registry by assigning virtual network private IP addresses to the registry endpoints and using [Azure Private Link](../private-link/private-link-overview.md). Network traffic between the clients on the virtual network and the registry's private endpoints traverses the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. Private Link also enables private registry access from on-premises through [Azure ExpressRoute](../expressroute/expressroute-introduction.MD) private peering or a [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You can [configure DNS settings](../private-link/private-endpoint-overview.md#dns-configuration) for the registry's private endpoints, so that the settings resolve to the registry's allocated private IP address. With DNS configuration, clients and services in the network can continue to access the registry at the registry's fully qualified domain name, such as *myregistry.azurecr.io*.
-This feature is available in the **Premium** container registry service tier. Currently, a maximum of 10 private endpoints can be set up for a registry. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md).
+This article shows how to configure a private endpoint for your registry using the Azure portal (recommended) or the Azure CLI. This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry tiers](container-registry-skus.md).
[!INCLUDE [container-registry-scanning-limitation](../../includes/container-registry-scanning-limitation.md)]
+> [!NOTE]
+> Currently, a maximum of 10 private endpoints can be set up for a registry.
+
## Prerequisites
+* A virtual network and subnet in which to set up the private endpoint. If needed, [create a new virtual network and subnet](../virtual-network/quick-create-portal.md).
+* For testing, it's recommended to set up a VM in the virtual network. For steps to create a test virtual machine to access your registry, see [Create a Docker-enabled virtual machine](container-registry-vnet.md#create-a-docker-enabled-virtual-machine).
* To use the Azure CLI steps in this article, Azure CLI version 2.6.0 or later is recommended. If you need to install or upgrade, see [Install Azure CLI][azure-cli]. Or run in [Azure Cloud Shell](../cloud-shell/quickstart.md).
* If you don't already have a container registry, create one (Premium tier required) and [import](container-registry-import-images.md) a sample public image such as `mcr.microsoft.com/hello-world` from Microsoft Container Registry. For example, use the [Azure portal][quickstart-portal] or the [Azure CLI][quickstart-cli] to create a registry.
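If you're starting from scratch, the prerequisite registry and sample image can be created with a few CLI commands. This is a sketch; the resource group name, registry name, and region are placeholders:

```azurecli
az group create --name myResourceGroup --location westeurope

# Private Link requires the Premium service tier
az acr create --name myregistry --resource-group myResourceGroup --sku Premium

# Import a sample public image from Microsoft Container Registry
az acr import --name myregistry --source mcr.microsoft.com/hello-world --image hello-world:latest
```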
-* To configure registry access using a private link in a different Azure subscription, you need to register the resource provider for Azure Container Registry in that subscription. For example:
- ```azurecli
- az account set --subscription <Name or ID of subscription of private link>
+### Register container registry resource provider
- az provider register --namespace Microsoft.ContainerRegistry
- ```
+To configure registry access using a private link in a different Azure subscription or tenant, you need to [register the resource provider](../azure-resource-manager/management/resource-providers-and-types.md) for Azure Container Registry in that subscription. Use the Azure portal, Azure CLI, or other tools.
-The Azure CLI examples in this article use the following environment variables. Substitute values appropriate for your environment. All examples are formatted for the Bash shell:
+Example:
-```bash
-REGISTRY_NAME=<container-registry-name>
-REGISTRY_LOCATION=<container-registry-location> # Azure region such as westeurope where registry created
-RESOURCE_GROUP=<resource-group-name>
-VM_NAME=<virtual-machine-name>
-```
+```azurecli
+az account set --subscription <Name or ID of subscription of private link>
+az provider register --namespace Microsoft.ContainerRegistry
+```
-## Set up private link - CLI
+## Set up private endpoint - portal (recommended)
-### Get network and subnet names
+Set up a private endpoint when you create a registry, or add a private endpoint to an existing registry.
-If you don't have them already, you'll need the names of a virtual network and subnet to set up a private link. In this example, you use the same subnet for the VM and the registry's private endpoint. However, in many scenarios you would set up the endpoint in a separate subnet.
+### Create a private endpoint - new registry
-When you create a VM, Azure by default creates a virtual network in the same resource group. The name of the virtual network is based on the name of the virtual machine. For example, if you name your virtual machine *myDockerVM*, the default virtual network name is *myDockerVMVNET*, with a subnet named *myDockerVMSubnet*. Set these values in environment variables by running the [az network vnet list][az-network-vnet-list] command:
+1. When creating a registry in the portal, on the **Basics** tab, in **SKU**, select **Premium**.
+1. Select the **Networking** tab.
+1. In **Network connectivity**, select **Private endpoint** > **+ Add**.
+1. Enter or select the following information:
-```azurecli
-NETWORK_NAME=$(az network vnet list \
- --resource-group $RESOURCE_GROUP \
- --query '[].{Name: name}' --output tsv)
+ | Setting | Value |
+ | - | -- |
+ | Subscription | Select your subscription. |
+ | Resource group | Enter the name of an existing group or create a new one.|
+ | Name | Enter a unique name. |
+ | Registry subresource |Select **registry**|
+ | **Networking** | |
+ | Virtual network| Select the virtual network for the private endpoint. Example: *myDockerVMVNET*. |
+ | Subnet | Select the subnet for the private endpoint. Example: *myDockerVMSubnet*. |
+ |**Private DNS integration**||
+ |Integrate with private DNS zone |Select **Yes**. |
+ |Private DNS Zone |Select *(New) privatelink.azurecr.io* |
+ |||
+1. Configure the remaining registry settings, and then select **Review + create**.
+
-SUBNET_NAME=$(az network vnet list \
- --resource-group $RESOURCE_GROUP \
- --query '[].{Subnet: subnets[0].name}' --output tsv)
-echo NETWORK_NAME=$NETWORK_NAME
-echo SUBNET_NAME=$SUBNET_NAME
-```
+Your private link is now configured and ready for use.
+
+### Create a private endpoint - existing registry
+
+1. In the portal, navigate to your container registry.
+1. Under **Settings**, select **Networking**.
+1. On the **Private endpoints** tab, select **+ Private endpoint**.
+ :::image type="content" source="media/container-registry-private-link/private-endpoint-existing-registry.png" alt-text="Add private endpoint to registry":::
+
+1. In the **Basics** tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Enter the name of an existing group or create a new one.|
+ | **Instance details** | |
+ | Name | Enter a name. |
+ |Region|Select a region.|
+ |||
+1. Select **Next: Resource**.
+1. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ |Connection method | For this example, select **Connect to an Azure resource in my directory**.|
+ | Subscription| Select your subscription. |
+ | Resource type | Select **Microsoft.ContainerRegistry/registries**. |
+ | Resource |Select the name of your registry|
+ |Target subresource |Select **registry**|
+ |||
+1. Select **Next: Configuration**.
+1. Enter or select the information:
+
+ | Setting | Value |
+ | - | -- |
+ |**Networking**| |
+ | Virtual network| Select the virtual network for the private endpoint |
+ | Subnet | Select the subnet for the private endpoint |
+ |**Private DNS Integration**||
+ |Integrate with private DNS zone |Select **Yes**. |
+ |Private DNS Zone |Select *(New) privatelink.azurecr.io* |
+ |||
+
+1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
+1. When you see the **Validation passed** message, select **Create**.
+
+### Confirm endpoint configuration
+
+After the private endpoint is created, DNS settings in the private zone appear with the **Private endpoints** settings in the portal:
+
+1. In the portal, navigate to your container registry and select **Settings > Networking**.
+1. On the **Private endpoints** tab, select the private endpoint you created.
+1. Select **DNS configuration**.
+1. Review the link settings and custom DNS settings.
+
+## Set up private endpoint - CLI
+
+The Azure CLI examples in this article use the following environment variables. You'll need the names of an existing container registry, virtual network, and subnet to set up a private endpoint. Substitute values appropriate for your environment. All examples are formatted for the Bash shell:
+
+```bash
+REGISTRY_NAME=<container-registry-name>
+REGISTRY_LOCATION=<container-registry-location> # Azure region such as westeurope where registry created
+RESOURCE_GROUP=<resource-group-name> # Resource group for your existing virtual network and subnet
+NETWORK_NAME=<virtual-network-name>
+SUBNET_NAME=<subnet-name>
+```
```

### Disable network policies in subnet

[Disable network policies](../private-link/disable-private-endpoint-network-policy.md) such as network security groups in the subnet for the private endpoint. Update your subnet configuration with [az network vnet subnet update][az-network-vnet-subnet-update]:
az network private-endpoint create \
### Get endpoint IP configuration
-To configure DNS records, get the IP configuration of the private endpoint. Associated with the private endpoint's network interface in this example are two private IP addresses for the container registry: one for the registry itself, and one for the registry's data endpoint.
+To configure DNS records, get the IP configuration of the private endpoint. Associated with the private endpoint's network interface in this example are two private IP addresses for the container registry: one for the registry itself, and one for the registry's data endpoint. If your registry is geo-replicated, an additional IP address is associated with each replica.
First, run [az network private-endpoint show][az-network-private-endpoint-show] to query the private endpoint for the network interface ID:
NETWORK_INTERFACE_ID=$(az network private-endpoint show \
  --output tsv)
```
-The following [az network nic show][az-network-nic-show] commands get the private IP addresses for the container registry and the registry's data endpoint:
+The following [az network nic show][az-network-nic-show] commands get the private IP addresses and FQDNs for the container registry and the registry's data endpoint:
```azurecli
REGISTRY_PRIVATE_IP=$(az network nic show \
DATA_ENDPOINT_FQDN=$(az network nic show \
  --output tsv)
```
-> [!NOTE]
-> If your registry is [geo-replicated](container-registry-geo-replication.md), query for the additional data endpoint for each registry replica.
+#### Additional endpoints for geo-replicas
+
+If your registry is [geo-replicated](container-registry-geo-replication.md), query for the additional data endpoint for each registry replica. For example, in the *eastus* region:
+
+```azurecli
+REPLICA_LOCATION=eastus
+GEO_REPLICA_DATA_ENDPOINT_PRIVATE_IP=$(az network nic show \
+ --ids $NETWORK_INTERFACE_ID \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REPLICA_LOCATION'].privateIpAddress" \
+ --output tsv)
+GEO_REPLICA_DATA_ENDPOINT_FQDN=$(az network nic show \
+ --ids $NETWORK_INTERFACE_ID \
+ --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$REPLICA_LOCATION'].privateLinkConnectionProperties.fqdns" \
+ --output tsv)
+```
### Create DNS records in the private zone

The following commands create DNS records in the private zone for the registry endpoint and its data endpoint. For example, if you have a registry named *myregistry* in the *westeurope* region, the endpoint names are `myregistry.azurecr.io` and `myregistry.westeurope.data.azurecr.io`.
-> [!NOTE]
-> If your registry is [geo-replicated](container-registry-geo-replication.md), create additonal DNS records for each replica's data endpoint IP.
-
-First run [az network private-dns record-set a create][az-network-private-dns-record-set-a-create] to create empty A record sets for the registry endpoint and data endpoint:
+First run [az network private-dns record-set a create][az-network-private-dns-record-set-a-create] to create empty A-record sets for the registry endpoint and data endpoint:
```azurecli
az network private-dns record-set a create \
az network private-dns record-set a create \
  --resource-group $RESOURCE_GROUP
```
-Run the [az network private-dns record-set a add-record][az-network-private-dns-record-set-a-add-record] command to create the A records for the registry endpoint and data endpoint:
+Run the [az network private-dns record-set a add-record][az-network-private-dns-record-set-a-add-record] command to create the A-records for the registry endpoint and data endpoint:
```azurecli
az network private-dns record-set a add-record \
az network private-dns record-set a add-record \
  --ipv4-address $DATA_ENDPOINT_PRIVATE_IP
```
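To confirm that the records resolve to the private IP addresses, you can query them from a VM inside the virtual network. This is a sketch; substitute your registry's name and region:

```bash
# Each lookup should return a private IP address from the subnet,
# not a public address.
nslookup myregistry.azurecr.io
nslookup myregistry.westeurope.data.azurecr.io
```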
-The private link is now configured and ready for use.
-
-## Set up private link - portal
-
-Set up a private link when you create a registry, or add a private link to an existing registry. The following steps assume you already have a virtual network and subnet set up with a VM for testing. You can also [create a new virtual network and subnet](../virtual-network/quick-create-portal.md).
-
-### Create a private endpoint - new registry
-
-1. When creating a registry in the portal, on the **Basics** tab, in **SKU**, select **Premium**.
-1. Select the **Networking** tab.
-1. In **Network connectivity**, select **Private endpoint** > **+ Add**.
-1. Enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Subscription | Select your subscription. |
- | Resource group | Enter the name of an existing group or create a new one.|
- | Name | Enter a unique name. |
- | Subresource |Select **registry**|
- | **Networking** | |
- | Virtual network| Select the virtual network where your virtual machine is deployed, such as *myDockerVMVNET*. |
- | Subnet | Select a subnet, such as *myDockerVMSubnet* where your virtual machine is deployed. |
- |**Private DNS Integration**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New) privatelink.azurecr.io* |
- |||
-1. Configure the remaining registry settings, and then select **Review + Create**.
+#### Additional records for geo-replicas
- ![Create registry with private endpoint](./media/container-registry-private-link/private-link-create-portal.png)
+If your registry is geo-replicated, create additional DNS settings for each replica. Continuing the example in the *eastus* region:
-### Create a private endpoint - existing registry
-
-1. In the portal, navigate to your container registry.
-1. Under **Settings**, select **Networking**.
-1. On the **Private endpoints** tab, select **+ Private endpoint**.
-1. In the **Basics** tab, enter or select the following information:
+```azurecli
+az network private-dns record-set a create \
+ --name ${REGISTRY_NAME}.${REPLICA_LOCATION}.data \
+ --zone-name privatelink.azurecr.io \
+ --resource-group $RESOURCE_GROUP
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Enter the name of an existing group or create a new one.|
- | **Instance details** | |
- | Name | Enter a name. |
- |Region|Select a region.|
- |||
-5. Select **Next: Resource**.
-6. Enter or select the following information:
+az network private-dns record-set a add-record \
+ --record-set-name ${REGISTRY_NAME}.${REPLICA_LOCATION}.data \
+ --zone-name privatelink.azurecr.io \
+ --resource-group $RESOURCE_GROUP \
+ --ipv4-address $GEO_REPLICA_DATA_ENDPOINT_PRIVATE_IP
+```
- | Setting | Value |
- | - | -- |
- |Connection method | Select **Connect to an Azure resource in my directory**.|
- | Subscription| Select your subscription. |
- | Resource type | Select **Microsoft.ContainerRegistry/registries**. |
- | Resource |Select the name of your registry|
- |Target subresource |Select **registry**|
- |||
-7. Select **Next: Configuration**.
-8. Enter or select the information:
+The private link is now configured and ready for use.
- | Setting | Value |
- | - | -- |
- |**Networking**| |
- | Virtual network| Select the virtual network where your virtual machine is deployed, such as *myDockerVMVNET*. |
- | Subnet | Select a subnet, such as *myDockerVMSubnet* where your virtual machine is deployed. |
- |**Private DNS Integration**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New) privatelink.azurecr.io* |
- |||
+## Disable public access
-1. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-2. When you see the **Validation passed** message, select **Create**.
+For many scenarios, disable registry access from public networks. This configuration prevents clients outside the virtual network from reaching the registry endpoints.
-After the private endpoint is created, DNS settings in the private zone appear on the **Private endpoints** page in the portal:
+### Disable public access - portal
1. In the portal, navigate to your container registry and select **Settings > Networking**.
-1. On the **Private endpoints** tab, select the private endpoint you created.
-1. On the **Overview** page, review the link settings and custom DNS settings.
-
- ![Endpoint DNS settings](./media/container-registry-private-link/private-endpoint-overview.png)
-
-Your private link is now configured and ready for use.
-
-## Disable public access
-
-For many scenarios, disable registry access from public networks. This configuration prevents clients outside the virtual network from reaching the registry endpoints.
+1. On the **Public access** tab, in **Allow public network access**, select **Disabled**. Then select **Save**.
### Disable public access - CLI

To disable public access using the Azure CLI, run [az acr update][az-acr-update] and set `--public-network-enabled` to `false`.
-> [!NOTE]
-> The `public-network-enabled` argument requires Azure CLI 2.6.0 or later.
-
```azurecli
az acr update --name $REGISTRY_NAME --public-network-enabled false
```
-
-### Disable public access - portal
-
-1. In the portal, navigate to your container registry and select **Settings > Networking**.
-1. On the **Public access** tab, in **Allow public network access**, select **Disabled**. Then select **Save**.
-
## Validate private link connection

You should validate that the resources within the subnet of the private endpoint connect to your registry over a private IP address, and have the correct private DNS zone integration.
-To validate the private link connection, SSH to the virtual machine you set up in the virtual network.
+To validate the private link connection, connect to the virtual machine you set up in the virtual network.
Run a utility such as `nslookup` or `dig` to look up the IP address of your registry over the private link. For example:
xxxx.westeurope.cloudapp.azure.com. 10 IN A 20.45.122.144
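A lookup that goes through the private endpoint should return an address from your subnet rather than a public one. As a quick sketch (POSIX shell, with hypothetical addresses), you can classify a resolved address against the RFC 1918 private ranges:

```shell
# Classify an IPv4 address as RFC 1918 private or public.
# A working private link lookup for the registry FQDN should yield "private".
is_private_ip() {
  case "$1" in
    10.*|192.168.*)                          echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  echo private ;;
    *)                                       echo public ;;
  esac
}

is_private_ip "10.0.0.7"       # typical for a registry behind a private endpoint
is_private_ip "20.45.122.144"  # the public address shown in the example above
```

Combine it with `dig +short <registry>.azurecr.io` run from the VM to check the result automatically.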
### Registry operations over private link
-Also verify that you can perform registry operations from the virtual machine in the subnet. Make an SSH connection to your virtual machine, and run [az acr login][az-acr-login] to login to your registry. Depending on your VM configuration, you might need to prefix the following commands with `sudo`.
+Also verify that you can perform registry operations from the virtual machine in the network. Make an SSH connection to your virtual machine, and run [az acr login][az-acr-login] to log in to your registry. Depending on your VM configuration, you might need to prefix the following commands with `sudo`.
```bash
az acr login --name $REGISTRY_NAME
For some scenarios, you may need to manually configure DNS records in a private
> [!IMPORTANT]
> If you later add a new replica, you need to manually add a new DNS record for the data endpoint in that region. For example, if you create a replica of *myregistry* in the northeurope location, add a record for `myregistry.northeurope.data.azurecr.io`.
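To keep track of which records you still need, it can help to enumerate every FQDN that must resolve privately. A minimal sketch, using a hypothetical registry name and regions:

```shell
# Print every registry FQDN that needs an A record in the
# privatelink.azurecr.io zone: the registry endpoint plus one data
# endpoint per region, including each geo-replica.
registry_fqdns() {
  name=$1; home=$2; shift 2
  echo "${name}.azurecr.io"
  echo "${name}.${home}.data.azurecr.io"
  for replica in "$@"; do
    echo "${name}.${replica}.data.azurecr.io"
  done
}

registry_fqdns myregistry westeurope northeurope
```

Feed each printed FQDN to `dig +short` from a VM in the virtual network to confirm it resolves to the matching private IP.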
-The FQDNs and private IP addresses you need to create DNS records are associated with the private endpoint's network interface. You can obtain this information using the Azure CLI or from the portal:
+The FQDNs and private IP addresses you need to create DNS records are associated with the private endpoint's network interface. You can obtain this information using the Azure portal or Azure CLI.
+* In the portal, navigate to your private endpoint, and select **DNS configuration**.
* Using the Azure CLI, run the [az network nic show][az-network-nic-show] command. For example commands, see [Get endpoint IP configuration](#get-endpoint-ip-configuration), earlier in this article.
-* In the portal, navigate to your private endpoint, and select **DNS configuration**.
-
After creating DNS records, make sure that the registry FQDNs resolve properly to their respective private IP addresses.

## Clean up resources
+To clean up your resources in the portal, navigate to your resource group. Once the resource group is loaded, click on **Delete resource group** to remove the resource group and the resources stored there.
+
If you created all the Azure resources in the same resource group and no longer need them, you can optionally delete the resources by using a single [az group delete](/cli/azure/group) command:

```azurecli
az group delete --name $RESOURCE_GROUP
```
-To clean up your resources in the portal, navigate to your resource group. Once the resource group is loaded, click on **Delete resource group** to remove the resource group and the resources stored there.
-
## Next steps

* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.
+* To verify DNS settings in the virtual network that route to a private endpoint, run the [az acr check-health](/cli/azure/acr#az_acr_check_health) command with the `--vnet` parameter. For more information, see [Check the health of an Azure container registry](container-registry-check-health.md).
+
* If you need to set up registry access rules from behind a client firewall, see [Configure rules to access an Azure container registry behind a firewall](container-registry-firewall-access-rules.md).
* [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md)
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-troubleshoot-access.md
Title: Troubleshoot network issues with registry
description: Symptoms, causes, and resolution of common problems when accessing an Azure container registry in a virtual network or behind a firewall
Previously updated : 03/30/2021 Last updated : 05/10/2021
# Troubleshoot network issues with registry
May include one or more of the following:
* A client firewall or proxy prevents access - [solution](#configure-client-firewall-access)
* Public network access rules on the registry prevent access - [solution](#configure-public-access-to-registry)
-* Virtual network configuration prevents access - [solution](#configure-vnet-access)
+* Virtual network or private endpoint configuration prevents access - [solution](#configure-vnet-access)
* You attempt to integrate Azure Security Center or certain other Azure services with a registry that has a private endpoint, service endpoint, or public IP access rules - [solution](#configure-service-access)

## Further diagnosis
Related links:
Confirm that the virtual network is configured with either a private endpoint for Private Link or a service endpoint (preview). Currently an Azure Bastion endpoint isn't supported.
-If a private endpoint is configured, confirm that DNS resolves the registry's public FQDN such as *myregistry.azurecr.io* to the registry's private IP address. Use a network utility such as `dig` or `nslookup` for DNS lookup. Ensure that [DNS records are configured](container-registry-private-link.md#dns-configuration-options) for the registry FQDN and for each of the data endpoint FQDNs.
+If a private endpoint is configured, confirm that DNS resolves the registry's public FQDN such as *myregistry.azurecr.io* to the registry's private IP address.
+
+ * Run the [az acr check-health](/cli/azure/acr#az_acr_check_health) command with the `--vnet` parameter to confirm the DNS routing to the private endpoint in the virtual network.
+ * Use a network utility such as `dig` or `nslookup` for DNS lookup.
+ * Ensure that [DNS records are configured](container-registry-private-link.md#dns-configuration-options) for the registry FQDN and for each of the data endpoint FQDNs.
Review NSG rules and service tags used to limit traffic from other resources in the network to the registry.
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
description: Learn how to troubleshoot connector issues in Azure Data Factory.
Previously updated : 07/12/2021 Last updated : 07/16/2021
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
3. Update the table schema accordingly.
+### Error code: FailedDbOperation
+
+- **Message**: `User does not have permission to perform this action.`
+
+- **Recommendation**: Make sure that the user configured in the Azure Synapse Analytics connector has 'CONTROL' permission on the target database when using PolyBase to load data. For more information, see this [document](https://docs.microsoft.com/azure/data-factory/connector-azure-sql-data-warehouse#required-database-permission).
+
## Azure Table Storage
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
Title: Clean up SSISDB logs with Azure Elastic Database Jobs
-description: "This article describes how to clean up SSISDB logs by using Azure Elastic Database jobs to trigger the stored procedure that exists for this purpose"
+ Title: How to clean up SSISDB logs automatically
+description: This article describes how to clean up SSIS project deployment and package execution logs stored in SSISDB by invoking the relevant SSISDB stored procedure automatically via Azure Data Factory, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
Previously updated : 07/09/2020 Last updated : 07/18/2021
-# Clean up SSISDB logs with Azure Elastic Database Jobs
+# How to clean up SSISDB logs automatically
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This article describes how to use Azure Elastic Database Jobs to trigger the stored procedure that cleans up logs for the SQL Server Integration Services catalog database, `SSISDB`.
+Once you provision an Azure-SQL Server Integration Services (SSIS) integration runtime (IR) in Azure Data Factory (ADF), you can use it to run SSIS packages deployed into:
-Elastic Database Jobs is an Azure service that makes it easy to automate and run jobs against a database or a group of databases. You can schedule, run, and monitor these jobs by using the Azure portal, Transact-SQL, PowerShell, or REST APIs. Use the Elastic Database Job to trigger the stored procedure for log cleanup one time or on a schedule. You can choose the schedule interval based on SSISDB resource usage to avoid heavy database load.
+- SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model)
+- file system, Azure Files, or SQL Server database (MSDB) hosted by Azure SQL Managed Instance (Package Deployment Model)
+
+In the Project Deployment Model, your Azure-SSIS IR will deploy SSIS projects into SSISDB, fetch SSIS packages to run from SSISDB, and write package execution logs back into SSISDB. To manage the accumulated logs, we've provided relevant SSISDB properties and a stored procedure that can be invoked automatically via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
+
+## SSISDB log clean-up properties and stored procedure
+To configure SSISDB log clean-up properties, connect to SSISDB hosted by your Azure SQL Database server/Managed Instance using SQL Server Management Studio (SSMS); see [Connecting to SSISDB](https://docs.microsoft.com/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial?view=sql-server-ver15#connect-to-the-ssisdb-database). Once connected, in the **Object Explorer** window of SSMS, expand the **Integration Services Catalogs** node, right-click the **SSISDB** subnode, and select **Properties** to open the **Catalog Properties** dialog box. There, you can find the following SSISDB log clean-up properties:
+
+- **Clean Logs Periodically**: Enables automatic clean-up of package execution logs. Set to *True* by default.
+- **Retention Period (days)**: Specifies the maximum age of retained logs in days. Set to *365* by default; older logs are deleted by automatic clean-up.
+- **Periodically Remove Old Versions**: Enables automatic clean-up of stored project versions. Set to *True* by default.
+- **Maximum Number of Versions per Project**: Specifies the maximum number of stored project versions. Set to *10* by default; older versions are deleted by automatic clean-up.
+
+![SSISDB log clean-up properties](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/clean-up-logs-ssms-ssisdb-properties.png)
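These dialog box settings map to documented SSISDB catalog properties, which you can also set in T-SQL through the `catalog.configure_catalog` stored procedure. The sketch below runs it via `sqlcmd`; the server name, login, and property values are hypothetical examples, not values from this article:

```shell
# Hypothetical server and credentials; sets the same four clean-up properties
# shown in the Catalog Properties dialog box via catalog.configure_catalog.
sqlcmd -S myssisdbserver.database.windows.net -d SSISDB -U ssisadmin -P '<password>' -Q "
EXEC catalog.configure_catalog @property_name = N'OPERATION_CLEANUP_ENABLED', @property_value = 1;   -- Clean Logs Periodically
EXEC catalog.configure_catalog @property_name = N'RETENTION_WINDOW',          @property_value = 365; -- Retention Period (days)
EXEC catalog.configure_catalog @property_name = N'VERSION_CLEANUP_ENABLED',   @property_value = 1;   -- Periodically Remove Old Versions
EXEC catalog.configure_catalog @property_name = N'MAX_PROJECT_VERSIONS',      @property_value = 10;  -- Maximum Number of Versions per Project
"
```

This is a configuration fragment that requires a live server, so run it only after verifying the connection details for your own SSISDB.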
+
+Once SSISDB log clean-up properties are configured, you can invoke the relevant SSISDB stored procedure, `[internal].[cleanup_server_retention_window_exclusive]`, to clean up logs automatically via ADF, Azure SQL Managed Instance Agent, or Elastic Database Jobs.
+
+## Clean up SSISDB logs automatically via ADF
+Regardless of whether you use Azure SQL Database server or Managed Instance to host SSISDB, you can always use ADF to clean up SSISDB logs automatically. To do so, prepare an Execute SSIS Package activity in an ADF pipeline with an embedded package containing a single Execute SQL Task that invokes the relevant SSISDB stored procedure. See example 4) in our blog: [Run Any SQL Anywhere in 3 Easy Steps with SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244).
+
+![SSISDB log clean-up via ADF](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/run-sql-ssis-activity-ssis-parameters-ssisdb-clean-up.png)
+
+Once your ADF pipeline is prepared, you can attach a schedule trigger to run it periodically; see [How to trigger ADF pipeline on a schedule](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule).
+
+## Clean up SSISDB logs automatically via Azure SQL Managed Instance Agent
+If you use Azure SQL Managed Instance to host SSISDB, you can also use its built-in job orchestrator/scheduler, Azure SQL Managed Instance Agent, to clean up SSISDB logs automatically. If SSISDB was recently created in your Azure SQL Managed Instance, we've also created a T-SQL job called **SSIS Server Maintenance Job** under Azure SQL Managed Instance Agent for this purpose. By default, it's disabled and configured with a schedule to run daily. If you want to enable it or reconfigure its schedule, you can do so by connecting to your Azure SQL Managed Instance using SSMS. Once connected, in the **Object Explorer** window of SSMS, expand the **SQL Server Agent** node, expand the **Jobs** subnode, and double-click the **SSIS Server Maintenance Job** to enable/reconfigure it.
+
+![SSISDB log clean-up via Azure SQL Managed Instance Agent](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/clean-up-logs-ssms-maintenance-job.png)
+
+If your Azure SQL Managed Instance Agent doesn't yet have the **SSIS Server Maintenance Job** created under it, you can add it manually by running the following T-SQL script on your Azure SQL Managed Instance.
+
+```sql
+USE msdb
+IF EXISTS(SELECT * FROM sys.server_principals where name = '##MS_SSISServerCleanupJobLogin##')
+ DROP LOGIN ##MS_SSISServerCleanupJobLogin##
+
+DECLARE @loginPassword nvarchar(256)
+SELECT @loginPassword = REPLACE (CONVERT( nvarchar(256), CRYPT_GEN_RANDOM( 64 )), N'''', N'''''')
+EXEC ('CREATE LOGIN ##MS_SSISServerCleanupJobLogin## WITH PASSWORD =''' +@loginPassword + ''', CHECK_POLICY = OFF')
+ALTER LOGIN ##MS_SSISServerCleanupJobLogin## DISABLE
+
+USE master
+GRANT VIEW SERVER STATE TO ##MS_SSISServerCleanupJobLogin##
+
+USE SSISDB
+IF EXISTS (SELECT name FROM sys.database_principals WHERE name = '##MS_SSISServerCleanupJobUser##')
+ DROP USER ##MS_SSISServerCleanupJobUser##
+CREATE USER ##MS_SSISServerCleanupJobUser## FOR LOGIN ##MS_SSISServerCleanupJobLogin##
+GRANT EXECUTE ON [internal].[cleanup_server_retention_window_exclusive] TO ##MS_SSISServerCleanupJobUser##
+GRANT EXECUTE ON [internal].[cleanup_server_project_version] TO ##MS_SSISServerCleanupJobUser##
+
+USE msdb
+EXEC dbo.sp_add_job
+ @job_name = N'SSIS Server Maintenance Job',
+ @enabled = 0,
+ @owner_login_name = '##MS_SSISServerCleanupJobLogin##',
+ @description = N'Runs every day. The job removes operation records from the database that are outside the retention window and maintains a maximum number of versions per project.'
+
+DECLARE @IS_server_name NVARCHAR(30)
+SELECT @IS_server_name = CONVERT(NVARCHAR, SERVERPROPERTY('ServerName'))
+EXEC sp_add_jobserver @job_name = N'SSIS Server Maintenance Job',
+ @server_name = @IS_server_name
+
+EXEC sp_add_jobstep
+ @job_name = N'SSIS Server Maintenance Job',
+ @step_name = N'SSIS Server Operation Records Maintenance',
+ @subsystem = N'TSQL',
+ @command = N'
+ DECLARE @role int
+ SET @role = (SELECT [role] FROM [sys].[dm_hadr_availability_replica_states] hars INNER JOIN [sys].[availability_databases_cluster] adc ON hars.[group_id] = adc.[group_id] WHERE hars.[is_local] = 1 AND adc.[database_name] =''SSISDB'')
+ IF DB_ID(''SSISDB'') IS NOT NULL AND (@role IS NULL OR @role = 1)
+ EXEC [SSISDB].[internal].[cleanup_server_retention_window_exclusive]',
+ @database_name = N'msdb',
+ @on_success_action = 3,
+ @retry_attempts = 3,
+ @retry_interval = 3;
+
+EXEC sp_add_jobstep
+ @job_name = N'SSIS Server Maintenance Job',
+ @step_name = N'SSIS Server Max Version Per Project Maintenance',
+ @subsystem = N'TSQL',
+ @command = N'
+ DECLARE @role int
+ SET @role = (SELECT [role] FROM [sys].[dm_hadr_availability_replica_states] hars INNER JOIN [sys].[availability_databases_cluster] adc ON hars.[group_id] = adc.[group_id] WHERE hars.[is_local] = 1 AND adc.[database_name] =''SSISDB'')
+ IF DB_ID(''SSISDB'') IS NOT NULL AND (@role IS NULL OR @role = 1)
+ EXEC [SSISDB].[internal].[cleanup_server_project_version]',
+ @database_name = N'msdb',
+ @retry_attempts = 3,
+ @retry_interval = 3;
+
+EXEC sp_add_jobschedule
+ @job_name = N'SSIS Server Maintenance Job',
+ @name = 'SSISDB Scheduler',
+ @enabled = 1,
+ @freq_type = 4, /*daily*/
+ @freq_interval = 1,/*every day*/
+ @freq_subday_type = 0x1,
+ @active_start_date = 20001231,
+ @active_end_date = 99991231,
+ @active_start_time = 0,
+ @active_end_time = 120000
+```
+
+## Clean up SSISDB logs automatically via Elastic Database Jobs
+If you use Azure SQL Database server to host SSISDB, there's no built-in job orchestrator/scheduler, so you must use an external component, for example ADF (see above) or Elastic Database Jobs (see the rest of this section), to clean up SSISDB logs automatically.
+
+Elastic Database Jobs is an Azure service that can automate and run jobs against a database or group of databases. You can schedule, run, and monitor these jobs by using Azure portal, Azure PowerShell, T-SQL, or REST APIs. Use Elastic Database Jobs to invoke the relevant SSISDB stored procedure for log clean-up one time or on a schedule. You can choose the schedule interval based on SSISDB resource usage to avoid heavy database load.
For more info, see [Manage groups of databases with Elastic Database Jobs](../azure-sql/database/elastic-jobs-overview.md).
-The following sections describe how to trigger the stored procedure `[internal].[cleanup_server_retention_window_exclusive]`, which removes SSISDB logs that are outside the retention window set by the administrator.
+The following sections describe how to invoke the relevant SSISDB stored procedure, `[internal].[cleanup_server_retention_window_exclusive]`, which removes SSISDB logs that are outside the configured retention window.
-## Clean up logs with Power Shell
+### Configure Elastic Database Jobs using Azure PowerShell
[!INCLUDE [requires-azurerm](../../includes/requires-azurerm.md)]
-The following sample PowerShell scripts create a new Elastic Job to trigger the stored procedure for SSISDB log cleanup. For more info, see [Create an Elastic Job agent using PowerShell](../azure-sql/database/elastic-jobs-powershell-create.md).
+The following Azure PowerShell scripts create a new Elastic Job that invokes SSISDB log clean-up stored procedure. For more info, see [Create an Elastic Job agent using PowerShell](../azure-sql/database/elastic-jobs-powershell-create.md).
-### Create parameters
+#### Create parameters
``` powershell
-# Parameters needed to create the Job Database
+# Parameters needed to create your job database
param( $ResourceGroupName = $(Read-Host "Please enter an existing resource group name"),
-$AgentServerName = $(Read-Host "Please enter the name of an existing logical SQL server(for example, yhxserver) to hold the SSISDBLogCleanup job database"),
-$SSISDBLogCleanupJobDB = $(Read-Host "Please enter a name for the Job Database to be created in the given SQL Server"),
-# The Job Database should be a clean,empty,S0 or higher service tier. We set S0 as default.
+$AgentServerName = $(Read-Host "Please enter the name of an existing Azure SQL Database server, for example myjobserver, to hold your job database"),
+$SSISDBLogCleanupJobDB = $(Read-Host "Please enter a name for your job database to be created in the given Azure SQL Database server"),
+
+# Your job database should be a clean, empty S0 or higher service tier. We set S0 as default.
$PricingTier = "S0",
-# Parameters needed to create the Elastic Job agent
-$SSISDBLogCleanupAgentName = $(Read-Host "Please enter a name for your new Elastic Job agent"),
+# Parameters needed to create your Elastic Job agent
+$SSISDBLogCleanupAgentName = $(Read-Host "Please enter a name for your Elastic Job agent"),
+
+# Parameters needed to create credentials in your job database for connecting to SSISDB
+$PasswordForSSISDBCleanupUser = $(Read-Host "Please provide a new password for the log clean-up job user to connect to SSISDB"),
-# Parameters needed to create the job credential in the Job Database to connect to SSISDB
-$PasswordForSSISDBCleanupUser = $(Read-Host "Please provide a new password for SSISDBLogCleanup job user to connect to SSISDB database for log cleanup"),
-# Parameters needed to create a login and a user in the SSISDB of the target server
-$SSISDBServerEndpoint = $(Read-Host "Please enter the name of the target logical SQL server which contains SSISDB you need to cleanup, for example, myserver") + '.database.windows.net',
+# Parameters needed to create the login and user for SSISDB
+$SSISDBServerEndpoint = $(Read-Host "Please enter the name of target Azure SQL Database server that contains SSISDB, for example myssisdbserver") + '.database.windows.net',
$SSISDBServerAdminUserName = $(Read-Host "Please enter the target server admin username for SQL authentication"), $SSISDBServerAdminPassword = $(Read-Host "Please enter the target server admin password for SQL authentication"), $SSISDBName = "SSISDB",
-# Parameters needed to set job scheduling to trigger execution of cleanup stored procedure
-$RunJobOrNot = $(Read-Host "Please indicate whether you want to run the job to cleanup SSISDB logs outside the log retention window immediately(Y/N). Make sure the retention window is set appropriately before running the following powershell scripts. Those removed SSISDB logs cannot be recovered"),
-$IntervalType = $(Read-Host "Please enter the interval type for the execution schedule of SSISDB log cleanup stored procedure. For the interval type, Year, Month, Day, Hour, Minute, Second can be supported."),
-$IntervalCount = $(Read-Host "Please enter the detailed interval value in the given interval type for the execution schedule of SSISDB log cleanup stored procedure"),
-# StartTime of the execution schedule is set as the current time as default.
+# Parameters needed to set the job schedule for invoking SSISDB log clean-up stored procedure
+$RunJobOrNot = $(Read-Host "Please indicate whether you want to run the job to clean up SSISDB logs outside the retention window immediately (Y/N). Make sure the retention window is set properly before running the following scripts as deleted logs cannot be recovered."),
+$IntervalType = $(Read-Host "Please enter the interval type for SSISDB log clean-up schedule: Year, Month, Day, Hour, Minute, Second are supported."),
+$IntervalCount = $(Read-Host "Please enter the interval count for SSISDB log clean-up schedule."),
+
+# The start time for SSISDB log clean-up schedule is set to current time by default.
$StartTime = (Get-Date) ```
-### Trigger the cleanup stored procedure
+#### Invoke SSISDB log clean-up stored procedure
```powershell
-# Install the latest PackageManagement powershell package which PowershellGet v1.6.5 is dependent on
+# Install the latest PowerShell PackageManagement module that PowerShellGet v1.6.5 depends on
Find-Package PackageManagement -RequiredVersion 1.1.7.2 | Install-Package -Force
-# You may need to restart the powershell session
-# Install the latest PowershellGet module which adds the -AllowPrerelease flag to Install-Module
+
+# You may need to restart your PowerShell session
+# Install the latest PowerShellGet module that adds the -AllowPrerelease flag to Install-Module
Find-Package PowerShellGet -RequiredVersion 1.6.5 | Install-Package -Force
-# Place AzureRM.Sql preview cmdlets side by side with existing AzureRM.Sql version
+# Install AzureRM.Sql preview cmdlets side by side with the existing AzureRM.Sql version
Install-Module -Name AzureRM.Sql -AllowPrerelease -Force # Sign in to your Azure account Connect-AzureRmAccount
-# Create a Job Database which is used for defining jobs of triggering SSISDB log cleanup stored procedure and tracking cleanup history of jobs
-Write-Output "Creating a blank SQL database to be used as the SSISDBLogCleanup Job Database ..."
+# Create your job database for defining SSISDB log clean-up job and tracking the job history
+Write-Output "Creating a blank SQL database to be used as your job database ..."
$JobDatabase = New-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $AgentServerName -DatabaseName $SSISDBLogCleanupJobDB -RequestedServiceObjectiveName $PricingTier $JobDatabase
-# Enable the Elastic Jobs preview in your Azure subscription
+# Enable Elastic Database Jobs preview in your Azure subscription
Register-AzureRmProviderFeature -FeatureName sqldb-JobAccounts -ProviderNamespace Microsoft.Sql
-# Create the Elastic Job agent
-Write-Output "Creating the Elastic Job agent..."
+# Create your Elastic Job agent
+Write-Output "Creating your Elastic Job agent..."
$JobAgent = $JobDatabase | New-AzureRmSqlElasticJobAgent -Name $SSISDBLogCleanupAgentName $JobAgent
-# Create the job credential in the Job Database to connect to SSISDB database in the target server for log cleanup
-Write-Output "Creating job credential to connect to SSISDB database..."
+# Create job credentials in your job database for connecting to SSISDB in target server
+Write-Output "Creating job credentials for connecting to SSISDB..."
$JobCredSecure = ConvertTo-SecureString -String $PasswordForSSISDBCleanupUser -AsPlainText -Force $JobCred = New-Object -TypeName "System.Management.Automation.PSCredential" -ArgumentList "SSISDBLogCleanupUser", $JobCredSecure $JobCred = $JobAgent | New-AzureRmSqlElasticJobCredential -Name "SSISDBLogCleanupUser" -Credential $JobCred
-# In the master database of the target SQL server which contains SSISDB to cleanup
-# - Create the job user login
-Write-Output "Grant permissions on the master database of the target server..."
+# Create the job user login in master database of target server
+Write-Output "Grant permissions on the master database of target server..."
$Params = @{ 'Database' = 'master' 'ServerInstance' = $SSISDBServerEndpoint
$Params = @{
} Invoke-SqlCmd @Params
-# For SSISDB database of the target SQL server
-# - Create the SSISDBLogCleanup user from the SSISDBlogCleanup user login
-# - Grant permissions for the execution of SSISDB log cleanup stored procedure
-Write-Output "Grant appropriate permissions on SSISDB database..."
+# Create SSISDB log clean-up user from login in SSISDB and grant it permissions to invoke SSISDB log clean-up stored procedure
+Write-Output "Grant appropriate permissions on SSISDB..."
$TargetDatabase = $SSISDBName $CreateJobUser = "CREATE USER SSISDBLogCleanupUser FROM LOGIN SSISDBLogCleanupUser" $GrantStoredProcedureExecution = "GRANT EXECUTE ON internal.cleanup_server_retention_window_exclusive TO SSISDBLogCleanupUser"
$TargetDatabase | ForEach-Object -Process {
Invoke-SqlCmd @Params }
-# Create a target group which includes SSISDB database needed to cleanup
-Write-Output "Creating the target group including only SSISDB database needed to cleanup ..."
+# Create your target group that includes only SSISDB to clean up
+Write-Output "Creating your target group that includes only SSISDB to clean up..."
$SSISDBTargetGroup = $JobAgent | New-AzureRmSqlElasticJobTargetGroup -Name "SSISDBTargetGroup" $SSISDBTargetGroup | Add-AzureRmSqlElasticJobTarget -ServerName $SSISDBServerEndpoint -Database $SSISDBName
-# Create the job to trigger execution of SSISDB log cleanup stored procedure
-Write-Output "Creating a new job to trigger execution of the stored procedure for SSISDB log cleanup"
+# Create your job to invoke SSISDB log clean-up stored procedure
+Write-Output "Creating your job to invoke SSISDB log clean-up stored procedure..."
$JobName = "CleanupSSISDBLog" $Job = $JobAgent | New-AzureRmSqlElasticJob -Name $JobName -RunOnce $Job
-# Add the job step to execute internal.cleanup_server_retention_window_exclusive
-Write-Output "Adding the job step for the cleanup stored procedure execution"
+# Add your job step to invoke internal.cleanup_server_retention_window_exclusive
+Write-Output "Adding your job step to invoke SSISDB log clean-up stored procedure..."
$SqlText = "EXEC internal.cleanup_server_retention_window_exclusive"
-$Job | Add-AzureRmSqlElasticJobStep -Name "step to execute cleanup stored procedure" -TargetGroupName $SSISDBTargetGroup.TargetGroupName -CredentialName $JobCred.CredentialName -CommandText $SqlText
+$Job | Add-AzureRmSqlElasticJobStep -Name "Step to invoke SSISDB log clean-up stored procedure" -TargetGroupName $SSISDBTargetGroup.TargetGroupName -CredentialName $JobCred.CredentialName -CommandText $SqlText
-# Run the job to immediately start cleanup stored procedure execution for once
+# Run your job to immediately invoke SSISDB log clean-up stored procedure once
if ($RunJobOrNot -eq 'Y') {
-Write-Output "Start a new execution of the stored procedure for SSISDB log cleanup immediately..."
+Write-Output "Invoking SSISDB log clean-up stored procedure immediately..."
$JobExecution = $Job | Start-AzureRmSqlElasticJob $JobExecution }
-# Schedule the job running to trigger stored procedure execution on schedule for removing SSISDB logs outside the retention window
-Write-Output "Start the execution schedule of the stored procedure for SSISDB log cleanup..."
+# Schedule your job to invoke SSISDB log clean-up stored procedure periodically, deleting SSISDB logs outside the retention window
+Write-Output "Starting your schedule to invoke SSISDB log clean-up stored procedure periodically..."
$Job | Set-AzureRmSqlElasticJob -IntervalType $IntervalType -IntervalCount $IntervalCount -StartTime $StartTime -Enable ```
-## Clean up logs with Transact-SQL
+### Configure Elastic Database Jobs using T-SQL
-The following sample Transact-SQL scripts create a new Elastic Job to trigger the stored procedure for SSISDB log cleanup. For more info, see [Use Transact-SQL (T-SQL) to create and manage Elastic Database Jobs](../azure-sql/database/elastic-jobs-tsql-create-manage.md).
+The following T-SQL scripts create a new Elastic Job that invokes the SSISDB log clean-up stored procedure. For more info, see [Use T-SQL to create and manage Elastic Database Jobs](../azure-sql/database/elastic-jobs-tsql-create-manage.md).
-1. Create or identify an empty S0 or higher Azure SQL Database to be the SSISDBCleanup Job Database. Then create an Elastic Job Agent in the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.SQLElasticJobAgent).
+1. Identify an empty Azure SQL Database at the S0 or higher service tier, or create a new one, to use as your job database. Then create an Elastic Job Agent in the [Azure portal](https://ms.portal.azure.com/#create/Microsoft.SQLElasticJobAgent).
-2. In the Job Database, create a credential for the SSISDB log cleanup job. This credential is used to connect to your SSISDB database to clean up the logs.
+2. In your job database, create credentials for connecting to SSISDB in your target server.
```sql
- -- Connect to the job database specified when creating the job agent
- -- Create a database master key if one does not already exist, using your own password.
- CREATE MASTER KEY ENCRYPTION BY PASSWORD= '<EnterStrongPasswordHere>';
+ -- Connect to the job database specified when creating your job agent.
+ -- Create a database master key if one doesn't already exist, using your own password.
+ CREATE MASTER KEY ENCRYPTION BY PASSWORD= '<EnterStrongPasswordHere>';
- -- Create a credential for SSISDB log cleanup.
+ -- Create credentials for SSISDB log clean-up.
    CREATE DATABASE SCOPED CREDENTIAL SSISDBLogCleanupCred WITH IDENTITY = 'SSISDBLogCleanupUser', SECRET = '<EnterStrongPasswordHere>';
    ```
-3. Define the target group that includes the SSISDB database for which you want to run the cleanup stored procedure.
+3. Define your target group that includes only SSISDB to clean up.
```sql
- -- Connect to the job database
- -- Add a target group
+ -- Connect to your job database.
+ -- Add your target group.
EXEC jobs.sp_add_target_group 'SSISDBTargetGroup'
- -- Add SSISDB database into the target group
+ -- Add SSISDB to your target group
EXEC jobs.sp_add_target_group_member 'SSISDBTargetGroup', @target_type = 'SqlDatabase', @server_name = '<EnterSSISDBTargetServerName>',
- @database_name = '<EnterSSISDBName>'
+ @database_name = 'SSISDB'
- --View the recently created target group and target group members
+ -- View your recently created target group and its members.
    SELECT * FROM jobs.target_groups WHERE target_group_name = 'SSISDBTargetGroup';
    SELECT * FROM jobs.target_group_members WHERE target_group_name = 'SSISDBTargetGroup';
    ```
-4. Grant appropriate permissions for the SSISDB database. The SSISDB catalog must have proper permissions for the stored procedure to run SSISDB log cleanup successfully. For detailed guidance, see [Manage logins](../azure-sql/database/logins-create-manage.md).
+4. Create an SSISDB log clean-up user from a login in SSISDB, and grant it permissions to invoke the SSISDB log clean-up stored procedure. For detailed guidance, see [Manage logins](../azure-sql/database/logins-create-manage.md).
```sql
- -- Connect to the master database in the target server including SSISDB
+ -- Connect to the master database of target server that hosts SSISDB
CREATE LOGIN SSISDBLogCleanupUser WITH PASSWORD = '<strong_password>';
- -- Connect to SSISDB database in the target server to cleanup logs
+ -- Connect to SSISDB
    CREATE USER SSISDBLogCleanupUser FROM LOGIN SSISDBLogCleanupUser;
    GRANT EXECUTE ON internal.cleanup_server_retention_window_exclusive TO SSISDBLogCleanupUser
    ```
-5. Create the job and add a job step to trigger the execution of the stored procedure for SSISDB log cleanup.
+5. Create your job and add your job step to invoke SSISDB log clean-up stored procedure.
```sql
- --Connect to the job database
- --Add the job for the execution of SSISDB log cleanup stored procedure.
- EXEC jobs.sp_add_job @job_name='CleanupSSISDBLog', @description='Remove SSISDB logs which are outside the retention window'
+ -- Connect to your job database.
+ -- Add your job to invoke SSISDB log clean-up stored procedure.
+ EXEC jobs.sp_add_job @job_name='CleanupSSISDBLog', @description='Remove SSISDB logs outside the configured retention window'
- --Add a job step to execute internal.cleanup_server_retention_window_exclusive
+ -- Add your job step to invoke internal.cleanup_server_retention_window_exclusive
    EXEC jobs.sp_add_jobstep @job_name='CleanupSSISDBLog', @command=N'EXEC internal.cleanup_server_retention_window_exclusive', @credential_name='SSISDBLogCleanupCred', @target_group_name='SSISDBTargetGroup'
    ```
-6. Before you continue, make sure the retention window has been set appropriately. SSISDB logs outside the window are deleted and can't be recovered.
-
- Then you can run the job immediately to begin SSISDB log cleanup.
+6. Before continuing, make sure you set the retention window properly. SSISDB logs outside this window will be deleted and can't be recovered. You can then run your job immediately to start SSISDB log clean-up.
```sql
- --Connect to the job database
- --Run the job immediately to execute the stored procedure for SSISDB log cleanup
+ -- Connect to your job database.
+ -- Run your job immediately to invoke SSISDB log clean-up stored procedure.
    declare @je uniqueidentifier
    exec jobs.sp_start_job 'CleanupSSISDBLog', @job_execution_id = @je output
- --Watch the execution results for SSISDB log cleanup
+ -- Watch SSISDB log clean-up results
    select @je
    select * from jobs.job_executions where job_execution_id = @je
    ```
-7. Optionally, schedule job executions to remove SSISDB logs outside the retention window on a schedule. Use a similar statement to update the job parameters.
+7. Optionally, you can delete SSISDB logs outside the retention window on a schedule. Configure your job parameters as follows.
```sql
- --Connect to the job database
+ -- Connect to your job database.
EXEC jobs.sp_update_job @job_name='CleanupSSISDBLog', @enabled=1,
The following sample Transact-SQL scripts create a new Elastic Job to trigger th
    @schedule_end_time='<EnterProperEndTimeForSchedule>'
    ```
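Step 6 above warns that SSISDB logs outside the retention window are deleted permanently. As a rough illustration of what "outside the retention window" means (the actual cut-off is computed by the stored procedure; this sketch, with a hypothetical helper name, just shows the arithmetic, assuming the window is expressed in days):

```python
from datetime import datetime, timedelta

def retention_cutoff(now: datetime, retention_window_days: int) -> datetime:
    # Log entries older than this timestamp are "outside the retention window"
    # and would be removed by the clean-up stored procedure.
    return now - timedelta(days=retention_window_days)

print(retention_cutoff(datetime(2021, 7, 20), 30))  # 2021-06-20 00:00:00
```

So with a 30-day window and a clean-up run on July 20, any log entry older than June 20 would be removed and can't be recovered.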
-## Monitor the cleanup job in the Azure portal
+### Monitor SSISDB log clean-up job using Azure portal
-You can monitor the execution of the cleanup job in the Azure portal. For each execution, you see the status, start time, and end time of the job.
+You can monitor the SSISDB log clean-up job in the Azure portal. For each execution, you can see its status, start time, and end time.
-![Monitor the cleanup job in the Azure portal](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/monitor-cleanup-job-portal.png)
+![Monitor SSISDB log clean-up job in Azure portal](media/how-to-clean-up-ssisdb-logs-with-elastic-jobs/monitor-cleanup-job-portal.png)
-## Monitor the cleanup job with Transact-SQL
+### Monitor SSISDB log clean-up job using T-SQL
-You can also use Transact-SQL to view the execution history of the cleanup job.
+You can also use T-SQL to view the execution history of the SSISDB log clean-up job.
```sql
- --Connect to the job database
- --View all execution statuses for the job to cleanup SSISDB logs
+-- Connect to your job database.
+-- View all SSISDB log clean-up job executions.
SELECT * FROM jobs.job_executions WHERE job_name = 'CleanupSSISDBLog' ORDER BY start_time DESC
- --View all active executions
+-- View all active executions.
SELECT * FROM jobs.job_executions WHERE is_active = 1 ORDER BY start_time DESC
```

## Next steps
-For management and monitoring tasks related to the Azure-SSIS Integration Runtime, see the following articles. The Azure-SSIS IR is the runtime engine for SSIS packages stored in SSISDB in Azure SQL Database.
+To manage and monitor your Azure-SSIS IR, see the following articles.
-- [Reconfigure the Azure-SSIS integration runtime](manage-azure-ssis-integration-runtime.md)
+- [Reconfigure the Azure-SSIS integration runtime](manage-azure-ssis-integration-runtime.md)
-- [Monitor the Azure-SSIS integration runtime](monitor-integration-runtime.md#azure-ssis-integration-runtime).
+- [Monitor the Azure-SSIS integration runtime](monitor-integration-runtime.md#azure-ssis-integration-runtime).
databox-online Azure Stack Edge Gpu 2106 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2106-release-notes.md
+
+ Title: Azure Stack Edge 2106 release notes
+description: Describes critical open issues and resolutions for the Azure Stack Edge running 2106 release.
+ Last updated : 07/19/2021
+# Azure Stack Edge 2106 release notes
+The following release notes identify the critical open issues and the resolved issues for the 2106 release for your Azure Stack Edge devices. These release notes are applicable for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R devices. Features and issues that correspond to a specific model are called out wherever applicable.
+
+The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they are added. Before you deploy your device, carefully review the information contained in the release notes.
+
+This article applies to the **Azure Stack Edge 2106** release, which maps to software version number **2.2.1636.3457**. This software can be applied to your device if you are running at least Azure Stack Edge 2010 (2.1.1377.2170) software.
+
+## What's new
+
+The following new features are available in the Azure Stack Edge 2106 release.
+
+- **Windows updates and security fixes** - The [latest cumulative update (LCU) for Windows and June security fixes](https://support.microsoft.com/en-us/topic/june-8-2021-kb5003697-monthly-rollup-457aa997-18a0-46e9-8612-497f01ccaa54) were rolled into the updates package for Azure Stack Edge.
+- **Bug fixes for Azure Private Multi-Access Edge Compute** - Multiple issues were fixed for Azure Private MEC deployments.
+
+ - Issues related to guest VM health monitoring such as link flapping, errors in boot log, and reboots.
+ - Memory resource consumption over time.
+ - Mellanox driver, firmware, and tools.
+ - Tools to debug VM-related issues and network health check.
+ - Issues that caused Single root I/O virtualization (SR-IOV) VM outbound packets or the traffic from LAN/WAN VM NetAdapters to be dropped.
+- **Log collection improvements** - This release has log collection improvements related to Azure Stack Edge update scenarios.
+## Issues fixed in 2106 release
+
+The following table lists the issues that were noted in previous release notes and are fixed in the current release.
+
+| No. | Feature | Issue |
+| | | |
+|**1.**|Azure Private MEC |VM net adapter link status flaps at boot time and periodically.|
+|**2.**|Azure Private MEC |VFtoPF DHCP redirect flag when used on Mellanox network interfaces can cause the packets to be dropped.|
+|**3.**|Azure Private MEC |The Mellanox network interface driver, firmware, and tools need to be upgraded to version 2.60.|
+|**4.**|VM |The cmdlet `Get-VMInguestLogs` available for the collection of VM guest logs when connecting via the PowerShell interface of the device fails.|
+|**5.**|Azure Private MEC |When web proxy is configured, the web proxy bypass setting causes VM provisioning failure. |
+|**6.**|Azure Private MEC |For MEC/NFM deployments prior to the 2105 update, you may face this rare issue where traffic from LAN/WAN VM NetAdapters is dropped. In 2106, this issue is fixed by setting the enableIPForwarding to true on VM LAN/WAN network interfaces, regardless of whether the VMs were created before 2105 or after 2105 release. |
|**7.**|Azure Private MEC |Single root I/O virtualization (SR-IOV) VM's outbound packets may be dropped by the Mellanox network interfaces (Port 5 and Port 6 on the device) when a combination of Mellanox driver, SR-IOV Virtual Functions (VF) and vftopfDHCPRedirect feature is used. In 2106, the issue is fixed by disabling the vftopfDHCPRedirect feature. |
+## Known issues in 2106 release
+
+The following table provides a summary of known issues in the 2106 release.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
+|**1.**|Preview features |For this release, the following features: Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, Azure Arc enabled Kubernetes, VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R, Multi-process service (MPS), and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU - are all available in preview. |These features will be generally available in later releases. |
+## Known issues from previous releases
+
+The following table provides a summary of known issues carried over from the previous releases.
+
+| No. | Feature | Issue | Workaround/comments |
+| | | | |
| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <ul><li>In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**</li><li>Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). </li><li>Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.</li><li>Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd".</li><li>After this, steps 3-4 from the current documentation should be identical.</li></ul> |
| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, a sequence of actions such as:<ul><li>Create blob in cloud. Or delete a previously uploaded blob from the device.</li><li>Refresh blob from the cloud into the appliance using the refresh functionality.</li><li>Update only a portion of the blob using Azure SDK REST APIs.</li></ul>These actions can result in the updated sections of the blob not being updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.|
|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: cannot create directory 'test': Permission denied|
+|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.|
+|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<ul><li> Only block blobs are supported. Page blobs are not supported.</li><li>There is no snapshot or copy API support.</li><li> Hadoop workload ingestion through `distcp` is not supported as it uses the copy operation heavily.</li></ul>||
|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.|
+|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You will need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.|
|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| |
+|**9.**|Azure Arc enabled Kubernetes |For the GA release, Azure Arc enabled Kubernetes is updated from version 0.1.18 to 0.2.9. As the Azure Arc enabled Kubernetes update is not supported on Azure Stack Edge device, you will need to redeploy Azure Arc enabled Kubernetes.|Follow these steps:<ol><li>[Apply device software and Kubernetes updates](azure-stack-edge-gpu-install-update.md).</li><li>Connect to the [PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md).</li><li>Remove the existing Azure Arc agent. Type: `Remove-HcsKubernetesAzureArcAgent`.</li><li>Deploy [Azure Arc to a new resource](azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md). Do not use an existing Azure Arc resource.</li></ol>|
+|**10.**|Azure Arc enabled Kubernetes|Azure Arc deployments are not supported if web proxy is configured on your Azure Stack Edge Pro device.||
+|**11.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Do not use reserved IPs.|
+|**12.**|Kubernetes |Kubernetes does not currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. <br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).|
+|**13.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see Modify Azure IoT Edge modules from marketplace to run on Azure Stack Edge device.<!-- insert link-->|
+|**14.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts cannot be bound to paths in IoT Edge containers. If possible, map the parent directory.|
+|**15.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates are not picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.|
+|**16.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected.<ul><li>**Status** column in **Certificates** page.</li><li>**Security** tile in **Get started** page.</li><li>**Configuration** tile in **Overview** page.</li></ul> |
+|**17.**|IoT Edge |Modules deployed through IoT Edge can't use host network. | |
+|**18.**|Compute + Kubernetes |Compute/Kubernetes does not support NTLM web proxy. ||
+|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.|
|**20.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.|
+|**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | |
|**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). |
+|**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| |
+|**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
+|**26.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it is not possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create 1 VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the remaining available 1 GPU. |
+|**27.**|Custom script VM extension |There is a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <ul><li> Connect to the Windows VM using remote desktop protocol (RDP). </li><li> Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. </li><li> If the `waappagent.exe` is not running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.</li><li> While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. </li><li>After you kill the process, the process starts running again with the newer version.</li><li>Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.</li><li>[Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). </li><ul> |
|**28.**|GPU VMs |Prior to this release, GPU VM lifecycle was not managed in the update flow. Hence, when updating to the 2103 release, GPU VMs are not stopped automatically during the update. You will need to manually stop the GPU VMs using a `stop-stayProvisioned` flag before you update your device. For more information, see [Suspend or shut down the VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#suspend-or-shut-down-the-vm).<br> All the GPU VMs that are kept running before the update are started after the update. In these instances, the workloads running on the VMs aren't terminated gracefully. And the VMs could potentially end up in an undesirable state after the update. <br>All the GPU VMs that are stopped via the `stop-stayProvisioned` flag before the update are automatically started after the update. <br>If you stop the GPU VMs via the Azure portal, you'll need to manually start the VM after the device update.| If running GPU VMs with Kubernetes, stop the GPU VMs right before the update. <br>When the GPU VMs are stopped, Kubernetes will take over the GPUs that were used originally by VMs. <br>The longer the GPU VMs are in stopped state, the higher the chances that Kubernetes will take over the GPUs. |
+|**29.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting is not retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. |
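Issue 22 in the table above recommends replacing ":" in .NET configuration keys with a double underscore, which Kubernetes accepts in environment variable names. A minimal sketch of that mapping convention (the function names here are illustrative, not part of any product):

```python
# Hypothetical helpers illustrating the ":" vs "__" convention: .NET
# configuration keys use ":" as a hierarchy separator, but ":" isn't valid
# in Kubernetes env var names, so "__" is used and mapped back to ":".

def to_kubernetes_env_name(dotnet_config_key: str) -> str:
    """Replace ':' with '__' so the name is valid as a Kubernetes env var."""
    return dotnet_config_key.replace(":", "__")

def to_dotnet_config_key(env_name: str) -> str:
    """Map a double-underscore env var name back to a .NET config key."""
    return env_name.replace("__", ":")

print(to_kubernetes_env_name("Logging:LogLevel:Default"))  # Logging__LogLevel__Default
print(to_dotnet_config_key("Logging__LogLevel__Default"))  # Logging:LogLevel:Default
```

For example, the configuration key `Logging:LogLevel:Default` would be set as the environment variable `Logging__LogLevel__Default` in a pod spec.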
+## Next steps
+
+- [Update your device](azure-stack-edge-gpu-install-update.md)
databox-online Azure Stack Edge Gpu Create Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-image.md
Previously updated : 07/08/2021
Last updated : 07/16/2021
#Customer intent: As an IT admin, I need to understand how to create Azure VM images that I can use to deploy virtual machines on my Azure Stack Edge Pro GPU device.
You can now use this VHD to create and deploy VMs on your Azure Stack Edge Pro G
## Copy VHD to storage account using AzCopy
-The following procedures describe how to use AzCopy to copy a custom VM image to an Azure Storage account so you can use the image to deploy VMs on your Azure Stack Edge Pro GPU device. We recommend that you store your custom VM images in the same storage account that you're using for your Azure Stack Edge Pro GPU device.
+The following procedures describe how to use AzCopy to copy a custom VM image to an Azure Storage account so you can use the image to deploy VMs on your Azure Stack Edge Pro GPU device. We recommend that you store your custom VM images in any existing storage account that you're using, in the same region and subscription as your Azure Stack Edge.
### Create target URI for a container
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-install-update.md
Previously updated : 05/27/2021
Last updated : 07/12/2021
# Update your Azure Stack Edge Pro GPU
This article describes the steps required to install update on your Azure Stack
The procedure described in this article was performed using a different version of software, but the process remains the same for the current software version.

> [!IMPORTANT]
-> - Update **2105** is the current update and corresponds to:
-> - Device software version - **2.2.1606.3320**
+> - Update **2106** is the current update and corresponds to:
+> - Device software version - **2.2.1636.3457**
> - Kubernetes server version - **v1.20.2**
> - IoT Edge version: **0.1.0-beta14**
> - GPU driver version: **460.32.03**
Do the following steps to download the update from the Microsoft Update Catalog.
<!--![Search catalog 2](./media/azure-stack-edge-gpu-install-update/download-update-2-b.png)-->
-4. Select **Download**. There are two packages to download: <!--KB 4616970 and KB 4616971--> one for the device software updates (*SoftwareUpdatePackage.exe*) and toher for the Kubernetes updates (*Kubernetes_Package.exe*) respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
+4. Select **Download**. There are two packages to download: <!--KB 4616970 and KB 4616971--> one for the device software updates (*SoftwareUpdatePackage.exe*) and another for the Kubernetes updates (*Kubernetes_Package.exe*), respectively. Download the packages to a folder on the local system. You can also copy the folder to a network share that is reachable from the device.
### Install the update or the hotfix
databox Data Box Disk Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-disk-limits.md
Previously updated : 07/15/2021
Last updated : 07/16/2021
# Azure Data Box Disk limits
For the latest information on Azure storage service limits and best practices fo
- If a folder has the same name as an existing container, the folder's contents are merged with the container's contents. Files or blobs that aren't already in the cloud are added to the container. If a file or blob has the same name as a file or blob that's already in the container, the existing file or blob is overwritten. - Every file written into *BlockBlob* and *PageBlob* shares is uploaded as a block blob and page blob respectively. - Any empty directory hierarchy (without any files) created under *BlockBlob* and *PageBlob* folders is not uploaded.
+- To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-files-shares-on-an-existing-account) before you place your order. Large file shares are only supported for storage accounts with locally redundant storage (LRS).
- If there are any errors when uploading data to Azure, an error log is created in the target storage account. The path to this error log is available in the portal when the upload is complete and you can review the log to take corrective action. Do not delete data from the source without verifying the uploaded data. - File metadata and NTFS permissions are not preserved when the data is uploaded to Azure Files. For example, the *Last modified* attribute of the files will not be kept when the data is copied. - If you specified managed disks in the order, review the following additional considerations:
databox Data Box Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-limits.md
Previously updated : 07/15/2021
Last updated : 07/16/2021
# Azure Data Box limits
defender-for-iot Concept Micro Agent Linux Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/concept-micro-agent-linux-dependencies.md
+
+ Title: Micro agent Linux dependencies (Preview)
+description: This article describes the different Linux OS dependencies for the Defender for IoT micro agent.
+ Last updated : 07/19/2021
+# Micro agent Linux dependencies (Preview)
+
+This article describes the different Linux OS dependencies for the Defender for IoT micro agent.
+
+## Linux dependencies
+
+The table below shows the Linux dependencies for each component.
+
+| Component | Dependency | Type | Required by IoT SDK | Notes |
+|--|--|--|--|--|
+| **Core** | | | | |
+| | libcurl-openssl (libcurl) | Library | ✔ | |
+| | libssl | Library | ✔ | |
+| | uuid | Library | ✔ | |
+| | pthread | ulibc compilation flag | ✔ | |
+| | libuv1 | Library | | |
+| | sudo | Package | | |
+| | uuid-runtime | Package | | |
+| **System information collector** | | | | |
+| | uname | System call | | |
+| **Baseline collector** | | | | |
+| | BusyBox | Linux compilation flag | | |
+| | Bash | Linux compilation flag | | |
+| **Process collector** | | | | |
+| | CONFIG_CONNECTOR=y | Kernel config | | |
+| | CONFIG_PROC_EVENTS=y | Kernel config | | |
+| **Network collector** | | | | |
+| | libpcap | Library | | |
+| | CONFIG_PACKET=y | Kernel config | | |
+| | CONFIG_NETFILTER=y | Kernel config | | Optional – performance improvement |
+
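As an illustrative aid (not part of the official installation steps), the kernel-config rows in the table above can be checked against a kernel config file with a short helper. The demo file path and contents below are hypothetical; on a real device the config typically lives at `/proc/config.gz` or `/boot/config-$(uname -r)`.

```shell
# Illustrative helper (not an official tool): check a kernel config file
# for the options listed in the table above.
check_kconfig() {
  # usage: check_kconfig CONFIG_OPTION /path/to/config
  grep -q "^$1=y" "$2"
}

# Demo against a small hypothetical config fragment:
cat > /tmp/demo-kconfig <<'EOF'
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
CONFIG_PACKET=y
EOF

for opt in CONFIG_CONNECTOR CONFIG_PROC_EVENTS CONFIG_PACKET CONFIG_NETFILTER; do
  if check_kconfig "$opt" /tmp/demo-kconfig; then
    echo "$opt: enabled"
  else
    echo "$opt: not set"
  fi
done
```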
+## Next steps
+
+[Install the Defender for IoT micro agent (Preview)](quickstart-standalone-agent-binary-installation.md).
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/architecture.md
Title: What is agentless solution architecture description: Learn about Azure Defender for IoT agentless architecture and information flow. Previously updated : 1/25/2021- Last updated : 07/19/2021 # Azure Defender for IoT architecture This article describes the functional system architecture of the Defender for IoT agentless solution. Azure Defender for IoT offers two sets of capabilities to fit your environment's needs, agentless solution for organizations, and agent-based solution for device builders.
-## Agentless solution for organizations
+## Agentless solution architecture for organizations
### Defender for IoT components Defender for IoT connects both to the Azure cloud and to on-premises components. The solution is designed for scalability in large and geographically distributed environments with multiple remote locations. This solution enables a multi-layered distributed architecture by country, region, business unit, or zone.
defender-for-iot Resources Sensor Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/resources-sensor-deployment-checklist.md
+
+ Title: Azure Defender for IoT pre-deployment checklist
+description: This article provides information and a checklist to use when preparing your site before deployment.
Last updated : 07/18/2021+++
+# Pre-deployment checklist overview
+
+This article provides information and a checklist to use when preparing your site before deployment, to help ensure a successful onboarding.
+
+- The Defender for IoT physical sensor should connect to managed switches that see the industrial communications between layers 1 and 2 (in some cases also layer 3).
+- The sensor listens on a switch Mirror port (SPAN port) or a TAP.
+- The management port is connected to the business/corporate network using SSL.
+
+## Checklist
+
+An overview of the industrial network diagram allows the site engineers to define the proper location for the Azure Defender for IoT equipment.
+
+### 1. Global network diagram
+
+The global network diagram provides an overview of the industrial OT environment.
+++
+> [!Note]
+> The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
+
+### 2. Committed devices
+
+Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to the Azure Defender for IoT portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
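Since the onboarding prompt works in increments of 1000, a hypothetical helper (not part of the onboarding flow) can translate a device inventory count into the value to enter:

```shell
# Hypothetical helper: round a monitored-device count up to the next
# increment of 1000, matching the onboarding prompt's granularity.
round_up_1000() {
  echo $(( (($1 + 999) / 1000) * 1000 ))
}

round_up_1000 1      # → 1000
round_up_1000 2500   # → 3000
```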
+
+### 3. (Optional) Subnet list
+
+Provide a subnet list of the production networks.
+
+| **#** | **Subnet name** | **Description** |
+|--|--|--|
+| 1 | | |
+| 2 | | |
+| 3 | | |
+| 4 | | |
+
+### 4. VLANs
+
+Provide a VLAN list of the production networks.
+
+| **#** | **VLAN Name** | **Description** |
+|--|--|--|
+| 1 | | |
+| 2 | | |
+| 3 | | |
+| 4 | | |
+
+### 5. Switch models and mirroring support
+
+To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to.
+
+| **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
+|--|--|--|--|
+| 1 | | | |
+| 2 | | | |
+| 3 | | | |
+| 4 | | | |
+
+### 6. Third-party switch management
+
+Does a third party manage the switches? Yes or No
+
+If yes, who? __________________________________
+
+What is their policy? __________________________________
+
+### 7. Serial connection
+
+Are there devices that communicate via a serial connection in the network? Yes or No
+
+If yes, specify which serial communication protocol: ________________
+
+If yes, indicate on the network diagram what devices communicate with serial protocols, and where they are.
+
+*Add your network diagram with marked serial connections.*
+
+### 8. Vendors and protocols (industrial equipment)
+
+Provide a list of vendors and protocols of the industrial equipment. (Optional)
+
+| **#** | **Vendor** | **Communication protocol** |
+|--|--|--|
+| 1 | | |
+| 2 | | |
+| 3 | | |
+| 4 | | |
+
+For example:
+
+- Siemens
+
+- Rockwell Automation – EtherNet/IP
+
+- Emerson – DeltaV, Ovation
+
+### 9. QoS
+
+For QoS, the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
+
+ Business unit (BU): ________________
+
+### 10. Sensor
+
+The sensor appliance is connected to the switch SPAN port through a network adapter. For management, it's connected to the customer's corporate network through another dedicated network adapter.
+
+Provide address details for the sensor NIC that will be connected in the corporate network:
+
+| Item | Appliance 1 | Appliance 2 | Appliance 3 |
+|--|--|--|--|
+| Appliance IP address | | | |
+| Subnet | | | |
+| Default gateway | | | |
+| DNS | | | |
+| Host name | | | |
+
+### 11. iDRAC/iLO/Server management
+
+| Item | Appliance 1 | Appliance 2 | Appliance 3 |
+|--|--|--|--|
+| Appliance IP address | | | |
+| Subnet | | | |
+| Default gateway | | | |
+| DNS | | | |
+
+### 12. On-premises management console
+
+| Item | Active | Passive (when using HA) |
+|--|--|--|
+| IP address | | |
+| Subnet | | |
+| Default gateway | | |
+| DNS | | |
+
+### 13. SNMP
+
+| Item | Details |
+|--|--|
+| IP | |
+| IP address | |
+| Username | |
+| Password | |
+| Authentication type | MD5 or SHA |
+| Encryption | DES or AES |
+| Secret key | |
+| SNMP v2 community string | |
+
+### 14. SSL certificate
+
+Are you planning to use an SSL certificate? Yes or No
+
+If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
+
+### 15. SMTP authentication
+
+Are you planning to use SMTP to forward alerts to an email server? Yes or No
+
+If yes, what authentication method will you use?
+
+### 16. Active Directory or local users
+
+Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
+
+### 17. IoT device types in the network
+
+| Device type | Number of devices in the network | Average bandwidth |
+|--|--|--|
+| Example: Camera | | |
+| Example: X-ray machine | | |
+| | | |
+| | | |
+| | | |
+| | | |
+| | | |
+| | | |
+| | | |
+| | | |
+
+## Next steps
+
+[About Azure Defender for IoT network setup](how-to-set-up-your-network.md)
+
+[About the Defender for IoT installation](how-to-install-software.md)
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/custom-event-quickstart.md
az group create --name gridResourceGroup --location westus2
## Create a custom topic
-An event grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3-50 characters and contain only values a-z, A-Z, 0-9, and "-"
+An event grid topic provides a user-defined endpoint that you post your events to. The following example creates the custom topic in your resource group using Bash in Azure Cloud Shell. Replace `<your-topic-name>` with a unique name for your topic. The custom topic name must be unique because it's part of the DNS entry. Additionally, it must be between 3-50 characters and contain only the characters a-z, A-Z, 0-9, and "-".
```azurecli-interactive topicname=<your-topic-name>
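As an aside, the name constraints above can be checked locally before calling the CLI; a small hypothetical helper (not part of the Event Grid tooling):

```shell
# Hypothetical pre-check for the topic name rules stated above:
# 3-50 characters, only a-z, A-Z, 0-9, and "-".
valid_topic_name() {
  case "$1" in
    *[!A-Za-z0-9-]*) return 1 ;;  # contains a disallowed character
  esac
  n=${#1}
  [ "$n" -ge 3 ] && [ "$n" -le 50 ]
}

valid_topic_name "my-topic-01" && echo "ok"        # → ok
valid_topic_name "bad_name!" || echo "rejected"    # → rejected
```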
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[Sprint](https://business.sprint.com/solutions/cloud-networking/)** |Supported |Supported |Chicago, Silicon Valley, Washington DC | | **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich | | **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** |Supported |Supported |Amsterdam, Chennai, Hong Kong SAR, London, Mumbai, Sao Paulo, Silicon Valley, Singapore, Washington DC |
-| **[Telefonica](https://www.business-solutions.telefonica.com/es/enterprise/solutions/efficient-infrastructure/managed-voice-data-connectivity/)** |Supported |Supported |Amsterdam, Sao Paulo |
+| **[Telefonica](https://www.telefonica.com/es/home)** |Supported |Supported |Amsterdam, Sao Paulo |
| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** |Supported |Supported |London, London2, Singapore2 | | **Telenor** |Supported |Supported |Amsterdam, London, Oslo | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported |Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Silicon Valley, Stockholm, Washington DC |
firewall-manager Secure Cloud Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/secure-cloud-network-powershell.md
In this tutorial, you learn how to:
- PowerShell 7 This tutorial requires that you run Azure PowerShell locally on PowerShell 7. To install PowerShell 7, see [Migrating from Windows PowerShell 5.1 to PowerShell 7](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7&preserve-view=true).-- Az.Network version 3.2.0-
- If you have Az.Network version 3.4.0 or later, you'll need to downgrade to use some of the commands in this tutorial. You can check the version of your Az.Network module with the command `Get-InstalledModule -Name Az.Network`. To uninstall the Az.Network module, run `Uninstall-Module -name az.network`. To install the Az.Network 3.2.0 module, run `Install-Module az.network -RequiredVersion 3.2.0 -force`.
## Sign in to Azure
Create two virtual networks and connect them to the hub as spokes:
$Spoke1 = New-AzVirtualNetwork -Name "spoke1" -ResourceGroupName $RG -Location $Location -AddressPrefix "10.1.1.0/24" $Spoke2 = New-AzVirtualNetwork -Name "spoke2" -ResourceGroupName $RG -Location $Location -AddressPrefix "10.1.2.0/24" # Connect Virtual Network to Virtual WAN
-$Spoke1Connection = New-AzVirtualHubVnetConnection -ResourceGroupName $RG -ParentResourceName $HubName -Name "spoke1" -RemoteVirtualNetwork $Spoke1
-$Spoke2Connection = New-AzVirtualHubVnetConnection -ResourceGroupName $RG -ParentResourceName $HubName -Name "spoke2" -RemoteVirtualNetwork $Spoke2
+$Spoke1Connection = New-AzVirtualHubVnetConnection -ResourceGroupName $RG -ParentResourceName $HubName -Name "spoke1" -RemoteVirtualNetwork $Spoke1 -EnableInternetSecurityFlag $True
+$Spoke2Connection = New-AzVirtualHubVnetConnection -ResourceGroupName $RG -ParentResourceName $HubName -Name "spoke2" -RemoteVirtualNetwork $Spoke2 -EnableInternetSecurityFlag $True
``` At this point, you have a fully functional Virtual WAN providing any-to-any connectivity. To enhance it with security, you need to deploy an Azure Firewall to each Virtual Hub. Firewall Policies can be used to efficiently manage the virtual WAN Azure Firewall instance. So a firewall policy is created as well in this example:
Now you can continue with the second step, to add the static routes to the `Defa
```azurepowershell # Create static routes in default Route table $AzFWId = $(Get-AzVirtualHub -ResourceGroupName $RG -name $HubName).AzureFirewall.Id
-$AzFWRoute = New-AzVHubRoute -Name "private-traffic" -Destination @("0.0.0.0/0", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16") -DestinationType "CIDR" -NextHop $AzFWId -NextHopType "ResourceId"
+$AzFWRoute = New-AzVHubRoute -Name "all_traffic" -Destination @("0.0.0.0/0", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16") -DestinationType "CIDR" -NextHop $AzFWId -NextHopType "ResourceId"
$DefaultRT = Update-AzVHubRouteTable -Name "defaultRouteTable" -ResourceGroupName $RG -VirtualHubName $HubName -Route @($AzFWRoute) ```
+> [!NOTE]
+> The string "***all_traffic***" as the value for the "-Name" parameter in the New-AzVHubRoute command above has a special meaning: if you use this exact string, the configuration applied in this article is properly reflected in the Azure portal (Firewall Manager --> Virtual hubs --> [Your Hub] --> Security Configuration). If a different name is used, the desired configuration is applied, but isn't reflected in the Azure portal.
## Test connectivity
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/features.md
Previously updated : 06/11/2021 Last updated : 07/15/2021
Azure Firewall includes the following features:
- Multiple public IP addresses - Azure Monitor logging - Forced tunneling-- Web categories (preview)
+- Web categories
- Certifications ## Built-in high availability
Azure Firewall Workbook provides a flexible canvas for Azure Firewall data analy
You can configure Azure Firewall to route all Internet-bound traffic to a designated next hop instead of going directly to the Internet. For example, you may have an on-premises edge firewall or other network virtual appliance (NVA) to process network traffic before it's passed to the Internet. For more information, see [Azure Firewall forced tunneling](forced-tunneling.md).
-## Web categories (preview)
+## Web categories
-Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories are included in Azure Firewall Standard, but it's more fine-tuned in Azure Firewall Premium Preview. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic. For more information about Azure Firewall Premium Preview, see [Azure Firewall Premium Preview features](premium-features.md).
+Web categories lets administrators allow or deny user access to website categories such as gambling websites, social media websites, and others. Web categories are included in Azure Firewall Standard, but the feature is more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU, which matches a category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic. For more information about Azure Firewall Premium, see [Azure Firewall Premium features](premium-features.md).
For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/news`, the following categorization is expected:
For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/n
The categories are organized based on severity under **Liability**, **High-Bandwidth**, **Business Use**, **Productivity Loss**, **General Surfing**, and **Uncategorized**.
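The FQDN-versus-URL distinction can be pictured with a small helper (illustrative only; this is not how the firewall itself parses traffic):

```shell
# Illustrative only: the Standard SKU categorizes on the FQDN, the Premium
# SKU on the full URL. This helper extracts the FQDN part a Standard SKU
# rule would see from a full URL.
fqdn_of() {
  url="${1#*://}"   # strip the scheme if present
  echo "${url%%/*}" # keep everything before the first "/"
}

fqdn_of "https://www.google.com/news"   # → www.google.com
```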
-### Categorization change
-
-You can request a categorization change if you:
-
-
-or
--- have a suggested category for an uncategorized FQDN or URL-
-You're welcome to submit a request at [https://aka.ms/azfw-webcategories-request](https://aka.ms/azfw-webcategories-request).
- ### Category exceptions You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the pre-defined **Social networking** web category.
Azure Firewall is Payment Card Industry (PCI), Service Organization Controls (SO
## Next steps -- [Azure Firewall Premium Preview features](premium-features.md)
+- [Azure Firewall Premium features](premium-features.md)
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Previously updated : 07/01/2021 Last updated : 07/15/2021 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
You can centrally create, enforce, and log application and network connectivity
To learn about Azure Firewall features, see [Azure Firewall features](features.md).
-## Azure Firewall Premium Preview
+## Azure Firewall Premium
-Azure Firewall Premium Preview is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. These capabilities include TLS inspection, IDPS, URL filtering, and Web categories.
+Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. These capabilities include TLS inspection, IDPS, URL filtering, and Web categories.
-To learn about Azure Firewall Premium Preview features, see [Azure Firewall Premium Preview features](premium-features.md).
+To learn about Azure Firewall Premium features, see [Azure Firewall Premium features](premium-features.md).
-To see how the Firewall Premium Preview is configured in the Azure portal, see [Azure Firewall Premium Preview in the Azure portal](premium-portal.md).
+To see how Azure Firewall Premium is configured in the Azure portal, see [Azure Firewall Premium in the Azure portal](premium-portal.md).
## Pricing and SLA
Azure Firewall has the following known issues:
|Adding a DNAT rule to a secured virtual hub with a security provider is not supported.|This results in an asynchronous route for the returning DNAT traffic, which goes to the security provider.|Not supported.| | Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource manager limit). | This is a current limitation. | - ## Next steps - [Quickstart: Create an Azure Firewall and a firewall policy - ARM template](../firewall-manager/quick-firewall-policy.md)
firewall Premium Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-certificates.md
Title: Azure Firewall Premium Preview certificates
-description: To properly configure TLS inspection on Azure Firewall Premium Preview, you must configure and install Intermediate CA certificates.
+ Title: Azure Firewall Premium certificates
+description: To properly configure TLS inspection on Azure Firewall Premium, you must configure and install Intermediate CA certificates.
Previously updated : 03/09/2021 Last updated : 07/15/2021
-# Azure Firewall Premium Preview certificates
+# Azure Firewall Premium certificates
+
-> [!IMPORTANT]
-> Azure Firewall Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- To properly configure Azure Firewall Premium Preview TLS inspection, you must provide a valid intermediate CA certificate and deposit it in Azure Key vault.
+ To properly configure Azure Firewall Premium TLS inspection, you must provide a valid intermediate CA certificate and deposit it in Azure Key vault.
-## Certificates used by Azure Firewall Premium Preview
+## Certificates used by Azure Firewall Premium
There are three types of certificates used in a typical deployment:
There are three types of certificates used in a typical deployment:
A certificate authority can issue multiple certificates in the form of a tree structure. A root certificate is the top-most certificate of the tree.
-Azure Firewall Premium Preview can intercept outbound HTTP/S traffic and auto-generate a server certificate for `www.website.com`. This certificate is generated using the Intermediate CA certificate that you provide. End-user browser and client applications must trust your organization Root CA certificate or intermediate CA certificate for this procedure to work.
+Azure Firewall Premium can intercept outbound HTTP/S traffic and auto-generate a server certificate for `www.website.com`. This certificate is generated using the Intermediate CA certificate that you provide. End-user browsers and client applications must trust your organization's Root CA certificate or intermediate CA certificate for this procedure to work.
:::image type="content" source="media/premium-certificates/certificate-process.png" alt-text="Certificate process":::
You can either create or reuse an existing user-assigned managed identity, which
## Configure a certificate in your policy
-To configure a CA certificate in your Firewall Premium policy, select your policy and then select **TLS inspection (preview)**. Select **Enabled** on the **TLS inspection** page. Then select your CA certificate in Azure Key Vault, as shown in the following figure:
+To configure a CA certificate in your Firewall Premium policy, select your policy and then select **TLS inspection**. Select **Enabled** on the **TLS inspection** page. Then select your CA certificate in Azure Key Vault, as shown in the following figure:
:::image type="content" source="media/premium-certificates/tls-inspection.png" alt-text="Azure Firewall Premium overview diagram":::
To configure a CA certificate in your Firewall Premium policy, select your polic
> To see and configure a certificate from the Azure portal, you must add your Azure user account to the Key Vault Access policy. Give your user account **Get** and **List** under **Secret Permissions**. :::image type="content" source="media/premium-certificates/secret-permissions.png" alt-text="Azure Key Vault Access policy"::: - ## Create your own self-signed CA certificate
-To help you test and verify TLS inspection, you can use the following scripts to create your own self-signed Root CA and Intermediate CA.
+If you want to create your own certificates to help you test and verify TLS inspection, you can use the following scripts to create your own self-signed Root CA and Intermediate CA.
> [!IMPORTANT] > For production, you should use your corporate PKI to create an Intermediate CA certificate. A corporate PKI leverages the existing infrastructure and handles the Root CA distribution to all endpoint machines.
-> For more information, see [Deploy and configure Enterprise CA certificates for Azure Firewall Preview](premium-deploy-certificates-enterprise-ca.md).
+> For more information, see [Deploy and configure Enterprise CA certificates for Azure Firewall](premium-deploy-certificates-enterprise-ca.md).
There are two versions of this script: - a bash script `cert.sh`
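For orientation, a chain like the one those scripts produce can be sketched with plain openssl (assumed commands, not the documented `cert.sh`; file names here are hypothetical):

```shell
# Sketch only: create a self-signed Root CA and an Intermediate CA signed
# by it. For production, use your corporate PKI instead, as noted above.
openssl req -x509 -new -nodes -newkey rsa:2048 -keyout root.key \
  -days 1024 -out root.crt -subj "/CN=Demo Root CA"

openssl req -new -nodes -newkey rsa:2048 -keyout int.key \
  -out int.csr -subj "/CN=Demo Intermediate CA"

# Mark the intermediate as a CA so it can sign server certificates.
printf "basicConstraints=critical,CA:true\n" > int.ext
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -days 512 -out int.crt -extfile int.ext
```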
firewall Premium Deploy Certificates Enterprise Ca https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-deploy-certificates-enterprise-ca.md
Title: Deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview
-description: Learn how to deploy and configure Enterprise CA certificates for Azure Firewall Premium Preview.
+ Title: Deploy and configure Enterprise CA certificates for Azure Firewall Premium
+description: Learn how to deploy and configure Enterprise CA certificates for Azure Firewall Premium.
Previously updated : 03/18/2021 Last updated : 07/15/2021
-# Deploy and configure Enterprise CA certificates for Azure Firewall Preview
+# Deploy and configure Enterprise CA certificates for Azure Firewall
-> [!IMPORTANT]
-> Azure Firewall Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+Azure Firewall Premium includes a TLS inspection feature, which requires a certificate authentication chain. For production deployments, you should use an Enterprise PKI to generate the certificates that you use with Azure Firewall Premium. Use this article to create and manage an Intermediate CA certificate for Azure Firewall Premium.
-Azure Firewall Premium Preview includes a TLS inspection feature, which requires a certificate authentication chain. For production deployments, you should use an Enterprise PKI to generate the certificates that you use with Azure Firewall Premium. Use this article to create and manage an Intermediate CA certificate for Azure Firewall Premium Preview.
-
-For more information about certificates used by Azure Firewall Premium Preview, see [Azure Firewall Premium Preview certificates](premium-certificates.md).
+For more information about certificates used by Azure Firewall Premium, see [Azure Firewall Premium certificates](premium-certificates.md).
## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-To use an Enterprise CA to generate a certificate to use with Azure Firewall Premium Preview, you must have the following resources:
+To use an Enterprise CA to generate a certificate to use with Azure Firewall Premium, you must have the following resources:
- an Active Directory Forest - an Active Directory Certification Services Root CA with Web Enrollment enabled
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
1. In the Azure portal, navigate to the Certificates page of your Key Vault, and select **Generate/Import**. 1. Select **Import** as the method of creation, name the certificate, select the exported .pfx file, enter the password, and then select **Create**.
-1. Navigate to the **TLS Inspection (preview)** page of your Firewall policy and select your Managed identity, Key Vault, and certificate.
+1. Navigate to the **TLS Inspection** page of your Firewall policy and select your Managed identity, Key Vault, and certificate.
1. Select **Save**. :::image type="content" source="media/premium-deploy-certificates-enterprise-ca/tls-inspection.png" alt-text="TLS inspection":::
To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre
## Next steps
-[Azure Firewall Premium Preview in the Azure portal](premium-portal.md)
+[Azure Firewall Premium in the Azure portal](premium-portal.md)
firewall Premium Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-deploy.md
Title: Deploy and configure Azure Firewall Premium Preview
+ Title: Deploy and configure Azure Firewall Premium
description: Learn how to deploy and configure Azure Firewall Premium. Previously updated : 05/27/2021 Last updated : 07/15/2021
-# Deploy and configure Azure Firewall Premium Preview
+# Deploy and configure Azure Firewall Premium
-> [!IMPORTANT]
-> Azure Firewall Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Azure Firewall Premium Preview is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. It includes the following features:
+ Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. It includes the following features:
- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination. - **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.
To collect firewall logs, you need to add diagnostics settings to collect firewa
### IDPS tests
-To test IDPS, you'll need to deploy your own internal Web server with an appropriate server certificate. For more information about Azure Firewall Premium Preview certificate requirements, see [Azure Firewall Premium Preview certificates](premium-certificates.md).
+To test IDPS, you'll need to deploy your own internal Web server with an appropriate server certificate. For more information about Azure Firewall Premium certificate requirements, see [Azure Firewall Premium certificates](premium-certificates.md).
You can use `curl` to control various HTTP headers and simulate malicious traffic.
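For instance, curl's `-A` flag sets the User-Agent header. The snippet below is a generic illustration against a throwaway local server (not the article's exact test traffic); the port and agent string are arbitrary:

```shell
# Generic illustration: control the User-Agent header with curl -A, the
# kind of header manipulation used to simulate suspicious traffic. A local
# python http.server stands in for a test web server.
python3 -m http.server 8099 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
code=$(curl -s -o /dev/null -w '%{http_code}' \
  -A 'Test-Agent/1.0' http://127.0.0.1:8099/)
kill "$srv" 2>/dev/null
echo "$code"
```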
You can use `curl` to control various HTTP headers and simulate malicious traffi
> It can take some time for the data to begin showing in the logs. Give it at least 20 minutes to allow for the logs to begin showing the data. 5. Add a signature rule for signature 2008983:
- 1. Select the **DemoFirewallPolicy** and under **Settings** select **IDPS(preview)**.
+ 1. Select the **DemoFirewallPolicy** and under **Settings** select **IDPS**.
1. Select the **Signature rules** tab. 1. Under **Signature ID**, in the open text box type *2008983*. 1. Under **Mode**, select **Deny**.
You should see the same results that you had with the HTTP tests.
Use the following steps to test TLS Inspection with URL filtering.
-1. Edit the firewall policy application rules and add a new rule called `AllowURL` to the `AllowWeb` rule collection. Configure the target URL `www.nytimes.com/section/world`, Source IP address **\***, Destination type **URL (preview)**, select **TLS inspection (preview)**, and protocols **http, https**.
+1. Edit the firewall policy application rules and add a new rule called `AllowURL` to the `AllowWeb` rule collection. Configure the target URL `www.nytimes.com/section/world`, Source IP address **\***, Destination type **URL**, select **TLS inspection**, and protocols **http, https**.
3. When the deployment completes, open a browser on WorkerVM and go to `https://www.nytimes.com/section/world` and validate that the HTML response is displayed as expected in the browser. 4. In the Azure portal, you can view the entire URL in the Application rule Monitoring logs:
Let's create an application rule to allow access to sports web sites.
1. From the portal, open your resource group and select **DemoFirewallPolicy**. 2. Select **Application Rules**, and then **Add a rule collection**. 3. For **Name**, type *GeneralWeb*, **Priority** *103*, **Rule collection group** select **DefaultApplicationRuleCollectionGroup**.
-4. Under **Rules** for **Name** type *AllowSports*, **Source** *\**, **Protocol** *http, https*, select **TLS inspection**, **Destination Type** select *Web categories (preview)*, **Destination** select *Sports*.
+4. Under **Rules** for **Name** type *AllowSports*, **Source** *\**, **Protocol** *http, https*, select **TLS inspection**, **Destination Type** select *Web categories*, **Destination** select *Sports*.
5. Select **Add**. :::image type="content" source="media/premium-deploy/web-categories.png" alt-text="Sports web category":::
Let's create an application rule to allow access to sports web sites.
## Next steps -- [Azure Firewall Premium Preview in the Azure portal](premium-portal.md)
+- [Azure Firewall Premium in the Azure portal](premium-portal.md)
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-features.md
Title: Azure Firewall Premium Preview features
+ Title: Azure Firewall Premium features
description: Azure Firewall Premium is a managed, cloud-based network security service that protects your Azure Virtual Network resources.
Previously updated : 06/22/2021
Last updated : 07/19/2021
-# Azure Firewall Premium Preview features
+# Azure Firewall Premium features
:::image type="content" source="media/premium-features/icsa-cert-firewall-small.png" alt-text="ICSA certification logo" border="false":::
:::image type="content" source="media/premium-features/pci-logo.png" alt-text="PCI certification logo" border="false":::
-> [!IMPORTANT]
-> Azure Firewall Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
- Azure Firewall Premium Preview is a next generation firewall with capabilities that are required for highly sensitive and regulated environments.
+ Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments.
:::image type="content" source="media/premium-features/premium-overview.png" alt-text="Azure Firewall Premium overview diagram":::
-Azure Firewall Premium Preview uses Firewall Policy, a global resource that can be used to centrally manage your firewalls using Azure Firewall Manager. Starting this release, all new features are configurable via Firewall Policy only. Firewall Rules (classic) continue to be supported and can be used to configure existing Standard Firewall features. Firewall Policy can be managed independently or with Azure Firewall Manager. A firewall policy associated with a single firewall has no charge.
+Azure Firewall Premium uses Firewall Policy, a global resource that can be used to centrally manage your firewalls using Azure Firewall Manager. Starting this release, all new features are configurable via Firewall Policy only. Firewall Rules (classic) continue to be supported and can be used to configure existing Standard Firewall features. Firewall Policy can be managed independently or with Azure Firewall Manager. A firewall policy associated with a single firewall has no charge.
> [!IMPORTANT]
> Currently the Firewall Premium SKU is not supported in Secure Hub deployments and forced tunnel configurations.
-Azure Firewall Premium Preview includes the following features:
+Azure Firewall Premium includes the following features:
- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
- **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.
Azure Firewall Premium terminates outbound and east-west TLS connections. Inboun
> [!TIP]
> TLS 1.0 and 1.1 are being deprecated and won't be supported. TLS 1.0 and 1.1 versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable, and while they still currently work to allow backwards compatibility, they aren't recommended. Migrate to TLS 1.2 as soon as possible.
-To learn more about Azure Firewall Premium Preview Intermediate CA certificate requirements, see [Azure Firewall Premium Preview certificates](premium-certificates.md).
+To learn more about Azure Firewall Premium Intermediate CA certificate requirements, see [Azure Firewall Premium certificates](premium-certificates.md).
## IDPS

A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium Preview provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they are fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic.
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network level traffic (Layers 4-7), they are fully managed, and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic.
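Conceptually, signature-based detection scans traffic for known byte patterns. A toy Python sketch follows; the signature IDs and patterns are invented for illustration and have nothing to do with the managed ruleset:

```python
# Toy illustration of signature-based detection: scan a payload for known
# byte patterns. The IDs and patterns below are invented for this sketch and
# are not part of Azure Firewall's managed ruleset.
SIGNATURES = {
    2024897: b"\x4d\x5a\x90\x00",   # hypothetical "suspicious executable" pattern
    2008983: b"cmd.exe /c",         # hypothetical "remote shell command" pattern
}

def match_signatures(payload: bytes) -> list:
    """Return the IDs of every signature whose pattern occurs in the payload."""
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

print(match_signatures(b"GET / HTTP/1.1\r\n"))         # no matches
print(match_signatures(b"... cmd.exe /c whoami ..."))  # matches 2008983
```

A real engine applies many thousands of such signatures across Layers 4-7; this only illustrates the matching idea.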
The Azure Firewall signatures/rulesets include:

- An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
IDPS allows you to detect attacks in all ports and protocols for non-encrypted traffic. However, when HTTPS traffic needs to be inspected, Azure Firewall can use its TLS inspection capability to decrypt the traffic and better detect malicious activities.
-The IDPS Bypass List allows you to not filter traffic to any of the IP addresses, ranges, and subnets specified in the bypass list.
+The IDPS Bypass List allows you to not filter traffic to any of the IP addresses, ranges, and subnets specified in the bypass list.
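The bypass semantics amount to a membership test against the configured networks. A minimal sketch, assuming a simple list of networks and a hypothetical `is_bypassed` helper:

```python
import ipaddress

# Hypothetical sketch of a bypass list: traffic to or from any address inside
# these networks skips IDPS inspection. Entries are invented for the example.
BYPASS_LIST = [
    ipaddress.ip_network("10.0.0.0/24"),
    ipaddress.ip_network("192.168.1.10/32"),
]

def is_bypassed(ip: str) -> bool:
    """True when the address falls inside any bypass entry."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BYPASS_LIST)

print(is_bypassed("10.0.0.42"))    # True: inside 10.0.0.0/24, skips IDPS
print(is_bypassed("203.0.113.7"))  # False: inspected as usual
```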
+
+You can also use signature rules when the IDPS mode is set to **Alert**, but you want to block one or more specific signatures, including their associated traffic. In this case, you can add new signature rules by setting the mode of those signatures to **Deny**.
+ ## URL filtering

URL filtering extends Azure Firewall's FQDN filtering capability to consider an entire URL. For example, `www.contoso.com/a/c` instead of `www.contoso.com`.
-URL Filtering can be applied both on HTTP and HTTPS traffic. When HTTPS traffic is inspected, Azure Firewall Premium Preview can use its TLS inspection capability to decrypt the traffic and extract the target URL to validate whether access is permitted. TLS inspection requires opt-in at the application rule level. Once enabled, you can use URLs for filtering with HTTPS.
+URL Filtering can be applied both on HTTP and HTTPS traffic. When HTTPS traffic is inspected, Azure Firewall Premium can use its TLS inspection capability to decrypt the traffic and extract the target URL to validate whether access is permitted. TLS inspection requires opt-in at the application rule level. Once enabled, you can use URLs for filtering with HTTPS.
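The difference between FQDN filtering and URL filtering can be illustrated with a small sketch (hypothetical matching logic, not the firewall's implementation):

```python
from urllib.parse import urlsplit

# Illustrative matching logic only: FQDN filtering compares the host alone,
# while URL filtering also considers the path. Rule shapes are hypothetical.
def fqdn_match(url: str, allowed_fqdn: str) -> bool:
    return urlsplit(url).hostname == allowed_fqdn

def url_match(url: str, allowed_url: str) -> bool:
    parts = urlsplit(url)
    return (parts.hostname + parts.path).rstrip("/") == allowed_url.rstrip("/")

request = "https://www.contoso.com/a/c"
print(fqdn_match(request, "www.contoso.com"))     # True: host matches
print(url_match(request, "www.contoso.com/a/c"))  # True: full URL matches
print(url_match(request, "www.contoso.com/a"))    # False: path differs
```

Note that for HTTPS the path is only visible to the firewall after TLS inspection decrypts the request, which is why the opt-in is required.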
## Web categories
-Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories will also be included in Azure Firewall Standard, but it will be more fine-tuned in Azure Firewall Premium Preview. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic.
+Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. Web categories will also be included in Azure Firewall Standard, but it will be more fine-tuned in Azure Firewall Premium. As opposed to the Web categories capability in the Standard SKU that matches the category based on an FQDN, the Premium SKU matches the category according to the entire URL for both HTTP and HTTPS traffic.
For example, if Azure Firewall intercepts an HTTPS request for `www.google.com/news`, the following categorization is expected:
- Firewall Premium – the complete URL will be examined, so `www.google.com/news` will be categorized as *News*.
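A toy lookup illustrates the Standard-versus-Premium difference described above (the category tables below are invented for the example):

```python
# Toy category tables, invented for this example: Standard keys on FQDN,
# Premium keys on the full URL and falls back to the FQDN entry.
FQDN_CATEGORIES = {"www.google.com": "Search Engines"}
URL_CATEGORIES = {"www.google.com/news": "News"}

def categorize_standard(fqdn: str) -> str:
    return FQDN_CATEGORIES.get(fqdn, "Uncategorized")

def categorize_premium(full_url: str) -> str:
    if full_url in URL_CATEGORIES:
        return URL_CATEGORIES[full_url]
    return categorize_standard(full_url.split("/", 1)[0])

print(categorize_standard("www.google.com"))      # Search Engines
print(categorize_premium("www.google.com/news"))  # News
```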
-The categories are organized based on severity under **Liability**, **High-Bandwidth**, **Business Use**, **Productivity Loss**, **General Surfing**, and **Uncategorized**.
+The categories are organized based on severity under **Liability**, **High-Bandwidth**, **Business Use**, **Productivity Loss**, **General Surfing**, and **Uncategorized**. For a detailed description of the web categories, see [Azure Firewall web categories](web-categories.md).
+
+### Web category logging
+You can view traffic that has been filtered by **Web categories** in the Application logs. The **Web categories** field is only displayed if it has been explicitly configured in your firewall policy application rules. For example, if you do not have a rule that explicitly denies *Search Engines*, and a user requests to go to `www.bing.com`, only a default deny message is displayed instead of a Web categories message, because the web category was not explicitly configured.
### Category exceptions

You can create exceptions to your web category rules. Create a separate allow or deny rule collection with a higher priority within the rule collection group. For example, you can configure a rule collection that allows `www.linkedin.com` with priority 100, with a rule collection that denies **Social networking** with priority 200. This creates the exception for the pre-defined **Social networking** web category.
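The priority-based exception pattern can be sketched as follows (rule shapes and names are illustrative, not the Firewall Policy API):

```python
# Illustrative model of the exception pattern above: rule collections are
# evaluated by priority (lower number first); shapes are invented,
# not the Firewall Policy API.
def category_of(url: str) -> str:
    social = ("www.linkedin.com", "www.facebook.com")
    return "Social networking" if url.startswith(social) else "General"

rule_collections = [
    {"priority": 200, "action": "Deny",
     "match": lambda url: category_of(url) == "Social networking"},
    {"priority": 100, "action": "Allow",
     "match": lambda url: url.startswith("www.linkedin.com")},
]

def evaluate(url: str) -> str:
    for rule in sorted(rule_collections, key=lambda r: r["priority"]):
        if rule["match"](url):
            return rule["action"]
    return "Deny"  # implicit default

print(evaluate("www.linkedin.com/feed"))  # Allow: the priority-100 exception wins
print(evaluate("www.facebook.com"))       # Deny: the Social networking rule applies
```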
-### Categorization change
-
-You can request a categorization change if you:
-
-- think an FQDN or URL should be under a different category
-
-or
-
-- have a suggested category for an uncategorized FQDN or URL
-
-You're welcome to submit a request at [https://aka.ms/azfw-webcategories-request](https://aka.ms/azfw-webcategories-request).

## Supported regions

-Azure Firewall Premium Preview is supported in the following regions:
+Azure Firewall Premium is supported in the following regions:
- Australia Central (Public / Australia)
- Australia Central 2 (Public / Australia)
- Brazil South (Public / Brazil)
- Canada Central (Public / Canada)
- Canada East (Public / Canada)
+- Central India (Public / India)
- Central US (Public / United States)
- Central US EUAP (Public / Canary (US))
- East Asia (Public / Asia Pacific)
- East US 2 (Public / United States)
- France Central (Public / France)
- France South (Public / France)
+- Germany West Central (Public / Germany)
- Japan East (Public / Japan)
- Japan West (Public / Japan)
- Korea Central (Public / Korea)
- Korea South (Public / Korea)
- North Central US (Public / United States)
- North Europe (Public / Europe)
+- Norway East (Public / Norway)
- South Africa North (Public / South Africa)
- South Central US (Public / United States)
+- South India (Public / India)
- Southeast Asia (Public / Asia Pacific)
+- Switzerland North (Public / Switzerland)
- UAE Central (Public / UAE)
+- UAE North (Public / UAE)
- UK South (Public / United Kingdom)
- UK West (Public / United Kingdom)
- West Central US (Public / United States)
- West India (Public / India)
- West US (Public / United States)
- West US 2 (Public / United States)
+- West US 3 (Public / United States)
## Known issues
-Azure Firewall Premium Preview has the following known issues:
+Azure Firewall Premium has the following known issues:
+
|Issue |Description |Mitigation |
|---|---|---|
-|TLS Inspection supported only on the HTTPS standard port|TLS Inspection supports only HTTPS/443. |None. Other ports will be supported in GA.|
|ESNI support for FQDN resolution in HTTPS|Encrypted SNI isn't supported in HTTPS handshake.|Today only Firefox supports ESNI through custom configuration. Suggested workaround is to disable this feature.|
|Client Certificates (TLS)|Client certificates are used to build a mutual identity trust between the client and the server. Client certificates are used during a TLS negotiation. Azure Firewall renegotiates a connection with the server and has no access to the private key of the client certificates.|None|
|QUIC/HTTP3|QUIC is the new major version of HTTP. It's a UDP-based protocol over 80 (PLAIN) and 443 (SSL). FQDN/URL/TLS inspection won't be supported.|Configure passing UDP 80/443 as network rules.|
-Untrusted customer signed certificates|Customer signed certificates are not trusted by the firewall once received from an intranet-based web server.|Fix scheduled for GA.
-|Wrong source and destination IP addresses in Alerts for IDPS with TLS inspection.|When you enable TLS inspection and IDPS issues a new alert, the displayed source/destination IP address is wrong (the internal IP address is displayed instead of the original IP address).|Fix scheduled for GA.|
-|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, and IDPS issues a new alert, and the destination is public an IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|Fix scheduled for GA.|
-|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|Fix scheduled for GA.|
-|IDPS Bypass|IDPS Bypass doesn't work for TLS terminated traffic, and Source IP address and Source IP Groups aren't supported.|Fix scheduled for GA.|
+|Untrusted customer signed certificates|Customer signed certificates are not trusted by the firewall once received from an intranet-based web server.|A fix is being investigated.|
+|Wrong source IP address in Alerts with IDPS for HTTP (without TLS inspection).|When plain text HTTP traffic is in use, IDPS issues a new alert, and the destination is a public IP address, the displayed source IP address is wrong (the internal IP address is displayed instead of the original IP address).|A fix is being investigated.|
+|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.|
|TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.|
-|KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|Fix scheduled for GA.|
-|IP Groups support|Azure Firewall Premium Preview does not support IP Groups.|Fix scheduled for GA.|
-
+|KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
## Next steps

- [Learn about Azure Firewall Premium certificates](premium-certificates.md)
-- [Deploy and configure Azure Firewall Premium Preview](premium-deploy.md)
-- [Migrate to Azure Firewall Premium Preview](premium-migrate.md)
+- [Deploy and configure Azure Firewall Premium](premium-deploy.md)
+- [Migrate to Azure Firewall Premium](premium-migrate.md)
firewall Premium Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-migrate.md
Title: Migrate to Azure Firewall Premium Preview
-description: Learn how to migrate from Azure Firewall Standard to Azure Firewall Premium Preview.
+ Title: Migrate to Azure Firewall Premium
+description: Learn how to migrate from Azure Firewall Standard to Azure Firewall Premium.
Previously updated : 02/16/2021
Last updated : 07/15/2021
-# Migrate to Azure Firewall Premium Preview
+# Migrate to Azure Firewall Premium
-You can migrate Azure Firewall Standard to Azure Firewall Premium Preview to take advantage of the new Premium capabilities. For more information about Azure Firewall Premium Preview features, see [Azure Firewall Premium Preview features](premium-features.md).
+You can migrate Azure Firewall Standard to Azure Firewall Premium to take advantage of the new Premium capabilities. For more information about Azure Firewall Premium features, see [Azure Firewall Premium features](premium-features.md).
The following two examples show how to:

- Migrate an existing standard policy using Azure PowerShell
This example shows how to use the Azure portal to migrate a standard firewall (c
1. Select **Review + Create**.
1. Select **Create**.
-When the deployment completes, you can now configure all the new Azure Firewall Premium Preview features.
+When the deployment completes, you can now configure all the new Azure Firewall Premium features.
## Next steps
firewall Premium Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/premium-portal.md
Title: Azure Firewall Premium Preview in the Azure portal
-description: Learn about Azure Firewall Premium Preview in the Azure portal.
+ Title: Azure Firewall Premium in the Azure portal
+description: Learn about Azure Firewall Premium in the Azure portal.
Previously updated : 02/16/2021
Last updated : 07/15/2021
-# Azure Firewall Premium Preview in the Azure portal
+# Azure Firewall Premium in the Azure portal
-> [!IMPORTANT]
-> Azure Firewall Premium is currently in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Azure Firewall Premium Preview is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. It includes the following features:
+ Azure Firewall Premium is a next generation firewall with capabilities that are required for highly sensitive and regulated environments. It includes the following features:
- **TLS inspection** - decrypts outbound traffic, processes the data, then encrypts the data and sends it to the destination.
- **IDPS** - A network intrusion detection and prevention system (IDPS) allows you to monitor network activities for malicious activity, log information about this activity, report it, and optionally attempt to block it.
For more information, see [Azure Firewall Premium features](premium-features.md)
## Deploy the firewall
-Deploying an Azure Firewall Premium Preview is similar to deploying a standard Azure Firewall:
+Deploying an Azure Firewall Premium is similar to deploying a standard Azure Firewall:
:::image type="content" source="media/premium-portal/premium-portal.png" alt-text="portal deployment":::
-For **Firewall tier**, you select **Premium (preview)** and for **Firewall policy**, you select an existing Premium policy or create a new one.
+For **Firewall tier**, you select **Premium** and for **Firewall policy**, you select an existing Premium policy or create a new one.
## Configure the Premium policy
When you configure application rules in a Premium policy, you can configure addi
## Next steps
-To see the Azure Firewall Premium Preview features in action, see [Deploy and configure Azure Firewall Premium Preview](premium-deploy.md).
+To see the Azure Firewall Premium features in action, see [Deploy and configure Azure Firewall Premium](premium-deploy.md).
firewall Web Categories https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/web-categories.md
+
+ Title: Azure Firewall web categories
+description: Learn about Azure Firewall web categories and their descriptions.
+Last updated : 07/19/2021
+# Azure Firewall web categories
+
+Web categories lets administrators allow or deny user access to web site categories such as gambling websites, social media websites, and others. The categories are organized based on severity under Liability, High-Bandwidth, Business use, Productivity loss, General surfing, and Uncategorized.
+
+## Liability
++
+|Category |Description |
+|---|---|
+|Alcohol + tobacco |Sites that contain, promote, or sell alcohol- or tobacco-related products or services.|
+|Child abuse images |Sites that present or discuss children in abusive or sexual acts.|
+|Child inappropriate |Sites that are unsuitable for children, which may contain R-rated or tasteless content, profanity, or adult material.|
+|Criminal activity|Sites that promote or advise on how to commit illegal or criminal activity, or to avoid detection for such activity. Criminal activity includes murder, building bombs, illegal manipulation of electronic devices, hacking, fraud, and illegal distribution of software.|
+|Dating + personals |Sites that promote networking for relationships such as dating and marriage, including matchmaking, online dating, and spousal introduction.|
+|Gambling |Sites that offer or relate to online gambling, lottery, betting agencies involving chance, and casinos.|
+|Hacking |Sites that promote or advise on how to get unauthorized access to proprietary computer systems, for stealing information, perpetrating fraud, creating viruses, or committing other illegal activity related to theft of digital information.|
+|Hate + intolerance|Sites that promote a supremacist political agenda, encouraging oppression of people or groups of people based on their race, religion, gender, age, disability, sexual orientation, or nationality.|
+|Illegal drug |Sites with information on the purchase, manufacture, and use of illegal or recreational drugs, and misuse of prescription drugs and other compounds.|
+|Illegal software |Sites that illegally distribute software or copyrighted materials such as movies, music, software cracks, illicit serial numbers, illegal license key generators.|
+|Lingerie + swimsuits|Sites that offer images of models in suggestive costume, with semi-nudity permitted. Includes sites offering lingerie or swimwear.|
+|Marijuana |Sites that contain information, discussions, or sale of marijuana and associated products or services, including legalizing marijuana and/or using marijuana for medicinal purposes.|
+|Nudity | Sites that contain full or partial nudity that are not necessarily overtly sexual in intent.|
+|Pornography/sexually explicit |Sites that contain explicit sexual content. Includes adult products such as sex toys, CD-ROMs, and videos, adult services such as videoconferencing, escort services, and strip clubs, erotic stories, and textual descriptions of sexual acts. |
+|School cheating | Sites that promote unethical practices such as cheating or plagiarism by providing test answers, written essays, research papers, or term papers. |
+|Self-harm |Sites that promote actions that relate to harming oneself, such as suicide, anorexia, bulimia, etc. |
+|Sex education | Sites relating to sex education, including subjects such as respect for partner, abortion, contraceptives, sexually transmitted diseases, and pregnancy. |
+|Tasteless |Sites with offensive or tasteless content, including profanity. |
+|Violence | Sites that contain images or text depicting or advocating physical assault against humans, animals, or institutions. Sites of gruesome nature. |
+|Weapons |Sites that depict, sell, review, or describe guns and weapons, including for sport. |
++
+## High bandwidth
+
+|Category |Description |
+|---|---|
+|Image sharing | Sites that host digital photographs and images, online photo albums, and digital photo exchanges. |
+|Peer-to-peer | Sites that enable direct exchange of files between users without dependence on a central server. |
+|Streaming media + downloads | Sites that deliver streaming content, such as Internet radio, Internet TV or MP3 and live or archived media download sites. Includes fan sites, or official sites run by musicians, bands, or record labels. |
+|Download sites | Sites that contain downloadable software, whether shareware, freeware, or for a charge. |
+|Entertainment | Sites containing programming guides to television, movies, music and video (including video on demand), celebrity sites, and entertainment news. |
+| | |
+
+## Business use
+
+|Category |Description |
+|---|---|
+|Business | Sites that provide business related information such as corporate web sites. Information, services, or products that help businesses of all sizes to do their day-to-day commercial activities. |
+|Computers + technology |Sites that contain information such as product reviews, discussions, and news about computers, software, hardware, peripheral, and computer services. |
+|Education | Sites sponsored by educational institutions and schools of all types including distance education. Includes general educational and reference materials such as dictionaries, encyclopedias, online courses, teaching aids and discussion guides. |
+|Finance | Sites related to banking, finance, payment or investment, including banks, brokerages, online stock trading, stock quotes, fund management, insurance companies, credit unions, credit card companies, and so on. |
+|Forums + newsgroups | Sites for sharing information in the form of newsgroups, forums, bulletin boards. Does not include personal blogs. |
+|Government | Sites run by governmental or military organizations, departments, or agencies, including police departments, fire departments, customs bureaus, emergency services, civil defense, and counterterrorism organizations. |
+|Health + medicine | Sites containing information pertaining to health, healthcare services, fitness and well-being, including information about medical equipment, hospitals, drugstores, nursing, medicine, procedures, prescription medications, etc. |
+|Information security | Sites that provide legitimate information about data protection, including newly discovered vulnerabilities and how to block them. |
+|Job search | Sites containing job listings, career information, assistance with job searches (such as resume writing, interviewing tips, etc.), employment agencies or head hunters. |
+|News | Sites covering news and current events such as newspapers, newswire services, personalized news services, broadcasting sites, and magazines. |
+|Non-profits + NGOs | Sites devoted to clubs, communities, unions, and non-profit organizations. Many of these groups exist for educational or charitable purposes. |
+|Personal sites | Sites about or hosted by personal individuals, including those hosted on commercial sites such as Blogger, AOL, etc. |
+|Private IP addresses | Sites that are private IP addresses as defined in RFC 1918, that is, hosts that do not require access to hosts in other enterprises (or require limited access) and whose IP address may be ambiguous between enterprises but are well-defined within a certain enterprise. |
+|Professional networking | Sites that enable professional networking for online communities. |
+|Search engines + portals |Sites enabling the searching of the Web, newsgroups, images, directories, and other online content. Includes portal and directory sites such as white/yellow pages. |
+|Translators | Sites that translate Web pages or phrases from one language to another. These sites bypass the proxy server, presenting the risk that unauthorized content may be accessed, similar to using an anonymizer. |
+|File repository | Web pages including collections of shareware, freeware, open source, and other software downloads. |
+|Web-based email | Sites that enable users to send and receive email through a web accessible email account. |
+| | |
++
+## Productivity loss
+
+|Category |Description |
+|---|---|
+|Advertisements + pop-ups | Sites that provide advertising graphics or other ad content files that appear on Web pages. |
+|Chat | Sites that enable web-based exchange of real-time messages through chat services or chat rooms. |
+|Cults | Sites relating to non-traditional religious practice typically known as "cults," that is, considered to be false, unorthodox, extremist, or coercive, with members often living under the direction of a charismatic leader. |
+|Games | Sites relating to computer or other games, information about game producers, or how to obtain cheat codes. Game-related publication sites. |
+|Instant messaging | Sites that enable logging in to instant messaging services such as ICQ, AOL Instant Messenger, IRC, MSN, Jabber, Yahoo Messenger, and the like. |
+|Shopping | Sites for online shopping, catalogs, online ordering, auctions, classified ads. Excludes shopping for products and services exclusively covered by another category such as health & medicine. |
+|Social networking | Sites that enable social networking for online communities of various topics, for friendship, or/and dating. |
+| | |
+
+## General surfing
+
+|Category |Description |
+|---|---|
+|Arts | Sites with artistic content or relating to artistic institutions such as theaters, museums, galleries, dance companies, photography, and digital graphic resources. |
+|Fashion + Beauty | Sites concerning fashion, jewelry, glamour, beauty, modeling, cosmetics, or related products or services. Includes product reviews, comparisons, and general consumer information. |
+|General | Sites that do not clearly fall into other categories, for example, blank web pages. |
+|Greeting cards |Sites that allow people to send and receive greeting cards and postcards. |
+|Leisure + recreation | Sites relating to recreational activities and hobbies including zoos, public recreation centers, pools, amusement parks, and hobbies such as gardening, literature, arts & crafts, home improvement, home décor, family, etc. |
+|Nature + conservation | Sites with information related to environmental issues, sustainable living, ecology, nature, and the environment. |
+|Politics | Sites that promote political parties or political advocacy, or provide information about political parties, interest groups, elections, legislation, or lobbying. Also includes sites that offer legal information and advice. |
+|Real estate |Sites relating to commercial or residential real estate services, including renting, purchasing, selling or financing homes, offices, etc. |
+|Religion | Sites that deal with faith, human spirituality or religious beliefs, including sites of churches, synagogues, mosques, and other houses of worship. |
+|Restaurants + dining |Sites that list, review, promote or advertise food, dining or catering services. Includes recipe sites, cooking instruction and tips, food products, and wine advisors. |
+|Sports | Sites relating to sports teams, fan clubs, scores, and sports news. Relates to all sports, whether professional or recreational. |
+|Transportation | Sites that include information about motor vehicles such as cars, motorcycles, boats, trucks, RVs and the like, including online purchase sites. Includes manufacturer sites, dealerships, review sites, pricing, enthusiast's clubs, and public transportation etc. |
+|Travel | Sites that provide travel and tourism information or online booking or travel services such as airlines, accommodations, car rentals. Includes regional or city information sites. |
+|Uncategorized |Sites that have not been categorized, such as new websites, personal sites, and so on. |
+| | |
+
+## Next steps
+- [Quickstart: Create an Azure Firewall and a firewall policy - ARM template](../firewall-manager/quick-firewall-policy.md)
+- [Quickstart: Deploy Azure Firewall with Availability Zones - ARM template](deploy-template.md)
+- [Tutorial: Deploy and configure Azure Firewall using the Azure portal](tutorial-firewall-deploy-portal.md)
+
industrial-iot Industrial Iot Platform Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/industrial-iot-platform-versions.md
Last updated 03/08/2021
# Azure Industrial IoT Platform v2.8.0 LTS
-We are pleased to announce the declaration of Long-Term Support (LTS) for version 2.8.0. While we continue to develop and release updates to our ongoing projects on GitHub, we now also offer a branch that will only get critical bug fixes and security updates starting in July 2021. Customers can rely upon a longer-term support lifecycle for these LTS builds, providing stability and assurance for the planning on longer time horizons our customers require. The LTS branch offers customers a guarantee that they will benefit from any necessary security or critical bug fixes with minimal impact to their deployments and module interactions. At the same time, customers can access the latest updates in the main branch to keep pace with the latest developments and fastest cycle time for product updates.
+We are pleased to announce the declaration of Long-Term Support (LTS) for version 2.8.0. While we continue to develop and release updates to our ongoing projects on GitHub, we now also offer a branch that will only get critical bug fixes and security updates starting in July 2021. Customers can rely upon a longer-term support lifecycle for these LTS builds, providing stability and assurance for planning on the longer time horizons our customers require. The LTS branch offers customers a guarantee that they will benefit from any necessary security or critical bug fixes with minimal impact to their deployments and module interactions. At the same time, customers can access the latest updates in the [main branch](https://github.com/Azure/Industrial-IoT) to keep pace with the latest developments and fastest cycle time for product updates.
## Version history
internet-analyzer Internet Analyzer Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-cli.md
Title: 'Create an Internet Analyzer test using CLI | Microsoft Docs' description: In this article, learn how to create your first Internet Analyzer test by using Azure CLI. -+ Last updated 10/16/2019-+ # Customer intent: As someone interested in migrating to Azure/ AFD/ CDN, I want to set up an Internet Analyzer test to understand the expected performance impact to my end users.
internet-analyzer Internet Analyzer Create Test Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-create-test-portal.md
Title: 'Create an Internet Analyzer test using Portal | Microsoft Docs' description: In this article, learn how to create your first Internet Analyzer test. -+ Last updated 10/16/2019-+ ## Customer intent: As someone interested in migrating to Azure/ AFD/ CDN, I want to set up an Internet Analyzer test to understand the expected performance impact to my end users.
internet-analyzer Internet Analyzer Custom Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-custom-endpoint.md
Title: 'Create a Custom Endpoint | Microsoft Docs' description: In this article, learn how to configure a custom endpoint to measure with your Internet Analyzer resource. -+ Last updated 10/16/2019-+ # Customer intent: (1) As someone interested in migrating to Azure from on-prem/ other cloud, I want to configure a custom endpoint to measure. (2) As someone interested in comparing my custom Azure configuration to on-prem/other cloud/ Azure, I want to configure a custom endpoint to measure.
internet-analyzer Internet Analyzer Embed Client https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-embed-client.md
Title: 'Embed Internet Analyzer Client | Microsoft Docs' description: In this article, learn how to embed the Internet Analyzer JavaScript client in your application. -+ Last updated 10/16/2019-+ # Customer intent: As someone interested in creating an Internet Analyzer resource, I want to learn how to install the JavaScript client, which is necessary to run tests.
internet-analyzer Internet Analyzer Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-faq.md
Title: 'Internet Analyzer FAQ | Microsoft Docs' description: The FAQ for Azure Internet Analyzer. -+ Last updated 10/16/2019-+ # Azure Internet Analyzer FAQ (Preview)
internet-analyzer Internet Analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-overview.md
Title: 'Azure Internet Analyzer | Microsoft Docs' description: Learn about Azure Internet Analyzer -+ # Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Internet analyzer so that I can test app and content delivery architectures in Azure. Last updated 10/16/2019-+ # What is Internet Analyzer? (Preview)
internet-analyzer Internet Analyzer Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/internet-analyzer/internet-analyzer-scorecard.md
Title: 'Interpreting your Scorecard | Microsoft Docs' description: Learn how to interpret your scorecard. The scorecard tab contains the aggregated and analyzed results from your tests. -+ Last updated 10/16/2019-+ # Interpreting your scorecard
iot-dps Concepts Device Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/concepts-device-reprovision.md
The following table shows the API versions before the availability of native rep
| REST API | C SDK | Python SDK | Node SDK | Java SDK | .NET SDK |
| -- | -- | -- | -- | -- | -- |
-| [2018-04-01 and earlier](/rest/api/iot-dps/createorupdateindividualenrollment/createorupdateindividualenrollment#uri-parameters) | [1.2.8 and earlier](https://github.com/Azure/azure-iot-sdk-c/blob/master/version.txt) | [1.4.2 and earlier](https://github.com/Azure/azure-iot-sdk-python/blob/0a549f21f7f4fc24bc036c1d2d5614e9544a9667/device/iothub_client_python/src/iothub_client_python.cpp#L53) | [1.7.3 or earlier](https://github.com/Azure/azure-iot-sdk-node/blob/074c1ac135aebb520d401b942acfad2d58fdc07f/common/core/package.json#L3) | [1.13.0 or earlier](https://github.com/Azure/azure-iot-sdk-java/blob/794c128000358b8ed1c4cecfbf21734dd6824de9/device/iot-device-client/pom.xml#L7) | [1.1.0 or earlier](https://github.com/Azure/azure-iot-sdk-csharp/blob/9f7269f4f61cff3536708cf3dc412a7316ed6236/provisioning/device/src/Microsoft.Azure.Devices.Provisioning.Client.csproj#L20)
+| [2018-04-01 and earlier](/rest/api/iot-dps/service/individual-enrollment/create-or-update#uri-parameters) | [1.2.8 and earlier](https://github.com/Azure/azure-iot-sdk-c/blob/master/version.txt) | [1.4.2 and earlier](https://github.com/Azure/azure-iot-sdk-python/blob/0a549f21f7f4fc24bc036c1d2d5614e9544a9667/device/iothub_client_python/src/iothub_client_python.cpp#L53) | [1.7.3 or earlier](https://github.com/Azure/azure-iot-sdk-node/blob/074c1ac135aebb520d401b942acfad2d58fdc07f/common/core/package.json#L3) | [1.13.0 or earlier](https://github.com/Azure/azure-iot-sdk-java/blob/794c128000358b8ed1c4cecfbf21734dd6824de9/device/iot-device-client/pom.xml#L7) | [1.1.0 or earlier](https://github.com/Azure/azure-iot-sdk-csharp/blob/9f7269f4f61cff3536708cf3dc412a7316ed6236/provisioning/device/src/Microsoft.Azure.Devices.Provisioning.Client.csproj#L20)
> [!NOTE]
> These values and links are likely to change. This is only a placeholder attempt to determine where the versions can be determined by a customer and what the expected versions will be.
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/iot-dps-mqtt-support.md
-
+ Title: Understand Azure IoT Device Provisioning Service MQTT support | Microsoft Docs description: Developer guide - support for devices connecting to the Azure IoT Device Provisioning Service (DPS) device-facing endpoint using the MQTT protocol.
To use the MQTT protocol directly, your client *must* connect over TLS 1.2. Atte
To register a device through DPS, a device should subscribe using `$dps/registrations/res/#` as a **Topic Filter**. The multi-level wildcard `#` in the Topic Filter is used only to allow the device to receive additional properties in the topic name. DPS does not allow the usage of the `#` or `?` wildcards for filtering of subtopics. Since DPS is not a general-purpose pub-sub messaging broker, it only supports the documented topic names and topic filters. The device should publish a register message to DPS using `$dps/registrations/PUT/iotdps-register/?$rid={request_id}` as a **Topic Name**. The payload should contain the [Device Registration](/rest/api/iot-dps/runtimeregistration/registerdevice#deviceregistration) object in JSON format.
-In a successful scenario, the device will receive a response on the `$dps/registrations/res/202/?$rid={request_id}&retry-after=x` topic name where x is the retry-after value in seconds. The payload of the response will contain the [RegistrationOperationStatus](/rest/api/iot-dps/runtimeregistration/registerdevice#registrationoperationstatus) object in JSON format.
+In a successful scenario, the device will receive a response on the `$dps/registrations/res/202/?$rid={request_id}&retry-after=x` topic name where x is the retry-after value in seconds. The payload of the response will contain the [RegistrationOperationStatus](/rest/api/iot-dps/device/runtime-registration/register-device#registrationoperationstatus) object in JSON format.
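As a minimal illustration of the topic and payload shapes described above (a sketch only: the `request_id` and `registration_id` values are placeholders, and the actual MQTT connection, TLS 1.2 setup, and authentication steps are omitted), the strings a device would use can be built like this:

```python
import json

def response_topic_filter() -> str:
    # Topic filter the device subscribes to before registering; the '#'
    # wildcard only lets the device receive extra properties in the topic name.
    return "$dps/registrations/res/#"

def register_topic(request_id: str) -> str:
    # Topic name the device publishes its registration request to.
    return f"$dps/registrations/PUT/iotdps-register/?$rid={request_id}"

def register_payload(registration_id: str) -> str:
    # Minimal Device Registration object in JSON format.
    return json.dumps({"registrationId": registration_id})

print(register_topic("1"))
print(register_payload("my-device-001"))
```

A real client would publish `register_payload(...)` to `register_topic(...)` after subscribing to the response topic filter, then parse the RegistrationOperationStatus JSON from the `…/res/202/…` response.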
## Polling for registration operation status
To learn more about the MQTT protocol, see the [MQTT documentation](https://mqtt
To further explore the capabilities of DPS, see:
-* [About IoT DPS](about-iot-dps.md)
+* [About IoT DPS](about-iot-dps.md)
iot-hub-device-update Understand Device Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/understand-device-update.md
Device Update for IoT Hub features provide a powerful and flexible experience, i
* At-a-glance update compliance and status views across heterogenous device fleets * Support for resilient device updates (A/B) to deliver seamless rollback * Subscription and role-based access controls available through the Azure.com portal
-* On-premise content cache and Nested Edge support to enable updating cloud disconnected devices
+* On-premises content cache and Nested Edge support to enable updating cloud disconnected devices
* Detailed update management and reporting tools With Device Update for IoT Hub management and deployment controls, users can maximize productivity and save valuable time. Device Update for IoT Hub includes the ability to group devices and specify
install the updates and getting status back.
## Next steps > [!div class="nextstepaction"]
-> [Create device update account and instance](create-device-update-account.md)
+> [Create device update account and instance](create-device-update-account.md)
key-vault Access Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/access-behind-firewall.md
Title: Access Key Vault behind a firewall - Azure Key Vault | Microsoft Docs description: Learn about the ports, hosts, or IP addresses to open to enable a key vault client application behind a firewall to access a key vault. -+ tags: azure-resource-manager Last updated 04/15/2021-+ # Access Azure Key Vault behind a firewall
key-vault Authentication Requests And Responses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/authentication-requests-and-responses.md
Title: Authentication, requests and responses description: Learn how Azure Key Vault uses JSON-formatted requests and responses and about required authentication for using a key vault. -+ tags: azure-resource-manager
Last updated 09/15/2020-+
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/overview-vnet-service-endpoints.md
Title: Virtual network service endpoints for Azure Key Vault description: Learn how virtual network service endpoints for Azure Key Vault allow you to restrict access to a specified virtual network, including usage scenarios. --++ Last updated 01/02/2019
key-vault About Keys Details https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/about-keys-details.md
Title: Key types, algorithms, and operations - Azure Key Vault description: Supported key types, algorithms, and operations (details). -+ Last updated 10/22/2020-+ # Key types, algorithms, and operations
key-vault About Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/about-keys.md
Title: About keys - Azure Key Vault description: Overview of Azure Key Vault REST interface and developer details for keys. -+ tags: azure-resource-manager
Last updated 02/17/2021-+ # About keys
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/byok-specification.md
Title: Bring your own key specification - Azure Key Vault | Microsoft Docs description: This document describes the bring your own key specification. -+ tags: azure-resource-manager
Last updated 02/04/2021-+
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys-byok.md
Title: How to generate & transfer HSM-protected keys – BYOK – Azure Key Vault description: Use this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Azure Key Vault. Also known as bring your own key (BYOK). -+ tags: azure-resource-manager
Last updated 02/04/2021-+
key-vault Hsm Protected Keys Ncipher https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys-ncipher.md
Title: How to generate and transfer HSM-protected keys for Azure Key Vault - Azure Key Vault description: Use this article to help you plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault. Also known as BYOK or bring your own key. -+ tags: azure-resource-manager
Last updated 02/24/2021-+
key-vault Hsm Protected Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/keys/hsm-protected-keys.md
Title: How to generate & transfer HSM-protected keys – Azure Key Vault description: Learn how to plan for, generate, and then transfer your own HSM-protected keys to use with Azure Key Vault. Also known as BYOK or bring your own key. -+ tags: azure-resource-manager
Last updated 02/24/2021-+
key-vault Access Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/access-control.md
Title: Azure Managed HSM access control description: Manage access permissions for Azure Managed HSM and keys. Covers the authentication and authorization model for Managed HSM, and how to secure your HSMs. -+ tags: azure-resource-manager Last updated 02/17/2021-+ # Customer intent: As the admin for managed HSMs, I want to set access policies and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for these managed HSMs.
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/backup-restore.md
Title: Full backup/restore and selective restore for Azure Managed HSM description: This document explains full backup/restore and selective restore -+ tags: azure-key-vault Last updated 09/15/2020-+ # Customer intent: As a developer using Key Vault I want to know the best practices so I can implement them. # Full backup and restore
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/best-practices.md
Title: Best practices using Azure Key Vault Managed HSM description: This document explains some of the best practices to use Key Vault -+ tags: azure-key-vault Last updated 06/21/2021-+ # Customer intent: As a developer using Managed HSM I want to know the best practices so I can implement them. # Best practices when using Managed HSM
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/built-in-roles.md
Title: Managed HSM local RBAC built-in roles - Azure Key Vault | Microsoft Docs description: An overview of Managed HSM built-in roles that can be assigned to users, service principals, groups, and managed identities -+ Last updated 06/01/2021-+ # Managed HSM local RBAC built-in roles
key-vault Disaster Recovery Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/disaster-recovery-guide.md
Title: What to do if there is an Azure service disruption that affects Managed HSM - Azure Key Vault | Microsoft Docs description: Learn what to do if there is an Azure service disruption that affects Managed HSM. -+ Last updated 09/15/2020-+ # Managed HSM disaster recovery
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/hsm-protected-keys-byok.md
Title: How to generate and transfer HSM-protected keys for Azure Key Vault Managed HSM - Azure Key Vault | Microsoft Docs description: Use this article to help you plan for, generate, and transfer your own HSM-protected keys to use with Managed HSM. Also known as bring your own key (BYOK). -+ tags: azure-resource-manager Last updated 02/04/2021-+ # Import HSM-protected keys to Managed HSM (BYOK)
key-vault Key Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/key-management.md
Title: Manage keys in a managed HSM - Azure Key Vault | Microsoft Docs description: Use this article to manage keys in a managed HSM -+ Last updated 09/15/2020-+
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/private-link.md
Title: Configure Azure Key Vault Managed HSM with private endpoints description: Learn how to integrate Azure Key Vault Managed HSM with Azure Private Link Service--++ Last updated 06/21/2021
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/quick-create-cli.md
Title: Quickstart - Provision and activate an Azure Managed HSM description: Quickstart showing how to provision and activate a managed HSM using Azure CLI -+ tags: azure-resource-manager Last updated 06/21/2021-+ #Customer intent:As a security admin who is new to Azure, I want to provision and activate a managed HSM
key-vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/recovery.md
description: Managed HSM Recovery features are designed to prevent the accidenta
--++ Last updated 06/01/2021
key-vault Role Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/role-management.md
Title: Managed HSM data plane role management - Azure Key Vault | Microsoft Docs description: Use this article to manage role assignments for your managed HSM -+ Last updated 09/15/2020-+ # Managed HSM role management
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
Title: Secure access to a managed HSM - Azure Key Vault Managed HSM description: Learn how to secure access to Managed HSM using Azure RBAC and Managed HSM local RBAC -+ tags: azure-resource-manager Last updated 09/15/2020-+ # Customer intent: As a managed HSM administrator, I want to set access control and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for this Managed HSM.
key-vault Security Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/security-domain.md
description: Overview of the Managed HSM Security Domain, a set of core credenti
--++ Last updated 09/15/2020 # About the Managed HSM Security Domain
key-vault Soft Delete Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/soft-delete-overview.md
description: Soft-delete in Managed HSM allows you to recover deleted HSM instan
--++ Last updated 06/01/2021
key-vault Third Party Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/managed-hsm/third-party-solutions.md
Title: Azure Key Vault Managed HSM - Third-party solutions | Microsoft Docs description: Learn about third-party solutions integrated with Managed HSM. -+ editor: '' Last updated 06/23/2021-+
logic-apps Logic Apps Enterprise Integration Maps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-maps.md
Previously updated : 02/06/2019 Last updated : 07/13/2021 # Transform XML with maps in Azure Logic Apps with Enterprise Integration Pack
see [Limits and configuration information for Azure Logic Apps](../logic-apps/lo
where you store your maps and other artifacts for enterprise integration and business-to-business (B2B) solutions.
+* If your map references an external assembly, you need a 64-bit assembly. The transform service runs a 64-bit process, so 32-bit assemblies aren't supported. If you have the source code for a 32-bit assembly, recompile the code into a 64-bit assembly. If you don't have the source code, but you obtained the binary from a third-party provider, get the 64-bit version from that provider. For example, some vendors provide assemblies in packages that have both 32- and 64-bit versions. If you have the option, use the 64-bit version instead.
+ * If your map references an external assembly, you have to upload *both the assembly and the map* to your integration account. Make sure you [*upload your assembly first*](#add-assembly), and then upload the
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-workflow-actions-triggers.md
ms.suite: integration Previously updated : 04/05/2021 Last updated : 07/19/2021
machine-learning Reference Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/data-science-virtual-machine/reference-known-issues.md
Previously updated : 05/27/2021- Last updated : 07/19/2021 # Known issues and troubleshooting the Azure Data Science Virtual Machine
-This article helps you find and correct errors or failures you might come across when using the Azure Data Science Virtual Machine.
+This article helps you find and correct errors or failures you might come across when using the Azure Data Science
+Virtual Machine.
++
+## Ubuntu
+
+### Connection to desktop environment fails
+
+If you can connect to the DSVM over SSH terminal but not over x2go, you might have set the wrong session type in x2go.
+To connect to the DSVM's desktop environment, you need the session type in *x2go/session preferences/session* set to
+*XFCE*. Other desktop environments are currently not supported.
+
+### Fonts look wrong when connecting to DSVM using x2go
-## Prompted for password when running sudo command (Ubuntu)
+When you connect to x2go and some fonts look wrong, it might be related to a session setting in x2go. Before connecting
+to the DSVM, uncheck the "Set display DPI" checkbox in the "Input/Output" tab of the session preferences dialog.
-When running a `sudo` command on an Ubuntu machine, you might be asked to enter your password again to confirm that you
-are really the user who is logged in. This is expected behavior and the default in Linux systems such as Ubuntu.
-However, in some scenarios, a repeated authentication is not necessary and rather annoying.
+### Prompted for unknown password
+
+When you create a DSVM with *Authentication type* set to *SSH Public Key* (which is recommended over password
+authentication), you are not given a password. However, in some scenarios, some applications will still ask you for
+a password. Run `sudo passwd <user_name>` to create a new password for a specific user. With `sudo passwd`, you can
+create a new password for the root user.
+
+Running these commands does not change the configuration of SSH, and the allowed login mechanisms are kept the same.
+
+### Prompted for password when running sudo command
+
+When running a `sudo` command on an Ubuntu machine, you might be asked to enter your password again and again to confirm
+that you are really the user who is logged in. This is expected behavior and the default in Linux systems such as
+Ubuntu. However, in some scenarios, a repeated authentication is not necessary and rather annoying.
To disable re-authentication for most cases, you can run the following command in a terminal.
-`echo -e "\n$USER ALL=(ALL) NOPASSWD: ALL\n" | sudo tee -a /etc/sudoers`
+ `echo -e "\n$USER ALL=(ALL) NOPASSWD: ALL\n" | sudo tee -a /etc/sudoers`
After restarting the terminal, sudo will not ask for another login and will consider the authentication from your session login as sufficient.
-## Accessing SQL Server (Windows)
+### Cannot use docker as non-root user
-When you try to connect to the pre-installed SQL Server instance, you might encounter a "login failed" error. To
-successfully connect to the SQL Server instance, you need to run the program you are connecting with, eg. SQL Server
-Management Studio (SSMS), in administrator mode. The administrator mode is required because by DSVM's default, only
-administrators are allowed to connect.
+In order to use docker as a non-root user, your user needs to be a member of the docker group. You can run the
+`getent group docker` command to check which users belong to that group. To add your user to the docker group, run
+`sudo usermod -aG docker $USER`.
+
+### Docker containers cannot interact with the outside via network
-## Python package installation issues
+By default, docker adds new containers to the so-called "bridge network", which is `172.17.0.0/16`. If the subnet of
+that bridge network overlaps with the subnet the DSVM is in, no network communication between the host and the container
+is possible. In that case, for instance, web applications running in the container cannot be reached, and the container
+cannot update packages from apt.
-### Installing packages with pip breaks dependencies on Linux
+To fix the issue, you can change the default subnet for containers in the bridge network. By adding
-Use `sudo pip install` instead of `pip install` when installing packages.
+```json
+"default-address-pools": [
+ {
+ "base": "172.18.0.0/16",
+ "size": 24
+ }
+ ]
+```
-## Disk encryption issues
+to the JSON document contained in the file `/etc/docker/daemon.json`, docker will assign another subnet to the bridge
+network, and the conflict should be resolved. (The file needs to be edited using sudo, e.g., by running
+`sudo nano /etc/docker/daemon.json`.)
-### Disk encryption fails on the Ubuntu DSVM
+After the change, the docker service needs to be restarted by running `service docker restart`.
-Azure Disk Encryption (ADE) isn't currently supported on the Ubuntu DSVM. As a workaround, consider configuring [Server Side Encryption of Azure managed disks](../../virtual-machines/disk-encryption.md).
+To check if your changes have taken effect, you can run `docker network inspect bridge`. The value under
+*IPAM.Config.Subnet* should correspond to the address pool specified above.
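The overlap described above can also be checked mechanically with Python's standard `ipaddress` module. This is a sketch with made-up subnet values: substitute your VM's actual subnet for `vm_subnet`.

```python
import ipaddress

# Hypothetical values: docker's default bridge pool and an example DSVM subnet.
bridge_default = ipaddress.ip_network("172.17.0.0/16")
vm_subnet = ipaddress.ip_network("172.17.4.0/24")  # assumed VM subnet
new_pool = ipaddress.ip_network("172.18.0.0/16")   # pool from daemon.json above

# overlaps() is True when the two ranges share addresses -- the conflict described above.
print(bridge_default.overlaps(vm_subnet))  # True: containers clash with the VM subnet
print(new_pool.overlaps(vm_subnet))        # False: moving the pool resolves the conflict
```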
-## Tool appears disabled
-### Hyper-V does not work on the Windows DSVM
+## Windows
-That Hyper-V initially doesn't work on Windows is expected behavior. For boot performance, we've disabled some services. To enable Hyper-V:
+### Accessing SQL Server
+
+When you try to connect to the pre-installed SQL Server instance, you might encounter a "login failed" error. To
+successfully connect to the SQL Server instance, you need to run the program you are connecting with, e.g., SQL Server
+Management Studio (SSMS), in administrator mode. The administrator mode is required because, by the DSVM's default, only
+administrators are allowed to connect.
+
+### Hyper-V does not work
+
+That Hyper-V initially doesn't work on Windows is expected behavior. For boot performance, we've disabled some services.
+To enable Hyper-V:
1. Open the search bar on your Windows DSVM
1. Type in "Services,"
That Hyper-V initially doesn't work on Windows is expected behavior. For boot pe
Your final screen should look like this:
- ![Enable Hyper-V](./media/workaround/hyperv-enable-dsvm.png)
+
+
+![Enable Hyper-V](./media/workaround/hyperv-enable-dsvm.png)
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-local.md
The following code shows these steps:
```python
from azureml.core.webservice import Webservice
-from azure.core.model import InferenceConfig
+from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core import Workspace
from azureml.core.model import Model
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-create-secure-workspace.md
Previously updated : 06/21/2021 Last updated : 07/16/2021
In this tutorial, you accomplish the following tasks:
> [!div class="checklist"] > * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__.
-> * Create a Network Security Group (NSG) to __configure what network traffic is allowed into and out of the VNet__.
> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__. > * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account. > * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__.
To create a virtual network, use the following steps:
:::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-review.png" alt-text="Screenshot of the review page":::
-## Create network security groups
-
-Use the following steps create a network security group (NSG) and add rules required for using Azure Machine Learning compute clusters and compute instances to train models:
-
-1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Network security group__. Select the __Network security group__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __name__ for the new network security group.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/create-nsg.png" alt-text="Image of the basic network security group config":::
-
-1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-
-### Apply security rules
-
-1. Once the network security group has been created, use the __Go to resource__ button and then select __Inbound security rules__. Select __+ Add__ to add a new rule.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-inbound-security-rules.png" alt-text="Add security rules":::
-
-1. Use the following values for the new rule, and then select __Add__ to add the rule to the network security group:
- * __Source__: Service Tag
- * __Source service tag__: BatchNodeManagement
- * __Source port ranges__: *
- * __Destination__: Any
- * __Service__: Custom
- * __Destination port ranges__: 29876-29877
- * __Protocol__: TCP
- * __Action__: Allow
- * __Priority__: 1040
- * __Name__: AzureBatch
- * __Description__: Azure Batch management traffic
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-batchnodemanagement.png" alt-text="Image of the batchnodemanagement rule":::
--
-1. Select __+ Add__ to add another rule. Use the following values for this rule, and then select __Add__ to add the rule:
- * __Source__: Service Tag
- * __Source service tag__: AzureMachineLearning
- * __Source port ranges__: *
- * __Destination__: Any
- * __Service__: Custom
- * __Destination port ranges__: 44224
- * __Protocol__: TCP
- * __Action__: Allow
- * __Priority__: 1050
- * __Name__: AzureML
- * __Description__: Azure Machine Learning traffic to compute cluster/instance
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-azureml.png" alt-text="Image of the azureml rule":::
-
-1. From the left navigation, select __Subnets__, and then select __+ Associate__. From the __Virtual network__ dropdown, select your network. Then select the __Training__ subnet. Finally, select __OK__.
-
- > [!TIP]
 - > The rules added in this section only apply to training computes, so they do not need to be associated with the scoring subnet.
-
- :::image type="content" source="./media/tutorial-create-secure-workspace/nsg-associate-subnet.png" alt-text="Image of the associate config":::
- ## Create a storage account 1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__.
A compute cluster is used by your training jobs. A compute instance provides a J
:::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-settings.png" alt-text="Screenshot of compute instance settings":::
+> [!TIP]
+> When you create a compute cluster or compute instance, Azure Machine Learning dynamically adds a Network Security Group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+>
+> * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
+> * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+>
+> The following screenshot shows an example of these rules:
+>
+> :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
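For automation scenarios, the two rules in the tip above can also be expressed as data. The following is a minimal, hypothetical Python sketch: the dictionaries mirror the shape of the Azure SDK's `SecurityRule` model, but the rule names and priorities are assumptions and no SDK call is made here.

```python
def build_aml_compute_nsg_rules():
    """Return the two inbound NSG rules described in the tip above."""
    # Fields shared by both rules
    common = {
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "source_port_range": "*",
        "destination_address_prefix": "*",
    }
    return [
        # Azure Batch management traffic to compute nodes (name/priority assumed)
        {**common, "name": "AzureBatch",
         "source_address_prefix": "BatchNodeManagement",   # service tag
         "destination_port_range": "29876-29877", "priority": 1040},
        # Azure Machine Learning traffic to compute cluster/instance
        {**common, "name": "AzureML",
         "source_address_prefix": "AzureMachineLearning",  # service tag
         "destination_port_range": "44224", "priority": 1050},
    ]
```

In a real deployment you would pass these dictionaries to `NetworkManagementClient.security_rules.begin_create_or_update` (or a template), but the dynamically-added NSG means you normally don't need to create them yourself.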
+ For more information on creating a compute cluster and compute instance, including how to do so with Python and the CLI, see the following articles:

* [Create a compute cluster](how-to-create-attach-compute-cluster.md)
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
| **MSP** | **Cloud Network Transformation Services** | **Managed ExpressRoute** | **Managed Virtual WAN** | **Managed Private Edge Zones** | **Managed Security** |
| | | | | | |
-|[ANS Group UK](https://www.ans.co.uk/)|[Azure Managed Svc + ANS Glass 10wk implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.glassms)|[ExpressRoute & connectivity: 2 week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_er)|[Azure Virtual WAN + Fortinet: 2 weeks assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_vw)|||
+|[ANS Group UK](https://www.ans.co.uk/)|[Azure Managed Svc + ANS Glass 10wk implementation](https://azuremarketplace.microsoft.com/marketplace/apps/ans_group.glasssaas?tab=Overview)|[ExpressRoute & connectivity: 2 week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_er)|[Azure Virtual WAN + Fortinet: 2 weeks assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/ans_group.ans_vw)|||
|[Aryaka Networks](https://www.aryaka.com/azure-msp-vwan-managed-service-provider-launch-partner-aryaka/)||[Aryaka Azure Connect](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.cloudconnect_azure_19?tab=Overview)|[Aryaka Managed SD-WAN for Azure Networking Virtual](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/aryaka.aryaka_azure_virtual_wan?tab=Overview) | | |
|[AXESDN](https://www.axesdn.com/en/azure-msp.html)||[AXESDN Managed Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_expressroute?tab=Overview)|[AXESDN Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/1584591601184.axesdn_managed_azure_virtualwan?tab=Overview) | | |
|[BT](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)|[Network Transformation Consulting: 1-Hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/bt-americas-inc.network-transformation-consulting);[BT Cloud Connect Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-001?tab=Overview)|[BT Cloud Connect Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-003?tab=Overview)|[BT Cloud Connect Azure VWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bt-americas-inc.bt-cca-lh-002?tab=Overview)|||
|[BUI](https://www.bui.co.za/)|[a2zManaged Cloud Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.a2zmanagement?tab=Overview)||[BUI Managed Azure vWAN using VMware SD-WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.bui_managed_vwan?tab=Overview)||[BUI CyberSoC](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bui.buicybersoc_msp?tab=Overview)|
|[Coevolve](https://www.coevolve.com/services/azure-networking-services/)|||[Managed Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.coevolve-managed-azure-vwan?tab=Overview);[Managed VMware SD-WAN Virtual Edge](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/coevolveptylimited1581027739259.managed-vmware-sdwan-edge?tab=Overview)|||
|[Colt](https://www.colt.net/why-colt/partner-hub/microsoft/)|[Network optimisation on Azure: 2-hr Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/colttechnologyservices.azure_networking)|||||
-|[Equinix](https://www.equinix.com/)|[Cloud Optimized WAN Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.cloudoptimizedwan?tab=Overview)|[ExpressRoute Connectivity Strategy Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.expressroutestrategy?tab=Overview); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)||||
+|[Equinix](https://www.equinix.com/)|[Cloud Optimized WAN Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.cloudoptimizedwan?tab=Overview)|[ExpressRoute Connectivity Strategy Workshop](https://azuremarketplace.microsoft.com/marketplace/consulting-services/equinix.cloud_optimized_wan_workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)||||
|[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)|
|[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - 1 Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
|[IIJ](https://www.iij.ad.jp/biz/cloudex/)|[ExpressRoute implementation: 1-Hr Briefing](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxm_consulting)|[ExpressRoute: 2-Wk Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxmer_consulting)||||
Use the links in this section for more information about managed cloud networkin
|[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview)|[KoçSistem Azure ExpressRoute Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_express_route?tab=Overview)|[KoçSistem Azure Virtual WAN Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_virtual_wan?tab=Overview)||[KoçSistem Azure Security Center Managed Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_security_center?tab=Overview)|
|[Liquid Telecom](https://liquidcloud.africa/)|[Cloud Readiness - 2 Hour Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/liquidtelecommunicationsoperationslimited.liquid_cloud_readiness_assessment);[Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|[Liquid Managed ExpressRoute for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview)||||
|[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting Svcs: 8-wk Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylink2362604-2362604.centurylink_consultingservicesforexpressroute); [Lumen Landing Zone for ExpressRoute 1 Day](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylinklimited.centurylink_landing_zone_for_azure_expressroute)||||
-|[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy_vedge?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)|
+|[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)|
|[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)||||
|[Nokia](https://www.nokia.com/networks/services/managed-services/)|||[NBConsult Nokia Nuage SDWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nbconsult1588859334197.nbconsult-nokia-nuage?tab=Overview); [Nuage SD-WAN 2.0 Azure Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.nuage_sd-wan_2-0_azure_virtual_wan?tab=Overview)|[Nokia 4G & 5G Private Wireless (NDAC)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nokiaofamericacorporation1591716055441.ndac_5g-ready_private_wireless?tab=Overview)|
-|[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/capside.cloud-discovery-workshops-capside)|[NTT Managed ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_expressroute_service?tab=Overview);[NTT Managed IP VPN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_ip_vpn_service?tab=Overview)|[NTT Managed SD-WAN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_mng_sdwan_1?tab=Overview)|||
+|[NTT Ltd](https://www.nttglobal.net/)|[Azure Cloud Discovery: 2-Week Workshop](https://azuremarketplace.microsoft.com/marketplace/apps/capside.replica-azure-cloud-governance-capside?tab=Overview)|[NTT Managed ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_expressroute_service?tab=Overview);[NTT Managed IP VPN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_managed_ip_vpn_service?tab=Overview)|[NTT Managed SD-WAN Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/nttglobalnetworks1592424806122.ntt_mng_sdwan_1?tab=Overview)|||
|[NTT Data](https://us.nttdata.com/en/digital/cloud-transformation)|[Managed 
|[Oncore Cloud Services]( https://www.oncore.cloud/services/ue-for-expressroute/)|[Enterprise Cloud Foundations: Workshop (~10 days)](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/oncore_cloud_services-4944214.oncore_cloud_onboard_201810)|[UniversalEdge for Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/oncore_cloud_services-4944214.universaledge_for_expressroute?tab=Overview)||||
|[OpenSystems](https://open-systems.com/solutions/microsoft-azure-virtual-wan)|||[Managed secure SD-WAN leveraging Microsoft Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/open_systems_ag.sdwan_0820?tab=Overview)||
notification-hubs Create Notification Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/scripts/create-notification-hub-powershell.md
Title: Create an Azure notification hub using PowerShell | Microsoft Docs description: Learn how to use a PowerShell script to create an Azure notification hub. -+ editor: sethmanheim
na
ms.devlang: na Last updated 01/14/2020-+
open-datasets Dataset Bing Covid 19 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-bing-covid-19.md
For any questions or feedback about this or other datasets in the COVID-19 Data
> [!TIP] > **[Download the notebook instead.](https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azure-storage&registryId=bing-covid-19-data)**.
-#### This notebook documents the URLs and sample code to access the [Bing COVID-19 Dataset](https://azure.microsoft.com/services/open-datasets/catalog/bing-covid-19-data/)
+#### This notebook documents the URLs and sample code to access the [Bing COVID-19 Dataset](https://github.com/microsoft/Bing-COVID-19-Data)
Use the following URLs to get specific file formats hosted on Azure Blob Storage:
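As an illustration of the access pattern, the CSV form of the dataset can be loaded directly with pandas by passing the blob URL to `read_csv`. The sketch below uses a small in-memory sample in place of the remote URL so the pattern is self-contained; the column names shown are assumptions based on the dataset description, not the authoritative schema.

```python
import io
import pandas as pd

# In practice you would pass the Azure Blob Storage CSV URL straight to
# pd.read_csv(...). Here a small sample stands in for the remote file.
sample_csv = io.StringIO(
    "updated,country_region,confirmed,deaths\n"
    "2021-07-19,Worldwide,190000000,4000000\n"
    "2021-07-19,United States,34000000,600000\n"
)
df = pd.read_csv(sample_csv, parse_dates=["updated"])
print(df.shape)  # rows x columns of the loaded sample
```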
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.h
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
-https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html (use lines=True for json lines)
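The `lines=True` note above (for JSON Lines files, where each line is one JSON record) can be illustrated with a tiny in-memory example; the field names are placeholders, not the dataset's schema:

```python
import io
import pandas as pd

# Two JSON Lines records; pd.read_json needs lines=True to parse this layout.
jsonl = io.StringIO('{"id": 1, "confirmed": 10}\n{"id": 2, "confirmed": 20}\n')
df_jsonl = pd.read_json(jsonl, lines=True)
print(len(df_jsonl))
```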
```python
import pandas as pd
display(spark.sql('SELECT * FROM source LIMIT 10'))
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Covid Tracking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-covid-tracking.md
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.h
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
-https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html (use lines=True for json lines)
-
```python
import pandas as pd
import numpy as np
open-datasets Dataset Ecdc Covid Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-ecdc-covid-cases.md
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.h
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
-https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html (use lines=True for json lines)
-
```python
import pandas as pd
See examples of how this dataset can be used:
## Next steps
-View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
+View the rest of the datasets in the [Open Datasets catalog](dataset-catalog.md).
open-datasets Dataset Oxford Covid Government Response Tracker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_parquet.h
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
-https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html (use lines=True for json lines)
Start by loading the dataset file into a pandas dataframe and view some sample rows
peering-service Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/peering-service/faq.md
- Title: Azure Peering Service FAQ
-description: Learn about Microsoft Azure Peering Service FAQs
----- Previously updated : 05/18/2020---
-# Peering Service FAQ
-
-This article explains the most frequently asked questions about Azure Peering Service connections.
--
-**Q. Who are the target customers?**
-
-A. Target customers are enterprises that connect to Microsoft cloud by using the internet as transport.
-
-**Q. Can customers sign up for Peering Service with multiple providers?**
-
-A. Yes, customers can sign up for Peering Service with multiple providers in the same region or different regions, but not for the same prefix.
-
-**Q. Can customers select a unique ISP for their sites per geographical region?**
-
-A. Yes, customers can do so. Select the partner ISP per region that suits your business and operational needs.
-
-**Q. What is a Microsoft Edge PoP?**
-
-A. It's a physical location where Microsoft interconnects with other networks. In the Microsoft Edge PoP location, services such as Azure Front Door and Azure CDN are hosted. For more information, see [Azure CDN](../cdn/cdn-features.md).
-
-## Peering Service: Unique characteristics
-
-**Q. How is Peering Service different from normal internet access?**
-
-A. Partners who have registered with Microsoft Peering Service are working with Microsoft to offer optimized routing and reliable connectivity to Microsoft services.
-
-**Q. How is Peering Service different from ExpressRoute?**
-
-A. Azure ExpressRoute is a private, dedicated connection from one or multiple customer locations. Peering Service, in contrast, offers optimized public connectivity rather than private connectivity, and also optimizes connectivity for local internet breakouts.
-
-## Next steps
-- To learn about Peering Service, see [Peering Service overview](about.md).
-- To find a service provider, see [Peering Service partners and locations](location-partners.md).
-- To onboard a Peering Service connection, see [Onboarding Peering Service](onboarding-model.md).
-- To register a Peering Service connection, see [Register a Peering Service connection - Azure portal](azure-portal.md).
-- To measure telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/peering-service/location-partners.md
The table in this article provides information on the Peering Service connectivi
| [BBIX](https://www.bbix.net/en/service/) |Japan |
| [CCL](https://concepts.co.nz/news/general-news/) |Oceania |
| [Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/)|Europe, Asia|
-| [DE-CIX](https://www.de-cix.net/microsoft)|Europe, North America |
+| [DE-CIX](https://www.de-cix.net/)|Europe, North America |
| [IIJ](https://www.iij.ad.jp/en/) | Japan |
| [Intercloud](https://intercloud.com/microsoft-saas-applications/)|Europe |
| [Kordia](https://www.kordia.co.nz/cloudconnect) |Oceania |
The table in this article provides information on the Peering Service connectivi
## Next steps - To learn about Peering Service, see [Peering Service overview](about.md).-- To learn about Peering Service FAQs, see [Peering Service FAQ](faq.md).
+- To learn about Peering Service FAQs, see [Peering Service FAQ](faq.yml).
- To learn about partner onboarding and Peering Service configuration, see [Peering Service configuration](connection.md).
- To learn about Peering Service connections, see [Peering Service connection](connection.md).
- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
| Azure Data Factory (Microsoft.DataFactory/factories) / portal | privatelink.adf.azure.com | adf.azure.com |
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net |
+| Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisCache | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/create-a-custom-classification-and-classification-rule.md
To create a custom classification rule:
- **Minimum match threshold**: You can use this setting to set the minimum percentage of the distinct data value matches in a column that must be found by the scanner for the classification to be applied. The suggested value is 60%. You need to be careful with this setting. If you reduce the level below 60%, you might introduce false-positive classifications into your catalog. If you specify multiple data patterns, this setting is disabled and the value is fixed at 60%.
+> [!NOTE]
+> The Minimum match threshold must be at least 1%.
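The threshold behavior described above can be sketched as a small function. This is a hypothetical helper for reasoning about the rule, not part of Purview: a classification applies only when the share of distinct column values matched by the scanner reaches the minimum match threshold (60% suggested, 1% minimum).

```python
def should_classify(distinct_values, pattern_matches, threshold_pct=60):
    """Return True when enough distinct values match to apply a classification.

    distinct_values: distinct values found in the column.
    pattern_matches: the subset of values the data pattern matched.
    threshold_pct:   minimum match threshold, 1-100 (60 is the suggested value).
    """
    if not 1 <= threshold_pct <= 100:
        raise ValueError("threshold must be between 1 and 100 percent")
    if not distinct_values:
        return False
    matched = sum(1 for v in distinct_values if v in pattern_matches)
    return 100.0 * matched / len(distinct_values) >= threshold_pct
```

For example, 3 matches out of 5 distinct values is exactly 60% and would apply the classification; lowering the threshold below 60% increases the risk of false positives, as noted above.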
+
1. You can now verify your rule and **create** it.

   :::image type="content" source="media/create-a-custom-classification-and-classification-rule/verify-rule.png" alt-text="Verify rule before creating" border="true":::
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-overview.md
To further control access to your search service, you can create inbound firewal
You can use the portal to [configure inbound access](service-configure-firewall.md).
-Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/services/createorupdate#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.
+Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/2020-08-01/services/create-or-update#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.
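For illustration, the `IpRule` fragment of the request body can be assembled and validated before calling the management API. The helper below is a hypothetical sketch; only the `ipRules` shape (a list of objects with a `value` of an IP address or CIDR range under `properties.networkRuleSet`) follows the REST reference.

```python
import ipaddress
import json

def ip_rules_payload(addresses):
    """Build the networkRuleSet fragment of a Create Or Update request body."""
    for a in addresses:
        # Raises ValueError early if an entry is not a valid IP or CIDR range.
        ipaddress.ip_network(a, strict=False)
    return {"properties": {"networkRuleSet": {
        "ipRules": [{"value": a} for a in addresses]}}}

# Example: allow one address and one range (documentation-reserved examples).
body = ip_rules_payload(["203.0.113.10", "198.51.100.0/24"])
print(json.dumps(body))
```

You would merge this fragment into the full service definition sent with the Create Or Update request.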
### Connect to a private endpoint (network isolation, no Internet traffic)
Watch this fast-paced video for an overview of the security architecture and eac
+ [Azure security fundamentals](../security/fundamentals/index.yml)
+ [Azure Security](https://azure.microsoft.com/overview/security)
-+ [Azure Security Center](../security-center/index.yml)
++ [Azure Security Center](../security-center/index.yml)
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Previously updated : 07/15/2021 Last updated : 07/19/2021

# Use role-based authorization in Azure Cognitive Search

Azure provides a global [role-based access control (RBAC) authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can use role authorization in the following ways:
-+ Grant search service admin rights that work against any client calling [Azure Resource Manager](../azure-resource-manager/management/overview.md). Roles range from full access (Owner), to read-only access to service information (Reader).
++ Grant admin rights that work against any client calling [Azure Resource Manager](../azure-resource-manager/management/overview.md). Roles range from full access (Owner) to read-only access to search service information (Reader).
+ (Preview only) Grant permissions for inbound data plane operations, such as creating or querying indexes.
Alternatively, you can use the Azure portal:
#### Step 3: Configure requests
-Use the Search REST API version 2021-04-30-Preview to set the authorization header on requests. You can set this header on any REST call to search service resources and operations.
+To test programmatically, revise your code to use a Search REST API (any supported version) and set the authorization header on requests. If you are using the Azure SDKs, check their beta releases to see if the authorization header is available. Depending on your application, additional configuration is required to register it with Azure Active Directory or to determine how to get and pass an authorization token.
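A minimal sketch of the request shape, assuming you already obtained an Azure AD access token (for example, via an Azure identity library). The service endpoint, index name, and token shown are placeholders, not values from this article:

```python
def search_query_request(endpoint, index_name, api_version, access_token):
    """Build the URL and headers for a Search REST query with AAD auth."""
    url = f"{endpoint}/indexes/{index_name}/docs/search?api-version={api_version}"
    headers = {
        # The access token is passed as a bearer token.
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = search_query_request(
    "https://myservice.search.windows.net",  # placeholder service endpoint
    "hotels",                                # placeholder index name
    "2021-04-30-Preview",
    "<token-from-azure-ad>")                 # placeholder token
```

The returned URL and headers would then be used with any HTTP client to POST the query body.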
-Note the subtle difference (04-01 versus 04-30) in preview version numbering between the Management and Search REST APIs.
-
-If you are using the portal, you can skip this step.
+If you are using the portal, you can skip application configuration. The portal is updated to support the new Search Index Data roles.
#### Step 4: Test
-Send requests to the reconfigured search service to verify role-based authorization for indexing and query tasks. If you chose **Role-based access control**, use a REST client to perform data plane operations and specify the 2021-04-30-preview REST API with an authorization header. Common tools for making REST calls include [Postman](search-get-started-rest.md) or [Visual Studio Code](search-get-started-vs-code.md).
+Send requests to the reconfigured search service to verify role-based authorization for indexing and query tasks.
Alternatively, you can use the Azure portal and the roles assigned to yourself to test:
security-center Security Center Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
Previously updated : 07/14/2021 Last updated : 07/19/2021
The following table provides a matrix of:
For information about when recommendations are generated for each of these protections, see [Endpoint Protection Assessment and Recommendations](security-center-endpoint-protection.md).
-| Endpoint Protection | Platforms | Security Center Installation | Security Center Discovery |
+| Solution | Supported platforms | Security Center installation | Security Center discovery |
|--|--|||
-| Microsoft Defender Antivirus | Windows Server 2016 or later | No, Built in to OS | Yes |
-| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2, 2012, 2008 R2 (see note below) | Via Extension | Yes |
-| Trend Micro – Deep Security | Windows Server Family | No | Yes |
-| Symantec v12.1.1100+ | Windows Server Family | No | Yes |
-| McAfee v10+ | Windows Server Family | No | Yes |
-| McAfee v10+ | Linux Server Family | No | Yes |
-| Sophos V9+ | Linux Server Family | No | Yes |
+| Microsoft Defender Antivirus | Windows Server 2016 or later | No (built into OS) | Yes |
+| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2, 2012, 2008 R2 (see note below) | Via extension | Yes |
+| Trend Micro – Deep Security | Windows Server (all) | No | Yes |
+| Symantec v12.1.1100+ | Windows Server (all) | No | Yes |
+| McAfee v10+ | Windows Server (all) | No | Yes |
+| McAfee v10+ | Linux (preview) | No | Yes |
+| Sophos V9+ | Linux (preview) | No | Yes |
| | | | |

> [!NOTE]
sentinel Connect Azure Sql Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-sql-logs.md
 Title: Connect all Azure SQL database diagnostics and auditing logs to Azure Sentinel
-description: Learn how to use Azure Policy to enforce the connection of Azure SQL database diagnostics logs and security auditing logs to Azure Sentinel.
+ Title: Connect all Azure SQL Database diagnostics and auditing logs to Azure Sentinel
+description: Learn how to use Azure Policy to enforce the connection of Azure SQL Database diagnostics logs and security auditing logs to Azure Sentinel.
Last updated 04/21/2021
-# Connect Azure SQL database diagnostics and auditing logs
+# Connect Azure SQL Database diagnostics and auditing logs
Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without necessitating user involvement.
-The Azure SQL database connector lets you stream your databases' auditing and diagnostic logs into Azure Sentinel, allowing you to continuously monitor activity in all your instances.
+The Azure SQL Database connector lets you stream your databases' auditing and diagnostic logs into Azure Sentinel, allowing you to continuously monitor activity in all your instances.
- Connecting diagnostics logs allows you to send database diagnostics logs of different data types to your Azure Sentinel workspace.
Learn more about [Azure SQL Database diagnostic telemetry](../azure-sql/database
- To use Azure Policy to apply a log streaming policy to Azure SQL database and server resources, you must have the Owner role for the policy assignment scope.
-## Connect to Azure SQL database
+## Connect to an Azure SQL database
This connector uses Azure Policy to apply a single Azure SQL log streaming configuration to a collection of instances, defined as a scope. The Azure SQL Database connector sends two types of logs to Azure Sentinel: diagnostics logs (from SQL databases) and auditing logs (at the SQL server level). You can see the log types ingested from Azure SQL databases and servers on the left side of the connector page, under **Data types**.
This connector uses Azure Policy to apply a single Azure SQL log streaming confi
## Next steps
-In this document, you learned how to use Azure Policy to connect Azure SQL database diagnostics and auditing logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
+In this document, you learned how to use Azure Policy to connect Azure SQL Database diagnostics and auditing logs to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
- Learn how to [get visibility into your data and potential threats](quickstart-get-visibility.md). - Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).
sentinel Dns Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/dns-normalization-schema.md
The following fields are generated by Log Analytics for each record, and you can
| **Field** | **Type** | **Description** | | | | |
-| <a name=timegenerated></a>**TimeGenerated** | datetime | The time the event was generated by the reporting device. |
+| <a name=timegenerated></a>**TimeGenerated** | Date/time | The time the event was generated by the reporting device. |
| **\_ResourceId** | guid | The Azure Resource ID of the reporting device or service, or the log forwarder resource ID for events forwarded using Syslog, CEF, or WEF. | | | | |
Event fields are common to all schemas, and describe the activity itself and the
| <a name ="eventproduct"></a>**EventProduct** | Mandatory | String | `DNS Server` | The product generating the event. This field may not be available in the source record, in which case it should be set by the parser. | | **EventProductVersion** | Optional | String | `12.1` | The version of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser. | | **EventVendor** | Mandatory | String | `Microsoft` | The vendor of the product generating the event. This field may not be available in the source record, in which case it should be set by the parser. |
-| **EventSchemaVersion** | Mandatory | String | `0.1` | The version of the schema documented here is **0.1**. |
+| **EventSchemaVersion** | Mandatory | String | `0.1.1` | The version of the schema documented here is **0.1.1**. |
| **EventReportUrl** | Optional | String | | A URL provided in the event for a resource that provides more information about the event. | | <a name="dvc"></a>**Dvc** | Mandatory | String | `ContosoDc.Contoso.Azure` | A unique identifier of the device on which the event occurred. <br><br>This field may alias the [DvcId](#dvcid), [DvcHostname](#dvchostname), or [DvcIpAddr](#dvcipaddr) fields. For cloud sources, for which there is no apparent device, use the same value as the [Event Product](#eventproduct) field. | | <a name ="dvcipaddr"></a>**DvcIpAddr** | Recommended | IP Address | `45.21.42.12` | The IP Address of the device on which the event occurred. |
The fields below are specific to DNS events. That said, many of them do have sim
| **DstIpAddr** | Optional | IP Address | `127.0.0.1` | The IP address of the server receiving the DNS request. For a regular DNS request, this value would typically be the reporting device, and in most cases set to **127.0.0.1**. | | **DstPortNumber** | Optional | Integer | `53` | Destination Port number | | **IpAddr** | | Alias | | Alias for SrcIpAddr |
-| <a name=query></a>**Query** | Mandatory | String | `www.malicious.com` | The domain that needs to be resolved. <br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and then and optionally keep the rest in the [AdditionalFields](#additionalfields) field. |
+| <a name=query></a>**DnsQuery** | Mandatory | String | `www.malicious.com` | The domain that needs to be resolved. <br><br>While the DNS protocol allows for multiple queries in a single request, this scenario is rare, if it's found at all. If the request has multiple queries, store the first one in this field, and optionally keep the rest in the [AdditionalFields](#additionalfields) field. |
| **Domain** | | Alias || Alias to [DnsQuery](#query). |
-| **QueryType** | Optional | Integer | `28` | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml)). |
-| **QueryTypeName** | Mandatory | Enumerated | `AAAA` | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value. |
-| <a name=responsename></a>**ResponseName** | Optional | String | | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source agnostics analytics. Therefore the information model does not require parsing and normalization, and Azure Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
-| <a name=responsecodename></a>**ResponseCodeName** | Mandatory | Enumerated | `NXDOMAIN` | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. |
-| **ResponseCode** | Optional | Integer | `3` | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).|
+| **DnsQueryType** | Optional | Integer | `28` | This field may contain [DNS Resource Record Type codes](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). |
+| **DnsQueryTypeName** | Mandatory | Enumerated | `AAAA` | The field may contain [DNS Resource Record Type](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml) names. <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case as needed. If the source provides only a numerical query type code and not a query type name, the parser must include a lookup table to enrich with this value. |
+| <a name=responsename></a>**DnsResponseName** | Optional | String | | The content of the response, as included in the record.<br> <br> The DNS response data is inconsistent across reporting devices, is complex to parse, and has less value for source-agnostic analytics. Therefore, the information model does not require parsing and normalization, and Azure Sentinel uses an auxiliary function to provide response information. For more information, see [Handling DNS response](#handling-dns-response).|
+| <a name=responsecodename></a>**DnsResponseCodeName** | Mandatory | Enumerated | `NXDOMAIN` | The [DNS response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml). <br><br>**Note**: IANA does not define the case for the values, so analytics must normalize the case. If the source provides only a numerical response code and not a response code name, the parser must include a lookup table to enrich with this value. <br><br> If this record represents a request and not a response, set to **NA**. |
+| **DnsResponseCode** | Optional | Integer | `3` | The [DNS numerical response code](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).|
| **TransactionIdHex** | Recommended | String | | The DNS unique hex transaction ID. | | **NetworkProtocol** | Optional | String | `UDP` | The transport protocol used by the network resolution event. The value can be **UDP** or **TCP**, and is most commonly set to **UDP** for DNS. |
-| **QueryClass** | Optional | Integer | | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable.|
-| **QueryClassName** | Optional | String | `"IN"` | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. |
-| <a name=flags></a>**Flags** | Optional | List of strings | `["DR"]` | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing and normalization are not required, and Azure Sentinel uses an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response).|
+| **DnsQueryClass** | Optional | Integer | | The [DNS class ID](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable.|
+| **DnsQueryClassName** | Optional | String | `"IN"` | The [DNS class name](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml).<br> <br>In practice, only the **IN** class (ID 1) is used, making this field less valuable. |
+| <a name=flags></a>**DnsFlags** | Optional | List of strings | `["DR"]` | The flags field, as provided by the reporting device. If flag information is provided in multiple fields, concatenate them with a comma as a separator. <br><br>Since DNS flags are complex to parse and are less often used by analytics, parsing and normalization are not required, and Azure Sentinel uses an auxiliary function to provide flags information. For more information, see [Handling DNS response](#handling-dns-response).|
| <a name=UrlCategory></a>**UrlCategory** | | String | `Educational \\ Phishing` | A DNS event source may also look up the category of the requested domains. The field is called **_UrlCategory_** to align with the Azure Sentinel network schema. <br><br>**_DomainCategory_** is added as an alias that better fits DNS. | | **DomainCategory** | | Alias | | Alias to [UrlCategory](#UrlCategory). | | **ThreatCategory** | | String | | If a DNS event source also provides DNS security, it may also evaluate the DNS event. For example, it may search for the IP address or domain in a threat intelligence database, and may assign a Threat Category to the domain or IP address. |
The fields below are specific to DNS events. That said, many of them do have sim
| **DvcAction** | Optional | String | `"Blocked"` | If a DNS event source also provides DNS security, it may take an action on the request, such as blocking it. | | | | | | |
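Several fields in the table above note that when a source emits only a numeric query-type or response code, the parser must enrich the record with the corresponding name from a lookup table. A minimal Python sketch of that enrichment step (the tables below cover only a few common IANA codes; a real parser would carry the full registries, and the function name is illustrative):

```python
# Partial lookup tables for common IANA DNS codes. A production parser
# would include the complete registries referenced by the schema.
DNS_QUERY_TYPE_NAMES = {
    1: "A", 2: "NS", 5: "CNAME", 6: "SOA", 12: "PTR",
    15: "MX", 16: "TXT", 28: "AAAA",
}

DNS_RESPONSE_CODE_NAMES = {
    0: "NOERROR", 1: "FORMERR", 2: "SERVFAIL", 3: "NXDOMAIN", 5: "REFUSED",
}

def enrich(record: dict) -> dict:
    """Add DnsQueryTypeName / DnsResponseCodeName when only codes are present."""
    out = dict(record)
    if "DnsQueryTypeName" not in out and "DnsQueryType" in out:
        out["DnsQueryTypeName"] = DNS_QUERY_TYPE_NAMES.get(
            out["DnsQueryType"], "Unknown")
    if "DnsResponseCodeName" not in out and "DnsResponseCode" in out:
        out["DnsResponseCodeName"] = DNS_RESPONSE_CODE_NAMES.get(
            out["DnsResponseCode"], "NA")
    return out
```

For a record carrying `DnsQueryType: 28` and `DnsResponseCode: 3`, this fills in `AAAA` and `NXDOMAIN`, matching the examples in the table.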
+### Additional aliases (deprecated)
+
+The following fields are aliases that are maintained for backward compatibility:
+- Query (alias to DnsQuery)
+- QueryType (alias to DnsQueryType)
+- QueryTypeName (alias to DnsQueryTypeName)
+- ResponseName (alias to DnsResponseName)
+- ResponseCodeName (alias to DnsResponseCodeName)
+- ResponseCode (alias to DnsResponseCode)
+- QueryClass (alias to DnsQueryClass)
+- QueryClassName (alias to DnsQueryClassName)
+- Flags (alias to DnsFlags)
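The deprecated aliases above amount to a one-to-one rename. As an illustration only (in Azure Sentinel the aliases are surfaced by the parsers themselves), a Python sketch of projecting old field names onto the current schema:

```python
# Deprecated DNS schema field names mapped to their current equivalents.
DEPRECATED_ALIASES = {
    "Query": "DnsQuery",
    "QueryType": "DnsQueryType",
    "QueryTypeName": "DnsQueryTypeName",
    "ResponseName": "DnsResponseName",
    "ResponseCodeName": "DnsResponseCodeName",
    "ResponseCode": "DnsResponseCode",
    "QueryClass": "DnsQueryClass",
    "QueryClassName": "DnsQueryClassName",
    "Flags": "DnsFlags",
}

def to_current_schema(record: dict) -> dict:
    """Rewrite deprecated keys to current names; current names take precedence."""
    out = {k: v for k, v in record.items() if k not in DEPRECATED_ALIASES}
    for old, new in DEPRECATED_ALIASES.items():
        if old in record:
            out.setdefault(new, record[old])
    return out
```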
+ ### Additional entities Events evolve around entities such as users, hosts, process, or files. Each entity may require several fields to describe. For example, a host may have a name and an IP address. In addition, a single record may include multiple entities of the same type, for example a source and destination host.
For more information, see:
- [Azure Sentinel Authentication normalization schema reference (Public preview)](authentication-normalization-schema.md) - [Azure Sentinel data normalization schema reference](normalization-schema.md) - [Azure Sentinel Process Event normalization schema reference](process-events-normalization-schema.md)-- [Azure Sentinel Registry Event normalization schema reference (Public preview)](registry-event-normalization-schema.md)
+- [Azure Sentinel Registry Event normalization schema reference (Public preview)](registry-event-normalization-schema.md)
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
This procedure describes how to ensure that your SAP system has the correct prer
1. Download and install one of the following SAP change requests from the Azure Sentinel GitHub repository, at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR:
- - **SAP versions 750 or higher**: Install the SAP change request *131 (NPLK900131)*
- - **SAP versions 740**: Install the SAP change request *132 (NPLK900132)*
+ - **SAP versions 750 or higher**: Install the SAP change request *141 (NPLK900141)*
+ - **SAP versions 740**: Install the SAP change request *142 (NPLK900142)*
When performing this step, ensure that you use binary mode to transfer the files to the SAP system and use the **STMS_IMPORT** SAP transaction code.
Learn more about the Azure Sentinel SAP solutions:
- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md) - [Azure Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
-For more information, see [Azure Sentinel solutions](sentinel-solutions.md).
+For more information, see [Azure Sentinel solutions](sentinel-solutions.md).
service-bus-messaging Service Bus Dotnet How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-dotnet-how-to-use-topics-subscriptions.md
This section shows you how to create a .NET Core console application to send mes
1. Replace code in the **Program.cs** with the following code. Here are the important steps from the code. 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the [CreateSender](/dotnet/api/azure.messaging.servicebus.servicebusclient.createsender) method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
- 1. Creates a [ServiceBusMessageBatch](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch) object by using the [ServiceBusSender.CreateMessageBatchAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.createmessagebatchasync).
- 1. Add messages to the batch using the [ServiceBusMessageBatch.TryAddMessage](/dotnet/api/azure.messaging.servicebus.servicebusmessagebatch.tryaddmessage).
- 1. Sends the batch of messages to the Service Bus topic using the [ServiceBusSender.SendMessagesAsync](/dotnet/api/azure.messaging.servicebus.servicebussender.sendmessagesasync) method.
+ 1. Invokes the `CreateSender` method on the `ServiceBusClient` object to create a [ServiceBusSender](/dotnet/api/azure.messaging.servicebus.servicebussender) object for the specific Service Bus topic.
+ 1. Creates a `ServiceBusMessageBatch` object by using the `ServiceBusSender.CreateMessageBatchAsync` method.
+ 1. Add messages to the batch using the `ServiceBusMessageBatch.TryAddMessage` method.
+ 1. Sends the batch of messages to the Service Bus topic using the `ServiceBusSender.SendMessagesAsync` method.
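The batching steps above follow a common pattern: try to add each message to the current batch, and when it no longer fits, send the batch and start a new one. A language-neutral sketch of that pattern in Python (this simulates the `TryAddMessage` semantics for illustration; it is not the Azure SDK, and the size limit is arbitrary):

```python
# Simulation of the TryAddMessage pattern: a batch accepts messages until
# a size limit, returning False (instead of raising) when one won't fit.
class MessageBatch:
    def __init__(self, max_size_in_bytes: int):
        self.max_size_in_bytes = max_size_in_bytes
        self.messages = []
        self.size_in_bytes = 0

    def try_add_message(self, body: str) -> bool:
        size = len(body.encode("utf-8"))
        if self.size_in_bytes + size > self.max_size_in_bytes:
            return False  # caller should send this batch and start a new one
        self.messages.append(body)
        self.size_in_bytes += size
        return True

def send_all(messages, max_size_in_bytes=64):
    """Group messages into size-limited batches, as the sender loop would."""
    batches, batch = [], MessageBatch(max_size_in_bytes)
    for body in messages:
        if not batch.try_add_message(body):
            batches.append(batch)
            batch = MessageBatch(max_size_in_bytes)
            if not batch.try_add_message(body):
                raise ValueError("message too large for an empty batch")
    if batch.messages:
        batches.append(batch)
    return batches
```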
For more information, see code comments. ```csharp
In this section, you'll create a .NET Core console application that receives mes
1. Replace code in the **Program.cs** with the following code. Here are the important steps from the code: 1. Creates a [ServiceBusClient](/dotnet/api/azure.messaging.servicebus.servicebusclient) object using the connection string to the namespace.
- 1. Invokes the [CreateProcessor](/dotnet/api/azure.messaging.servicebus.servicebusclient.createprocessor) method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
- 1. Specifies handlers for the [ProcessMessageAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processmessageasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.processerrorasync) events of the `ServiceBusProcessor` object.
- 1. Starts processing messages by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.startprocessingasync) on the `ServiceBusProcessor` object.
- 1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.stopprocessingasync) on the `ServiceBusProcessor` object.
+ 1. Invokes the `CreateProcessor` method on the `ServiceBusClient` object to create a [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor) object for the specified Service Bus queue.
+ 1. Specifies handlers for the `ProcessMessageAsync` and `ProcessErrorAsync` events of the `ServiceBusProcessor` object.
+ 1. Starts processing messages by invoking the `StartProcessingAsync` method on the `ServiceBusProcessor` object.
+ 1. When the user presses a key to end the processing, invokes the `StopProcessingAsync` method on the `ServiceBusProcessor` object.
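The processor steps above describe a dispatch loop: each message goes to a message handler, handler failures go to an error handler, and processing runs until it is stopped. A minimal Python sketch of that pattern (illustrative only, not the Azure SDK; the class and method names mirror the steps above):

```python
# Sketch of the processor pattern: dispatch each message to a user handler
# and route handler exceptions to an error handler, until stopped.
class Processor:
    def __init__(self, source, on_message, on_error):
        self.source = source        # any iterable of messages
        self.on_message = on_message
        self.on_error = on_error
        self.running = False

    def start_processing(self):
        self.running = True
        for message in self.source:
            if not self.running:
                break
            try:
                self.on_message(message)
            except Exception as exc:  # failures go to the error handler
                self.on_error(exc)

    def stop_processing(self):
        self.running = False
```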
For more information, see code comments.
service-bus-messaging Service Bus Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-samples.md
The Service Bus messaging samples demonstrate key features in [Service Bus messa
| Package | Samples location | | - | - |
-| Azure.Messaging.ServiceBus (latest) | /samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/ |
-| Microsoft.Azure.ServiceBus (legacy) | https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus |
+| Azure.Messaging.ServiceBus (latest) | [Code samples](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/) |
+| Microsoft.Azure.ServiceBus (legacy) | [GitHub location](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus) |
## Java samples | Package | Samples location | | - | - |
-| azure-messaging-servicebus (latest) | /samples/azure/azure-sdk-for-java/servicebus-samples/ |
-| azure-servicebus (legacy) | https://github.com/Azure/azure-service-bus/tree/master/samples/Java |
+| azure-messaging-servicebus (latest) | [Code samples](/samples/azure/azure-sdk-for-java/servicebus-samples/) |
+| azure-servicebus (legacy) | [GitHub location](https://github.com/Azure/azure-service-bus/tree/master/samples/Java) |
## Python samples | Package | Samples location | | -- | -- |
-| azure.servicebus | /samples/azure/azure-sdk-for-python/servicebus-samples/ |
+| azure.servicebus | [Code samples](/samples/azure/azure-sdk-for-python/servicebus-samples/) |
## TypeScript samples | Package | Samples location | | - | - |
-| @azure/service-bus | /samples/azure/azure-sdk-for-js/service-bus-typescript/ |
+| @azure/service-bus | [Code samples](/samples/azure/azure-sdk-for-js/service-bus-typescript/) |
## JavaScript samples | Package | Samples location | | - | - |
-| @azure/service-bus | /samples/azure/azure-sdk-for-js/service-bus-javascript/ |
+| @azure/service-bus | [Code samples](/samples/azure/azure-sdk-for-js/service-bus-javascript/) |
## Go samples | Package | Samples location | | - | - |
-| azure-service-bus-go | https://github.com/Azure/azure-service-bus-go/ |
+| azure-service-bus-go | [GitHub location](https://github.com/Azure/azure-service-bus-go/) |
## Management samples You can find management samples on GitHub at https://github.com/Azure/azure-service-bus/tree/master/samples/Management.
service-bus-messaging Topic Filters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/topic-filters.md
Title: Azure Service Bus topic filters | Microsoft Docs description: This article explains how subscribers can define which messages they want to receive from a topic by specifying filters. Previously updated : 02/17/2021 Last updated : 07/19/2021 # Topic filters and actions
Complex filter rules require processing capacity. In particular, the use of SQL
## Actions
-With SQL filter conditions, you can define an action that can annotate the message by adding, removing, or replacing properties and their values. The action [uses a SQL-like expression](service-bus-messaging-sql-filter.md) that loosely leans on the SQL UPDATE statement syntax. The action is done on the message after it has been matched and before the message is selected into the subscription. The changes to the message properties are private to the message copied into the subscription.
+With SQL filter conditions, you can define an action that can annotate the message by adding, removing, or replacing properties and their values. The action [uses a SQL-like expression](service-bus-messaging-sql-rule-action.md) that loosely leans on the SQL UPDATE statement syntax. The action is done on the message after it has been matched and before the message is selected into the subscription. The changes to the message properties are private to the message copied into the subscription.
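As a sketch of what such an action can look like (the property names here are hypothetical, and the syntax loosely follows the SQL UPDATE-like form described above):

```sql
SET quantity = quantity / 2;
SET processedBy = 'routing-rule';
```

Each `SET` statement annotates the matched message's copy before it lands in the subscription; the original message sent to the topic is unchanged.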
## Usage patterns
sql-database Sql Database Backup Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-backup-database-cli.md
ms.devlang: azurecli --++ Last updated 03/27/2019
sql-database Sql Database Copy Database To New Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-copy-database-to-new-server-cli.md
ms.devlang: azurecli --++ Last updated 03/12/2019
sql-database Sql Database Import From Bacpac Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-database-import-from-bacpac-cli.md
ms.devlang: azurecli --++ Last updated 05/24/2019
sql-database Sql Managed Instance Restore Geo Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sql-database/scripts/sql-managed-instance-restore-geo-backup-cli.md
ms.devlang: azurecli --++ Last updated 07/03/2019
storage Data Lake Storage Supported Blob Storage Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-supported-blob-storage-features.md
The following table shows how each Blob storage feature is supported with Data L
|Lifecycle management policies (tiering)|Generally available|Not yet supported|[Manage the Azure Blob storage lifecycle](storage-lifecycle-management-concepts.md)| |Lifecycle management policies (delete blob)|Generally available|Generally available|[Manage the Azure Blob storage lifecycle](storage-lifecycle-management-concepts.md)| |Logging in Azure Monitor|Preview |Preview|[Monitoring Azure Storage](./monitor-blob-storage.md)|
-|Snapshots|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Blob snapshots](snapshots-overview.md)|
+|Snapshots|Preview|Preview|[Blob snapshots](snapshots-overview.md)|
|Static websites|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|Generally Available<div role="complementary" aria-labelledby="preview-form"></div>|[Static website hosting in Azure Storage](storage-blob-static-website.md)|
-|Immutable storage|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form"><sup>1</sup></div>|[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)|
+|Immutable storage|Preview|Preview|[Store business-critical blob data with immutable storage](storage-blob-immutable-storage.md)|
|Container soft delete|Preview|Preview|[Soft delete for containers](soft-delete-container-overview.md)| |Azure Storage inventory|Preview|Preview|[Use Azure Storage inventory to manage blob data (preview)](blob-inventory.md)|
-|Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-2"><sup>2</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
+|Custom domains|Preview<div role="complementary" aria-labelledby="preview-form-1"><sup>1</sup></div>|Preview<div role="complementary" aria-labelledby="preview-form-1"><sup>1</sup></div>|[Map a custom domain to an Azure Blob storage endpoint](storage-custom-domain-name.md)|
|Blob soft delete|Preview|Preview|[Soft delete for blobs](./soft-delete-blob-overview.md)| |Blobfuse|Generally available|Generally available|[How to mount Blob storage as a file system with blobfuse](storage-how-to-mount-container-linux.md)| |Anonymous public access |Generally available|Generally available| See [Configure anonymous public read access for containers and blobs](anonymous-read-access-configure.md).|
The following table shows how each Blob storage feature is supported with Data L
|Point-in-time restore|Not yet supported|Not yet supported|[Point-in-time restore for block blobs](point-in-time-restore-overview.md)| |Blob index tags|Not yet supported|Not yet supported|[Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)|
-<div id="preview-form"><sup>1</sup>To use snapshots or immutable storage with Data Lake Storage Gen2, you need to enroll in the preview by completing this <a href=https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u>form</a>. </div>
-<div id="preview-form-2"><sup>2</sup>A custom domain name can map only to the blob service or static website endpoint. The Data Lake storage endpoint is not supported.</a>. </div>
+<div id="preview-form-1"><sup>1</sup>A custom domain name can map only to the blob service or static website endpoint. The Data Lake storage endpoint is not supported.</div>
## See also
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/encryption-scope-overview.md
Previously updated : 06/01/2021 Last updated : 07/19/2021
When you disable an encryption scope, any subsequent read or write operations ma
When an encryption scope is disabled, you are no longer billed for it. Disable any encryption scopes that are not needed to avoid unnecessary charges.
-If your encryption scope is protected with a customer-managed key, and you delete the key in the key vault, the data will become inaccessible. Be sure to also disable the encryption scope to avoid being charged for it.
+If your encryption scope is protected with a customer-managed key, and you revoke the key in the key vault, the data will become inaccessible. Be sure to disable the encryption scope prior to revoking the key in the key vault to avoid being charged for the encryption scope.
Keep in mind that customer-managed keys are protected by soft delete and purge protection in the key vault, and a deleted key is subject to the behavior defined by those properties. For more information, see one of the following topics in the Azure Key Vault documentation:
Keep in mind that customer-managed keys are protected by soft delete and purge p
- [Azure Storage encryption for data at rest](../common/storage-service-encryption.md) - [Create and manage encryption scopes](encryption-scope-manage.md) - [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md)-- [What is Azure Key Vault?](../../key-vault/general/overview.md)
+- [What is Azure Key Vault?](../../key-vault/general/overview.md)
storage Snapshots Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/snapshots-overview.md
A snapshot is a read-only version of a blob that's taken at a point in time.
## About blob snapshots
+> [!IMPORTANT]
+> Snapshots in accounts that have the hierarchical namespace feature enabled are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+>
+> To enroll in the preview, see [this form](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u).
A snapshot of a blob is identical to its base blob, except that the blob URI has a **DateTime** value appended to the blob URI to indicate the time at which the snapshot was taken. For example, if a page blob URI is `http://storagesample.core.blob.windows.net/mydrives/myvhd`, the snapshot URI is similar to `http://storagesample.core.blob.windows.net/mydrives/myvhd?snapshot=2011-03-09T01:42:34.9360000Z`.
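Since the snapshot DateTime rides in the URI's query string, pulling it back out is straightforward with the standard library. A small Python helper (illustrative only, using the example URI from the text above):

```python
from urllib.parse import urlparse, parse_qs

def snapshot_time(blob_uri: str):
    """Return the snapshot DateTime from a blob URI, or None for a base blob."""
    params = parse_qs(urlparse(blob_uri).query)
    values = params.get("snapshot")
    return values[0] if values else None
```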
storage Storage Blob Immutable Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-immutable-storage.md
Immutable storage for Azure Blob storage enables users to store business-critica
For information about how to set and clear legal holds or create a time-based retention policy using the Azure portal, PowerShell, or Azure CLI, see [Set and manage immutability policies for Blob storage](storage-blob-immutability-policies-manage.md).
+> [!IMPORTANT]
+> Immutable storage for Azure Blob storage in accounts that have the hierarchical namespace feature enabled is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+>
+> To enroll in the preview, see [this form](https://forms.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR2EUNXd_ZNJCq_eDwZGaF5VUOUc3NTNQSUdOTjgzVUlVT1pDTzU4WlRKRy4u).
## About immutable Blob storage
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azurite.md
description: The Azurite open-source emulator provides a free local environment
Previously updated : 07/15/2021 Last updated : 07/19/2021
# Use the Azurite emulator for local Azure Storage development
-The Azurite open-source emulator provides a free local environment for testing your Azure blob and queue storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.
+The Azurite open-source emulator provides a free local environment for testing your Azure blob, queue storage, and table storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.
Azurite is the future storage emulator platform. Azurite supersedes the [Azure Storage Emulator](storage-use-emulator.md). Azurite will continue to be updated to support the latest versions of Azure Storage APIs.
The extension supports the following Visual Studio Code commands. To open the co
- **Azurite: Clean** - Reset all Azurite services persistency data
- **Azurite: Clean Blob Service** - Clean blob service
- **Azurite: Clean Queue Service** - Clean queue service
+ - **Azurite: Clean Table Service** - Clean table service
- **Azurite: Close** - Close all Azurite services
- **Azurite: Close Blob Service** - Close blob service
- **Azurite: Close Queue Service** - Close queue service
+ - **Azurite: Close Table Service** - Close table service
- **Azurite: Start** - Start all Azurite services
- **Azurite: Start Blob Service** - Start blob service
- **Azurite: Start Queue Service** - Start queue service
+ - **Azurite: Start Table Service** - Start table service
To configure Azurite within Visual Studio Code, select the extensions pane. Select the **Manage** (gear) icon for **Azurite**. Select **Extension Settings**.
The following settings are supported:
- **Azurite: Queue Port** - The Queue service listening port. The default port is 10001.
- **Azurite: Silent** - Silent mode disables the access log. The default value is **false**.
- **Azurite: Skip Api Version Check** - Skip the request API version check. The default value is **false**.
+ - **Azurite: Table Host** - The Table service listening endpoint. The default is 127.0.0.1.
+ - **Azurite: Table Port** - The Table service listening port. The default port is 10002.
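Combining the default hosts and ports above gives the emulator's local service endpoints. A quick sketch that prints them, assuming Azurite's well-known development account name `devstoreaccount1` (blob on 10000, queue on 10001, table on 10002):

```shell
# Print the default Azurite service endpoints (host and ports from the settings above)
host="127.0.0.1"
for svc_port in blob:10000 queue:10001 table:10002; do
  svc="${svc_port%%:*}"   # service name before the colon
  port="${svc_port##*:}"  # port number after the colon
  echo "${svc}: http://${host}:${port}/devstoreaccount1"
done
```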
## Install and run Azurite by using NPM
docker run -p 10000:10000 -p 10001:10001 \
In the following example, the `-v c:/azurite:/data` parameter specifies *c:/azurite* as the Azurite persisted data location. The directory, *c:/azurite*, must be created before running the Docker command.

```console
-docker run -p 10000:10000 -p 10001:10001 \
+docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 \
-v c:/azurite:/data mcr.microsoft.com/azure-storage/azurite
```
azurite --queuePort 0
The port in use is displayed during Azurite startup.
+### Table listening host
+
+**Optional** - By default, Azurite will listen to 127.0.0.1 as the local server. Use the `--tableHost` switch to set the address to your requirements.
+
+Accept requests on the local machine only:
+
+```console
+azurite --tableHost 127.0.0.1
+```
+
+Allow remote requests:
+
+```console
+azurite --tableHost 0.0.0.0
+```
+
+> [!CAUTION]
+> Allowing remote requests may make your system vulnerable to external attacks.
+
+### Table listening port configuration
+
+**Optional** - By default, Azurite will listen for the Table service on port 10002. Use the `--tablePort` switch to specify the listening port that you require.
+
+> [!NOTE]
+> After using a customized port, you need to update the connection string or corresponding configuration in your Azure Storage tools or SDKs.
+
+Customize the Table service listening port:
+
+```console
+azurite --tablePort 11111
+```
+
+Let the system auto select an available port:
+
+```console
+azurite --tablePort 0
+```
+
+The port in use is displayed during Azurite startup.
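As the note above says, a customized table port must be reflected in any connection string that targets the emulator. A minimal sketch of the resulting table endpoint fragment, assuming the custom port from the example above and Azurite's well-known development account name `devstoreaccount1`:

```shell
# Hypothetical custom table port from the example above
table_port=11111
# The TableEndpoint fragment a connection string would need
echo "TableEndpoint=http://127.0.0.1:${table_port}/devstoreaccount1;"
```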
+
### Workspace path

**Optional** - Azurite stores data to the local disk during execution. Use the `-l` or `--location` switch to specify a path as the workspace location. By default, the current process working directory will be used. Note the lowercase 'l'.
storage Isv File Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/solution-integration/validated-partners/primary-secondary-storage/isv-file-services.md
This article compares several ISV solutions that provide files services in Azure
| **Nasuni** | **UniFS** is an enterprise file service with a simpler, low-cost, cloud alternative built on Microsoft Azure | - Primary file storage <br> - Departmental file shares <br> - Centralized file management <br> - Multi-site collaboration with global file locking <br> - Windows Virtual Desktop <br> - Remote work/VDI file shares |
| **NetApp** | **Cloud Volumes ONTAP** optimizes your cloud storage costs and performance while enhancing data protection, security, and compliance. Includes enterprise-grade data management, availability, and durability | - Business applications <br> - Relational and NoSQL databases <br> - Big Data & Analytics <br> - Persistent data for containers <br> - CI/CD pipelines <br> - Disaster recovery for on-premises NetApp solutions |
| **Panzura** | **CloudFS** is a hybrid enterprise global file system that enables accessing the same data set on premises or in the cloud | - Enterprise NAS replacement <br> - Global collaboration <br> - Cloud native access to unstructured data for Analytics, AI/ML |
-| **Tiger Technology** | **Tiger Bridge** is a data management software solution. Provides tiering between an NTFS file system and Azure Blob Storage or Azure managed disks. Creates a single namespace with local file locking. | - Analytics <br> - Cloud archive <br> - Continuous data protection (CDP) <br> - Disaster Recovery for Windows servers <br> - Multi-sync sync and collaboration <br> - Remote workflows (VDI) |
+| **Tiger Technology** | **Tiger Bridge** is a data management software solution. Provides tiering between an NTFS file system and Azure Blob Storage or Azure managed disks. Creates a single namespace with local file locking. | - Cloud archive<br> - Continuous data protection (CDP) <br> - Disaster Recovery for Windows servers <br> - Multi-site sync and collaboration <br> - Remote workflows (VDI)<br> - Native access to cloud data for Analytics, AI, ML |
| **XenData** | **Cloud File Gateway** creates a highly scalable global file system using Windows file servers | - Global sharing of engineering and scientific files <br> - Collaborative video editing |

## ISV solutions comparison
This article compares several ISV solutions that provide files services in Azure
|--|--|--|--|--|--|
| **Azure AD support** | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) | Yes (via ADDS) |
| **Active directory support** | Yes | Yes | Yes | Yes | Yes |
-| **LDAP support** | Yes | Yes | No | No | Yes |
+| **LDAP support** | Yes | Yes | No | Yes | Yes |
### Management
This article compares several ISV solutions that provide files services in Azure
- Option to apply renames to the cloud target
- Partial write to objects
- Ransomware protection
- Multi-site sync / collaboration

**XenData** - Cosmos DB service provides fast synchronization of multiple gateways, including application-specific owner files for global collaboration
Learn more:
- [Azure Disks](../../../../virtual-machines/managed-disks-overview.md)
- [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/)
- [Verified partners for primary and secondary storage](./partner-overview.md)
- [Storage migration overview](../../../common/storage-migration-overview.md)
storsimple Storsimple 8000 Manage Volume Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storsimple/storsimple-8000-manage-volume-containers.md
ms.devlang: NA
NA Previously updated : 02/09/2021 Last updated : 07/16/2021
A volume container has the following attributes:
* **Volumes** – The tiered or locally pinned StorSimple volumes that are contained within the volume container.
* **Encryption** – An encryption key that can be defined for each volume container. This key is used for encrypting the data that is sent from your StorSimple device to the cloud. A military-grade AES-256 bit key is used with the user-entered key. To secure your data, we recommend that you always enable cloud storage encryption.
* **Storage account** – The Azure storage account that is used to store the data. All the volumes residing in a volume container share this storage account. You can choose a storage account from an existing list, or create a new account when you create the volume container and then specify the access credentials for that account.
-* **Cloud bandwidth** – The bandwidth consumed by the device when the data from the device is being sent to the cloud. You can enforce a bandwidth control by specifying a value between 1 Mbps and 1,000 Mbps when you create this container. If you want the device to consume all available bandwidth, set this field to **Unlimited**. You can also create and apply a bandwidth template to allocate bandwidth based on schedule.
+* **Cloud bandwidth** – The bandwidth consumed by the device when the data from the device is being sent to the cloud. If you want the device to consume all available bandwidth, set this field to **Unlimited**. You can also create and apply a bandwidth template to allocate bandwidth based on a schedule.
The following procedures explain how to use the StorSimple **Volume containers** blade to complete the following common operations:
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-3-runtime.md
zipp=0.6.0
## Next steps - [Azure Synapse Analytics](../overview-what-is.md)-- [Apache Spark Documentation](https://spark.apache.org/docs/2.4.4/)
+- [Apache Spark Documentation](https://spark.apache.org/docs/3.0.2/)
- [Apache Spark Concepts](apache-spark-concepts.md)
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
Title: Migrate your dedicated SQL pool (formerly SQL DW) to Gen2 description: Instructions for migrating an existing dedicated SQL pool (formerly SQL DW) to Gen2 and the migration schedule by region. --++ ms.assetid: 04b05dea-c066-44a0-9751-0774eb84c689
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Last updated 4/30/2020--++
synapse-analytics Sql Data Warehouse Partner Data Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-management.md
This article highlights Microsoft partner companies with data management tools a
| ![Alation](./media/sql-data-warehouse-partner-data-management/alation-logo.png) |**Alation**<br>Alation's data catalog dramatically improves the productivity, increases the accuracy, and drives confident data-driven decision making for analysts. Alation's data catalog empowers everyone in your organization to find, understand, and govern data. |[Product page](https://www.alation.com/product/data-catalog/)<br> |
| ![BI Builders (Xpert BI)](./media/sql-data-warehouse-partner-data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI provides an intuitive and searchable catalog for the line-of-business user to find, trust, and understand data and reports. The solution covers the whole data platform including Azure Synapse Analytics, ADLS Gen 2, Azure SQL Database, Analysis Services and Power BI, and also data flows and data movement end-to-end. Data stewards can update descriptions and tag data to follow regulatory requirements. Xpert BI can be integrated via APIs to other catalogs such as Azure Purview. It supplements traditional data catalogs with a business user perspective. |[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
| ![Coffing Data Warehousing](./media/sql-data-warehouse-partner-data-management/coffing-data-warehousing-logo.png) |**Coffing Data Warehousing**<br>Coffing Data Warehousing provides Nexus Chameleon, a tool with 10 years of design dedicated to querying systems. Nexus is available as a query tool for dedicated SQL pool in Azure Synapse Analytics. Use Nexus to query in-house and cloud computers and join data across different platforms. Point-Click-Report! |[Product page](https://coffingdw.com/software/nexus/)<br> |
-| ![Inbrein](./media/sql-data-warehouse-partner-data-management/inbrein-logo.png) |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |[Product page](http://microerd.com/)<br> |
+| ![Inbrein](./media/sql-data-warehouse-partner-data-management/inbrein-logo.png) |**Inbrein MicroERD**<br>Inbrein MicroERD provides the tools that you need to create a precise data model, reduce data redundancy, improve productivity, and observe standards. By using its UI, which was developed based on extensive user experiences, a modeler can work on DB models easily and conveniently. You can continuously enjoy new and improved functions of MicroERD through prompt functional improvements and updates. |Product page<br> |
| ![Infolibrarian](./media/sql-data-warehouse-partner-data-management/infolibrarian-logo.png) |**Infolibrarian (Metadata Management Server)**<br>InfoLibrarian catalogs, stores, and manages metadata to help you solve key pain points of data management. Infolibrarian provides metadata management, data governance, and asset management solutions for managing and publishing metadata from a diverse set of tools and technologies. |[Product page](http://www.infolibcorp.com/metadata-management/software-tools)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/infolibrarian.infolibrarian-metadata-management-server)<br> |
| ![Kyligence](./media/sql-data-warehouse-partner-data-management/kyligence-logo.png) |**Kyligence**<br>Founded by the creators of Apache Kylin, Kyligence is on a mission to accelerate the productivity of its customers by automating data management, discovery, interaction, and insight generation – all without barriers. Kyligence Cloud enables cluster deployment, enhances data access, and dramatically accelerates data analysis. Kyligence's AI-augmented Big Data analytics management platform makes the often-challenging task of building enterprise-scale data lakes fast and easy.|[Product page](https://kyligence.io/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence)<br> |
| ![Redpoint Global](./media/sql-data-warehouse-partner-data-management/redpoint-global-logo.png) |**RedPoint Data Management**<br>RedPoint Data Management enables marketers to apply all their data to drive cross-channel customer engagement while doing structured and unstructured data management. With RedPoint, you can maximize the value of your structured and unstructured data to deliver the hyper-personalized, contextual interactions needed to engage today's omni-channel customer. Drag-and-drop interface makes designing and executing data management processes easy. |[Product page](https://www.redpointglobal.com/customer-data-management)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/redpoint-global.redpoint-rpdm)<br> |
This article highlights Microsoft partner companies with data management tools a
## Next steps

To learn more about other partners, see [Business Intelligence partners](sql-data-warehouse-partner-business-intelligence.md), [Data Integration partners](sql-data-warehouse-partner-data-integration.md), and [Machine Learning and AI partners](sql-data-warehouse-partner-machine-learning-ai.md).
synapse-analytics Sql Data Warehouse Predict https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
Title: Score machine learning models with PREDICT description: Learn how to score machine learning models using the T-SQL PREDICT function in dedicated SQL pool. -+ Last updated 07/21/2020-+
synapse-analytics Sql Data Warehouse Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-points.md
Title: User-defined restore points description: How to create a restore point for dedicated SQL pool (formerly SQL DW). -+ Last updated 07/03/2019-+
synapse-analytics Sql Data Warehouse Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-troubleshoot-connectivity.md
Title: Troubleshooting connectivity description: Troubleshooting connectivity in dedicated SQL pool (formerly SQL DW). -+ Last updated 03/27/2019-+
synapse-analytics Sql Data Warehouse Videos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-videos.md
Title: Videos description: Links to various video playlists for Azure Synapse Analytics. -+ Last updated 02/15/2019-+
synapse-analytics Develop Storage Files Spark Tables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-storage-files-spark-tables.md
Serverless SQL pool can automatically synchronize metadata from Apache Spark. A serverless SQL pool database will be created for each database existing in serverless Apache Spark pools.
-For each Spark external table based on Parquet and located in Azure Storage, an external table is created in a serverless SQL pool database. As such, you can shut down your Spark pools and still query Spark external tables from serverless SQL pool.
+For each Spark external table based on Parquet or CSV and located in Azure Storage, an external table is created in a serverless SQL pool database. As such, you can shut down your Spark pools and still query Spark external tables from serverless SQL pool.
When a table is partitioned in Spark, files in storage are organized by folders. Serverless SQL pool will use partition metadata and only target relevant folders and files for your query. Metadata synchronization is automatically configured for each serverless Apache Spark pool provisioned in the Azure Synapse workspace. You can start querying Spark external tables instantly.
-Each Spark parquet external table located in Azure Storage is represented with an external table in a dbo schema that corresponds to a serverless SQL pool database.
+Each Spark Parquet or CSV external table located in Azure Storage is represented with an external table in a dbo schema that corresponds to a serverless SQL pool database.
For Spark external table queries, run a query that targets an external [spark_table]. Before running the following example, make sure you have correct [access to the storage account](develop-storage-files-storage-access-control.md) where the files are located.
synapse-analytics Query Cosmos Db Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/query-cosmos-db-analytical-store.md
Database account master key is placed in server-level credential or database sco
## Sample dataset
-The examples in this article are based on data from the [European Centre for Disease Prevention and Control (ECDC) COVID-19 Cases](https://azure.microsoft.com/services/open-datasets/catalog/ecdc-covid-19-cases/) and [COVID-19 Open Research Dataset (CORD-19), doi:10.5281/zenodo.3715505](https://azure.microsoft.com/services/open-datasets/catalog/covid-19-open-research/).
+The examples in this article are based on data from the [European Centre for Disease Prevention and Control (ECDC) COVID-19 Cases](/azure/open-datasets/dataset-ecdc-covid-cases) and [COVID-19 Open Research Dataset (CORD-19), doi:10.5281/zenodo.3715505](https://azure.microsoft.com/services/open-datasets/catalog/covid-19-open-research/).
You can see the license and the structure of data on these pages. You can also download sample data for the [ECDC](https://pandemicdatalake.blob.core.windows.net/public/curated/covid-19/ecdc_cases/latest/ecdc_cases.json) and [CORD-19](https://azureopendatastorage.blob.core.windows.net/covid19temp/comm_use_subset/pdf_json/000b7d1517ceebb34e1e3e817695b6de03e2fa78.json) datasets.
While automatic schema inference capability in `OPENROWSET` provides a simple, e
The `OPENROWSET` function enables you to explicitly specify what properties you want to read from the data in the container and to specify their data types.
-Let's imagine that we've imported some data from the [ECDC COVID dataset](https://azure.microsoft.com/services/open-datasets/catalog/ecdc-covid-19-cases/) with the following structure into Azure Cosmos DB:
+Let's imagine that we've imported some data from the [ECDC COVID dataset](/azure/open-datasets/dataset-ecdc-covid-cases) with the following structure into Azure Cosmos DB:
```json
{"date_rep":"2020-08-13","cases":254,"countries_and_territories":"Serbia","geo_id":"RS"}
```
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
As a first step, you need to configure data source and specify file format of re
Data sources represent connection string information that describes where your data is placed and how to authenticate to your data source.
-One example of data source definition that references public [ECDC COVID 19 Azure Open Data Set](https://azure.microsoft.com/services/open-datasets/catalog/ecdc-covid-19-cases/) is shown in the following example:
+One example of data source definition that references public [ECDC COVID 19 Azure Open Data Set](/azure/open-datasets/dataset-ecdc-covid-cases) is shown in the following example:
```sql CREATE EXTERNAL DATA SOURCE ecdc_cases WITH (
This role-based security access control might simplify management of your securi
- To learn how to connect serverless SQL pool to Power BI Desktop and create reports, see [Connect serverless SQL pool to Power BI Desktop and create reports](tutorial-connect-power-bi-desktop.md).
- To learn how to use external tables in serverless SQL pool, see [Use external tables with Synapse SQL](develop-tables-external-tables.md?tabs=sql-pool)
virtual-machines Dcv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dcv2-series.md
Example use cases include: confidential multiparty data sharing, fraud detection
| Standard_DC8_v2 | 8 | 32 | 400 | 8 | 16000/128 | 2 | 168 |

- DCsv2-series VMs are [generation 2 VMs](./generation-2.md#creating-a-generation-2-vm) and only support `Gen2` images.
+- Currently available in the regions listed in [Azure Products by Region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines&regions=all).
- Previous generation of Confidential Compute VMs: [DC series](sizes-previous-gen.md#preview-dc-series)
- Create DCsv2 VMs using the [Azure portal](./linux/quick-create-portal.md) or [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft-azure-compute.acc-virtual-machine-v2?tab=overview)
virtual-machines Dedicated Hosts Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-hosts-portal.md
Move the VM to a dedicated host using the [portal](https://portal.azure.com).
- For more information, see the [Dedicated hosts](dedicated-hosts.md) overview.
+- There is a sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
- You can also deploy a dedicated host using the [Azure CLI](./linux/dedicated-hosts-cli.md) or [PowerShell](./windows/dedicated-hosts-powershell.md).
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-metrics.md
description: Examples of disk bursting metrics
Previously updated : 06/29/2021 Last updated : 07/19/2021
The following metrics help with observability into our [bursting](disk-bursting.
- **OS Disk Used Burst IO Credits Percentage**: The accumulated percentage of the IOPS burst used for the OS disk. Emitted on a 5 minute interval.

## Storage IO utilization metrics
-The following metrics help diagnose bottleneck in your Virtual Machine and Disk combination. These metrics are only available when using premium enabled VM. These metrics are available for all disk types except for Ultra.
+The following metrics help diagnose bottlenecks in your Virtual Machine and Disk combination. These metrics are only available with the following configuration:
+- Only available on VM series that support premium storage.
+- Not available for ultra disks; all other disk types on these VM series can use these metrics.
Metrics that help diagnose disk IO capping:
virtual-machines Disks Pools Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-deploy.md
description: Learn how to deploy an Azure disk pool.
Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools Deprovision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-deprovision.md
Title: Deprovision an Azure disk pool (preview) description: Learn how to deprovision, stop, and delete an Azure disk pool. Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools Manage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-manage.md
description: Learn how to add managed disks to an Azure disk pool or disable iSC
Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools Move Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-move-resource.md
description: Learn how to move an Azure disk pool to a different subscription.
Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools Planning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-planning.md
description: Learn how to get the most performance out of an Azure disk pool.
Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools-troubleshoot.md
description: Troubleshoot issues with Azure disk pools. Learn about common failu
Previously updated : 07/13/2021 Last updated : 07/19/2021
virtual-machines Disks Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-pools.md
description: Learn about Azure disk pools (preview).
Previously updated : 07/13/2021 Last updated : 07/19/2021
Disk pools are currently available in the following regions:
- Australia East - Canada Central
+- Central US
- East US - West US 2 - Japan East - North Europe
+- West Europe
+- UK South
## Billing
virtual-machines Disks Shared Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared-enable.md
description: Configure an Azure managed disk with shared disks so that you can s
Previously updated : 06/29/2021 Last updated : 07/19/2021
To deploy a managed disk with the shared disk feature enabled, use the new prope
> [!IMPORTANT] > The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) for the allowed values for `maxShares`.
+# [Portal](#tab/azure-portal)
+
+1. Sign in to the Azure portal.
+1. Search for and select **Disks**.
+1. Select **+ Create** to create a new disk.
+1. Fill in the details and select an appropriate region, then select **Change size**.
+
+ :::image type="content" source="media/disks-shared-enable/create-shared-disk-basics-pane.png" alt-text="Screenshot of the create a managed disk pane, change size highlighted." lightbox="media/disks-shared-enable/create-shared-disk-basics-pane.png":::
+
+1. Select the premium SSD size that you want and select **OK**.
+
+ :::image type="content" source="media/disks-shared-enable/select-premium-shared-disk.png" alt-text="Screenshot of the disk SKU, premium SSD highlighted." lightbox="media/disks-shared-enable/select-premium-shared-disk.png":::
+
+1. Proceed through the deployment until you get to the **Advanced** pane.
+1. Select **Yes** for **Enable shared disk** and select the number of **Max shares** you want.
+
+ :::image type="content" source="media/disks-shared-enable/enable-premium-shared-disk.png" alt-text="Screenshot of the Advanced pane, Enable shared disk highlighted and set to yes." lightbox="media/disks-shared-enable/enable-premium-shared-disk.png":::
+
+1. Select **Review + Create**.
++ # [Azure CLI](#tab/azure-cli) ```azurecli
Before using the following template, replace `[parameters('dataDiskName')]`, `[r
+### Deploy a standard SSD as a shared disk
+
+To deploy a managed disk with the shared disk feature enabled, use the new property `maxShares` and define a value greater than 1. This makes the disk shareable across multiple VMs.
+
+> [!IMPORTANT]
+> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) for the allowed values for `maxShares`.
+
+# [Portal](#tab/azure-portal)
+
+You cannot currently deploy a shared standard SSD via the Azure portal. Use either the Azure CLI, the Azure PowerShell module, or an Azure Resource Manager template.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az disk create -g myResourceGroup -n mySharedDisk --size-gb 1024 -l westcentralus --sku StandardSSD_LRS --max-shares 2
+```
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+$dataDiskConfig = New-AzDiskConfig -Location 'WestCentralUS' -DiskSizeGB 1024 -AccountType StandardSSD_LRS -CreateOption Empty -MaxSharesCount 2
+
+New-AzDisk -ResourceGroupName 'myResourceGroup' -DiskName 'mySharedDisk' -Disk $dataDiskConfig
+```
+
+# [Resource Manager Template](#tab/azure-resource-manager)
+
+Replace the values in this Azure Resource Manager template with your own, before using it:
+
+```rest
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "dataDiskName": {
+ "type": "string",
+ "defaultValue": "mySharedDisk"
+ },
+ "dataDiskSizeGB": {
+ "type": "int",
+ "defaultValue": 1024
+ },
+ "maxShares": {
+ "type": "int",
+ "defaultValue": 2
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Compute/disks",
+ "name": "[parameters('dataDiskName')]",
+ "location": "[resourceGroup().location]",
+ "apiVersion": "2019-07-01",
+ "sku": {
+ "name": "StandardSSD_LRS"
+ },
+ "properties": {
+ "creationData": {
+ "createOption": "Empty"
+ },
+ "diskSizeGB": "[parameters('dataDiskSizeGB')]",
+ "maxShares": "[parameters('maxShares')]"
+ }
+ }
+ ]
+}
+```
+++

### Deploy an ultra disk as a shared disk

To deploy a managed disk with the shared disk feature enabled, change the `maxShares` parameter to a value greater than 1. This makes the disk shareable across multiple VMs.

> [!IMPORTANT]
> The value of `maxShares` can only be set or changed when a disk is unmounted from all VMs. See the [Disk sizes](#disk-sizes) section for the allowed values for `maxShares`.
+# [Portal](#tab/azure-portal)
+
+1. Sign in to the Azure portal.
+1. Search for and select **Disks**.
+1. Select **+ Create** to create a new disk.
+1. Fill in the details, then select **Change size**.
+1. Select **Ultra disk** for the **Disk SKU**.
+
+ :::image type="content" source="media/disks-shared-enable/select-ultra-shared-disk.png" alt-text="Screenshot of the disk SKU, ultra disk highlighted." lightbox="media/disks-shared-enable/select-ultra-shared-disk.png":::
+
+1. Select the disk size that you want and select **OK**.
+1. Proceed through the deployment until you get to the **Advanced** pane.
+1. Select **Yes** for **Enable shared disk** and select the number of **Max shares** you want.
+1. Select **Review + Create**.
+
+ :::image type="content" source="media/disks-shared-enable/enable-ultra-shared-disk.png" alt-text="Screenshot of the Advanced pane, Enable shared disk highlighted." lightbox="media/disks-shared-enable/enable-ultra-shared-disk.png":::
# [Azure CLI](#tab/azure-cli)
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 06/29/2021 Last updated : 07/19/2021
# Share an Azure managed disk
+Azure shared disks is a feature of Azure managed disks that allows you to attach a managed disk to multiple virtual machines (VMs) simultaneously. Attaching a managed disk to multiple VMs allows you to deploy new clustered applications to Azure or migrate existing ones.
+
+## How it works
+
+VMs in the cluster can read or write to their attached disk based on the reservation chosen by the clustered application using [SCSI Persistent Reservations](https://www.t10.org/members/w_spc3.htm) (SCSI PR). SCSI PR is an industry standard used by applications running on Storage Area Network (SAN) on-premises. Enabling SCSI PR on a managed disk allows you to migrate these applications to Azure as-is.
+
+Shared managed disks offer shared block storage that can be accessed from multiple VMs; they are exposed as logical unit numbers (LUNs). LUNs are then presented to an initiator (VM) from a target (disk). These LUNs look like direct-attached storage (DAS) or a local drive to the VM.
+
+Shared managed disks do not natively offer a fully managed file system that can be accessed using SMB/NFS. You need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, that handles cluster node communication and write locking.
+
+## Limitations
++
+### Operating system requirements
+
+Shared disks support several operating systems. See the [Windows](#windows) or [Linux](#linux) sections for the supported operating systems.
+
+## Disk sizes
++
+## Sample workloads
+
+### Windows
+
+Azure shared disks are supported on Windows Server 2008 and newer. Most Windows-based clustering builds on WSFC, which handles all core infrastructure for cluster node communication, allowing your applications to take advantage of parallel access patterns. WSFC enables both CSV and non-CSV-based options depending on your version of Windows Server. For details, refer to [Create a failover cluster](/windows-server/failover-clustering/create-failover-cluster).
+
+Some popular applications running on WSFC include:
+
+- [Create an FCI with Azure shared disks (SQL Server on Azure VMs)](../azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure.md)
+ - [Migrate your failover cluster instance to SQL Server on Azure VMs with shared disks](../azure-sql/migration-guides/virtual-machines/sql-server-failover-cluster-instance-to-sql-on-azure-vm.md)
+- Scale-out File Server (SoFS) [template](https://aka.ms/azure-shared-disk-sofs-template)
+- SAP ASCS/SCS [template](https://aka.ms/azure-shared-disk-sapacs-template)
+- File Server for General Use (IW workload)
+- Remote Desktop Server User Profile Disk (RDS UPD)
+
+### Linux
+
+Azure shared disks are supported on:
+- [SUSE SLE for SAP and SUSE SLE HA 15 SP1 and above](https://www.suse.com/c/azure-shared-disks-excercise-w-sles-for-sap-or-sle-ha/)
+- [Ubuntu 18.04 and above](https://discourse.ubuntu.com/t/ubuntu-high-availability-corosync-pacemaker-shared-disk-environments/14874)
+- [RHEL developer preview on any RHEL 8 version](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/deploying_red_hat_enterprise_linux_8_on_public_cloud_platforms/index?lb_target=production#azure-configuring-shared-block-storage_configuring-rhel-high-availability-on-azure)
+- [Oracle Enterprise Linux](https://docs.oracle.com/en/operating-systems/oracle-linux/8/availability/hacluster-1.html)
+
+Linux clusters can use cluster managers such as [Pacemaker](https://wiki.clusterlabs.org/wiki/Pacemaker). Pacemaker builds on [Corosync](http://corosync.github.io/corosync/), enabling cluster communications for applications deployed in highly available environments. Some common clustered filesystems include [ocfs2](https://oss.oracle.com/projects/ocfs2/) and [gfs2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/global_file_system_2/ch-overview-gfs2). You can use SCSI Persistent Reservation (SCSI PR) and/or STONITH Block Device (SBD) based clustering models for arbitrating access to the disk. When using SCSI PR, you can manipulate reservations and registrations using utilities such as [fence_scsi](http://manpages.ubuntu.com/manpages/eoan/man8/fence_scsi.8.html) and [sg_persist](https://linux.die.net/man/8/sg_persist).
+
+## Persistent reservation flow
+
+The following diagram illustrates a sample 2-node clustered database application that uses SCSI PR to enable failover from one node to the other.
+
+![Two node cluster. An application running on the cluster is handling access to the disk](media/virtual-machines-disks-shared-disks/shared-disk-updated-two-node-cluster-diagram.png)
+
+The flow is as follows:
+
+1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the disk.
+1. The application instance on VM1 then takes exclusive reservation to write to the disk.
+1. This reservation is enforced on your Azure disk and the database can now exclusively write to the disk. Any writes from the application instance on VM2 will not succeed.
+1. If the application instance on VM1 goes down, the instance on VM2 can now initiate a database failover and take over the disk.
+1. This reservation is now enforced on the Azure disk and the disk will no longer accept writes from VM1. It will only accept writes from VM2.
+1. The clustered application can complete the database failover and serve requests from VM2.
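The failover steps above can be sketched on Linux with the `sg_persist` utility (part of sg3_utils); the device path and reservation keys below are hypothetical:

```bash
# Hypothetical sketch of the SCSI PR failover flow (device and keys are illustrative).
sg_persist --out --register --param-sark=0xABC1 /dev/sdc  # VM1 registers its key
sg_persist --out --register --param-sark=0xABC2 /dev/sdc  # VM2 registers its key (run on VM2)

# VM1 takes a Write Exclusive reservation (--prout-type=1) and becomes the writer.
sg_persist --out --reserve --param-rk=0xABC1 --prout-type=1 /dev/sdc

# On failover, VM2 preempts VM1's reservation using its own key and takes over writes.
sg_persist --out --preempt --param-rk=0xABC2 --param-sark=0xABC1 --prout-type=1 /dev/sdc

# Inspect the current reservation holder.
sg_persist --in --read-reservation /dev/sdc
```

In a real cluster these commands are driven by the cluster manager (for example, via `fence_scsi`) rather than run by hand.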
+
+The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from the disk for running parallel processes, such as training of machine learning models.
+
+![Four node VM cluster, each node registers intent to write, application takes exclusive reservation to properly handle write results](media/virtual-machines-disks-shared-disks/shared-disk-updated-machine-learning-trainer-model.png)
+
+The flow is as follows:
+
+1. The clustered application running on all VMs registers the intent to read or write to the disk.
+1. The application instance on VM1 takes an exclusive reservation to write to the disk while opening up reads to the disk from other VMs.
+1. This reservation is enforced on your Azure disk.
+1. All nodes in the cluster can now read from the disk. Only one node writes back results to the disk, on behalf of all nodes in the cluster.
+
+### Ultra disks reservation flow
+
+Ultra disks offer an additional throttle, for a total of two throttles. Because of this, the ultra disk reservation flow can work as described in the earlier section, or it can throttle and distribute performance more granularly.
++
+## Performance throttles
+
+### Premium SSD performance throttles
+
+With premium SSDs, the disk IOPS and throughput are fixed; for example, the IOPS of a P30 is 5,000. This value applies whether the disk is shared across 2 VMs or 5 VMs. The disk limits can be reached from a single VM or divided across two or more VMs.
+
+### Ultra disk performance throttles
+
+Ultra disks let you set your performance targets by exposing modifiable attributes. By default, there are only two modifiable attributes, but shared ultra disks have two additional attributes.
++
+|Attribute |Description |
+|---|---|
+|DiskIOPSReadWrite |The total number of IOPS allowed across all VMs mounting the shared disk with write access. |
+|DiskMBpsReadWrite |The total throughput (MB/s) allowed across all VMs mounting the shared disk with write access. |
+|DiskIOPSReadOnly* |The total number of IOPS allowed across all VMs mounting the shared disk as `ReadOnly`. |
+|DiskMBpsReadOnly* |The total throughput (MB/s) allowed across all VMs mounting the shared disk as `ReadOnly`. |
+
+\* Applies to shared ultra disks only
+
+The following formulas explain how the user-modifiable performance attributes can be set:
+
+- DiskIOPSReadWrite/DiskIOPSReadOnly:
+ - IOPS limits of 300 IOPS/GiB, up to a maximum of 160K IOPS per disk
+ - Minimum of 100 IOPS
+ - DiskIOPSReadWrite + DiskIOPSReadOnly is at least 2 IOPS/GiB
+- DiskMBpsReadWrite/DiskMBpsReadOnly:
+ - The throughput limit of a single disk is 256 KiB/s for each provisioned IOPS, up to a maximum of 2,000 MBps per disk
+ - The minimum guaranteed throughput per disk is 4 KiB/s for each provisioned IOPS, with an overall baseline minimum of 1 MBps
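As a rough sketch (not an official tool), the formulas above can be evaluated with shell arithmetic for a hypothetical 1,024-GiB disk provisioned with 10,000 IOPS:

```bash
# Sketch: evaluate the ultra disk formulas above for illustrative values.
size_gib=1024
iops=10000

# IOPS cap: 300 IOPS/GiB, up to 160,000 per disk; minimum of 100.
max_iops=$(( size_gib * 300 ))
if [ "$max_iops" -gt 160000 ]; then max_iops=160000; fi

# Throughput cap: 256 KiB/s per provisioned IOPS, up to 2,000 MBps per disk.
max_mbps=$(( iops * 256 / 1024 ))
if [ "$max_mbps" -gt 2000 ]; then max_mbps=2000; fi

# Guaranteed floor: 4 KiB/s per provisioned IOPS, at least 1 MBps overall.
min_mbps=$(( iops * 4 / 1024 ))
if [ "$min_mbps" -lt 1 ]; then min_mbps=1; fi

echo "max IOPS: $max_iops, max MBps: $max_mbps, min MBps: $min_mbps"
```

For these inputs, the size-based IOPS limit (307,200) is capped at 160,000, and the throughput range for 10,000 provisioned IOPS works out to roughly 39 MBps guaranteed up to the 2,000 MBps cap.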
+
+#### Examples
+
+The following examples depict a few scenarios that show how the throttling can work with shared ultra disks, specifically.
+
+##### Two-node cluster using cluster shared volumes
+
+The following is an example of a 2-node WSFC using clustered shared volumes. With this configuration, both VMs have simultaneous write-access to the disk, which results in the `ReadWrite` throttle being split across the two VMs and the `ReadOnly` throttle not being used.
++
+##### Two-node cluster without cluster shared volumes
+
+The following is an example of a 2-node WSFC that isn't using clustered shared volumes. With this configuration, only one VM has write-access to the disk. This results in the `ReadWrite` throttle being used exclusively for the primary VM and the `ReadOnly` throttle only being used by the secondary.
++
+##### Four-node Linux cluster
+
+The following is an example of a 4-node Linux cluster with a single writer and three scale-out readers. With this configuration, only one VM has write-access to the disk. This results in the `ReadWrite` throttle being used exclusively for the primary VM and the `ReadOnly` throttle being split across the secondary VMs.
++
+#### Ultra pricing
+
+Ultra shared disks are priced based on provisioned capacity, total provisioned IOPS (diskIOPSReadWrite + diskIOPSReadOnly), and total provisioned throughput in MBps (diskMBpsReadWrite + diskMBpsReadOnly). There is no extra charge for each additional VM mount. For example, an ultra shared disk with the following configuration (diskSizeGB: 1024, DiskIOPSReadWrite: 10000, DiskMBpsReadWrite: 600, DiskIOPSReadOnly: 100, DiskMBpsReadOnly: 1) is billed for 1024 GiB, 10100 IOPS, and 601 MBps, regardless of whether it is mounted to two VMs or five VMs.
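The billed totals in the pricing example above are simply the sums of the provisioned values, which can be checked with shell arithmetic:

```bash
# Pricing example from above: billed IOPS and MBps are sums of the provisioned values.
diskIOPSReadWrite=10000
diskIOPSReadOnly=100
diskMBpsReadWrite=600
diskMBpsReadOnly=1

billed_iops=$(( diskIOPSReadWrite + diskIOPSReadOnly ))
billed_mbps=$(( diskMBpsReadWrite + diskMBpsReadOnly ))
echo "billed: ${billed_iops} IOPS, ${billed_mbps} MBps"   # billed: 10100 IOPS, 601 MBps
```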
## Next steps
virtual-machines Hpc Compute Infiniband Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md
This extension installs InfiniBand OFED drivers on InfiniBand and SR-IOV-enabled ('r' sizes) [H-series](../sizes-hpc.md) and [N-series](../sizes-gpu.md) VMs running Linux. Depending on the VM family, the extension installs the appropriate drivers for the Connect-X NIC.
-Instructions on manual installation of the OFED drivers are available [here](../workloads/hpc/enable-infiniband.md#manual-installation).
+Instructions on manual installation of the OFED drivers are available in [Enable InfiniBand on HPC VMs](../workloads/hpc/enable-infiniband.md#manual-installation).
An extension is also available to install InfiniBand drivers for [Windows VMs](hpc-compute-infiniband-windows.md).
virtual-machines Hpccompute Amd Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-amd-gpu-windows.md
This article provides an overview of the VM extension to deploy AMD GPU drivers on Windows [NVv4-series](../nvv4-series.md) VMs. When you install AMD drivers using this extension, you are accepting and agreeing to the terms of the [AMD End-User License Agreement](https://amd.com/radeonsoftwarems). During the installation process, the VM may reboot to complete the driver setup.
-Instructions on manual installation of the drivers and the current supported versions are available [here](../windows/n-series-amd-driver-setup.md).
+For instructions on manual installation of the drivers and the current supported versions, see [Azure N-series AMD GPU driver setup for Windows](../windows/n-series-amd-driver-setup.md).
## Prerequisites
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
This extension installs NVIDIA GPU drivers on Linux N-series VMs. Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers using this extension, you are accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM may reboot to complete the driver setup.
-Instructions on manual installation of the drivers and the current supported versions are available [here](../linux/n-series-driver-setup.md).
+For instructions on manual installation of the drivers and the current supported versions, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md).
An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md).

## Prerequisites
virtual-machines Hpccompute Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/hpccompute-gpu-windows.md
This extension installs NVIDIA GPU drivers on Windows N-series VMs. Depending on the VM family, the extension installs CUDA or GRID drivers. When you install NVIDIA drivers using this extension, you are accepting and agreeing to the terms of the [NVIDIA End-User License Agreement](https://go.microsoft.com/fwlink/?linkid=874330). During the installation process, the VM may reboot to complete the driver setup.
-Instructions on manual installation of the drivers and the current supported versions are available [here](../windows/n-series-driver-setup.md).
+For instructions on manual installation of the drivers and the current supported versions, see [Azure N-series NVIDIA GPU driver setup for Windows](../windows/n-series-driver-setup.md).
An extension is also available to install NVIDIA GPU drivers on [Linux N-series VMs](hpccompute-gpu-linux.md).

## Prerequisites
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/iaas-antimalware-windows.md
It is not supported on the Windows Server 2008 operating system, and also is not
Windows Defender is the built-in antimalware enabled in Windows Server 2016. The Windows Defender interface is also enabled by default on some Windows Server 2016 SKUs. The Azure VM Antimalware extension can still be added to a Windows Server 2016 Azure VM with Windows Defender, but in this scenario the extension applies any optional configuration policies to be used by Windows Defender; the extension does not deploy any additional antimalware service.
-You can read more about this update [here](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
+For more information, see [Update to Azure Antimalware Extension for Cloud Services](/archive/blogs/azuresecurity/update-to-azure-antimalware-extension-for-cloud-services).
### Internet connectivity
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/extensions/update-linux-agent.md
cd WALinuxAgent-2.2.14
### 2. Install the Azure Linux Agent

For version 2.2.x, use:
-You may need to install the package `setuptools` first--see [here](https://pypi.python.org/pypi/setuptools). Then run:
+You may need to install the `setuptools` package first; see [setuptools](https://pypi.python.org/pypi/setuptools). Then run:
```bash
sudo python setup.py install
virtual-machines Cloud Init Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/cloud-init-deep-dive.md
When cloud-init is included in a generalized image, and a VM is created from tha
## Understand Cloud-Init configuration
-Configuring a VM to run on a platform, means cloud-init needs to apply multiple configurations, as an image consumer, the main configurations you will be interacting with is `User data` (customData), which supports multiple formats, these are documented [here](https://cloudinit.readthedocs.io/en/latest/topics/format.html#user-data-formats). You also have the ability to add and run scripts (/var/lib/cloud/scripts) for additional configuration, below discusses this in more detail.
+Configuring a VM to run on a platform means cloud-init needs to apply multiple configurations. As an image consumer, the main configuration you will interact with is `User data` (customData), which supports multiple formats. For more information, see the cloud-init [User-Data Formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html#user-data-formats) documentation. You can also add and run scripts (/var/lib/cloud/scripts) for additional configuration, as discussed in more detail below.
Some configurations are already baked into Azure Marketplace images that come with cloud-init, such as:
virtual-machines Dedicated Hosts Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/dedicated-hosts-cli.md
az group delete -n myDHResourceGroup
- You can also create dedicated hosts using the [Azure portal](../dedicated-hosts-portal.md).
-- There is sample template, found [here](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There is a sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
virtual-machines Image Builder Devops Task https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-devops-task.md
Use the resource group where the temporary image template artifact will be store
The location is the region where the Image Builder will run. Only a set number of [regions](../image-builder-overview.md#regions) are supported. The source images must be present in this location. For example, if you are using Shared Image Gallery, a replica must exist in that region.

### Managed Identity (Required)
-Image Builder requires a Managed Identity, which it uses to read source custom images, connect to Azure Storage, and create custom images. See [here](../image-builder-overview.md#permissions) for more details.
+Image Builder requires a Managed Identity, which it uses to read source custom images, connect to Azure Storage, and create custom images. See [Learn about Azure Image Builder](../image-builder-overview.md#permissions) for more details.
### VNET Support
The Image Template resource artifact is in the resource group specified initiall
## Next steps

For more information, see [Azure Image Builder overview](../image-builder-overview.md).
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/image-builder-json.md
The API requires a 'SourceType' that defines the source for the image build, cur
> When using existing Windows custom images, you can run the Sysprep command up to 3 times on a single Windows 7 or Windows Server 2008 R2 image, or 1001 times on a single Windows image for later versions; for more information, see the [sysprep](/windows-hardware/manufacture/desktop/sysprep--generalize--a-windows-installation#limits-on-how-many-times-you-can-run-sysprep) documentation.

### PlatformImage source
-Azure Image Builder supports Windows Server and client, and Linux Azure Marketplace images, see [here](../image-builder-overview.md#os-support) for the full list.
+Azure Image Builder supports Windows Server and client, and Linux Azure Marketplace images, see [Learn about Azure Image Builder](../image-builder-overview.md#os-support) for the full list.
```json
"source": {
az resource invoke-action \
## Next steps

There are sample .json files for different scenarios in the [Azure Image Builder GitHub](https://github.com/azure/azvmimagebuilder).
virtual-machines Tutorial Availability Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/tutorial-availability-sets.md
Advance to the next tutorial to learn about virtual machine scale sets.
> [!div class="nextstepaction"]
> [Create a virtual machine scale set](tutorial-create-vmss.md)
-* To learn more about availability zones, visit the [Availability Zones documentation](../../availability-zones/az-overview.md).
-* More documentation about both availability sets and availability zones is also available [here](../availability.md).
-* To try out availability zones, visit [Create a Linux virtual machine in an availability zone with the Azure CLI](./create-cli-availability-zone.md)
+* To learn more about availability zones, visit the [Availability Zones documentation](../../availability-zones/az-overview.md).
+* More documentation about both availability sets and availability zones is also available at [Availability options for Azure Virtual Machines](../availability.md).
+* To try out availability zones, visit [Create a Linux virtual machine in an availability zone with the Azure CLI](./create-cli-availability-zone.md)
virtual-machines Migration Classic Resource Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/migration-classic-resource-manager-overview.md
The following configurations are not currently supported.
| Compute |Boot diagnostics with Premium storage |Disable Boot Diagnostics feature for the VMs before continuing with migration. You can re-enable boot diagnostics in the Resource Manager stack after the migration is complete. Additionally, blobs that are being used for screenshot and serial logs should be deleted so you are no longer charged for those blobs. |
| Compute | Cloud services that contain more than one availability set or multiple availability sets. |This is currently not supported. Please move the Virtual Machines to the same availability set before migrating. |
| Compute | VM with Azure Security Center extension | Azure Security Center automatically installs extensions on your Virtual Machines to monitor their security and raise alerts. These extensions usually get installed automatically if the Azure Security Center policy is enabled on the subscription. To migrate the Virtual Machines, disable the security center policy on the subscription, which will remove the Security Center monitoring extension from the Virtual Machines. |
-| Compute | VM with backup or snapshot extension | These extensions are installed on a Virtual Machine configured with the Azure Backup service. While the migration of these VMs is not supported, follow the guidance [here](/azure/virtual-machines/migration-classic-resource-manager-faq#i-backed-up-my-classic-vms-in-a-vault-can-i-migrate-my-vms-from-classic-mode-to-resource-manager-mode-and-protect-them-in-a-recovery-services-vault) to keep backups that were taken prior to migration. |
+| Compute | VM with backup or snapshot extension | These extensions are installed on a Virtual Machine configured with the Azure Backup service. While the migration of these VMs is not supported, follow the guidance in [Frequently asked questions about classic to Azure Resource Manager migration](/azure/virtual-machines/migration-classic-resource-manager-faq#i-backed-up-my-classic-vms-in-a-vault-can-i-migrate-my-vms-from-classic-mode-to-resource-manager-mode-and-protect-them-in-a-recovery-services-vault) to keep backups that were taken prior to migration. |
| Compute | VM with Azure Site Recovery extension | These extensions are installed on a Virtual Machine configured with the Azure Site Recovery service. While the migration of storage used with Site Recovery will work, current replication will be impacted. You need to disable and enable VM replication after storage migration. |
| Network |Virtual networks that contain virtual machines and web/worker roles |This is currently not supported. Please move the Web/Worker roles to their own Virtual Network before migrating. Once the classic Virtual Network is migrated, the migrated Azure Resource Manager Virtual Network can be peered with the classic Virtual Network to achieve similar configuration as before.|
| Network | Classic Express Route circuits |This is currently not supported. These circuits need to be migrated to Azure Resource Manager before beginning IaaS migration. To learn more, see [Moving ExpressRoute circuits from the classic to the Resource Manager deployment model](../expressroute/expressroute-move.md).|
The following configurations are not currently supported.
* [Use CLI to migrate IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-cli.md)
* [Community tools for assisting with migration of IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-community-tools.md)
* [Review most common migration errors](migration-classic-resource-manager-errors.md)
-* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
+* [Review the most frequently asked questions about migrating IaaS resources from classic to Azure Resource Manager](migration-classic-resource-manager-faq.yml)
virtual-machines Premium Storage Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/premium-storage-performance.md
Table below summarizes the cost breakdown of this scenario for Standard and Prem
*Linux Distros*
-With Azure Premium Storage, you get the same level of Performance for VMs running Windows and Linux. We support many flavors of Linux distros, and you can see the complete list [here](linux/endorsed-distros.md). It is important to note that different distros are better suited for different types of workloads. You will see different levels of performance depending on the distro your workload is running on. Test the Linux distros with your application and choose the one that works best.
+With Azure Premium Storage, you get the same level of performance for VMs running Windows and Linux. We support many flavors of Linux distros. For more information, see [Linux distributions endorsed on Azure](linux/endorsed-distros.md). It is important to note that different distros are better suited for different types of workloads. You will see different levels of performance depending on the distro your workload is running on. Test the Linux distros with your application and choose the one that works best.
When running Linux with Premium Storage, check the latest updates about required drivers to ensure high performance.
Learn more about the available disk types:
For SQL Server users, read articles on Performance Best Practices for SQL Server:

* [Performance Best Practices for SQL Server in Azure Virtual Machines](../azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md)
-* [Azure Premium Storage provides highest performance for SQL Server in Azure VM](https://cloudblogs.microsoft.com/sqlserver/2015/04/23/azure-premium-storage-provides-highest-performance-for-sql-server-in-azure-vm/)
+* [Azure Premium Storage provides highest performance for SQL Server in Azure VM](https://cloudblogs.microsoft.com/sqlserver/2015/04/23/azure-premium-storage-provides-highest-performance-for-sql-server-in-azure-vm/)
virtual-machines Shared Image Galleries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/shared-image-galleries.md
We always recommend you to overprovision the number of replicas due to factors l
[Azure Zone Redundant Storage (ZRS)](https://azure.microsoft.com/blog/azure-zone-redundant-storage-in-public-preview/) provides resilience against an Availability Zone failure in the region. With the general availability of Shared Image Gallery, you can choose to store your images in ZRS accounts in regions with Availability Zones.
-You can also choose the account type for each of the target regions. The default storage account type is Standard_LRS, but you can choose Standard_ZRS for regions with Availability Zones. Check the regional availability of ZRS [here](../storage/common/storage-redundancy.md).
+You can also choose the account type for each of the target regions. The default storage account type is Standard_LRS, but you can choose Standard_ZRS for regions with Availability Zones. For more information on regional availability of ZRS, see [Data redundancy](../storage/common/storage-redundancy.md).
![Graphic showing ZRS](./media/shared-image-galleries/zrs.png)
virtual-machines Dedicated Hosts Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/dedicated-hosts-powershell.md
Remove-AzResourceGroup -Name $rgName
## Next steps -- There is sample template, found [here](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
+- There is a sample template, available at [Azure quickstart templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md), that uses both zones and fault domains for maximum resiliency in a region.
- You can also deploy dedicated hosts using the [Azure portal](../dedicated-hosts-portal.md).
virtual-machines Image Builder Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/image-builder-virtual-desktop.md
This article is intended to be a copy and paste exercise.
## Prerequisites
-You must have the latest Azure PowerShell CmdLets installed, see [here](/powershell/azure/overview) for install details.
+You must have the latest Azure PowerShell cmdlets installed. See [Overview of Azure PowerShell](/powershell/azure/overview) for install details.
```PowerShell
# check you are registered for the providers, ensure RegistrationState is set to 'Registered'.
$getStatus.LastRunStatusMessage
$getStatus.LastRunStatusRunSubState ``` ## Create a VM
-Now the build is finished you can build a VM from the image, use the examples from [here](/powershell/module/az.compute/new-azvm#examples).
+Now that the build is finished, you can create a VM from the image. Use the examples from [New-AzVM (Az.Compute)](/powershell/module/az.compute/new-azvm#examples).
## Clean up
Remove-AzResourceGroup $imageResourceGroup -Force
## Next steps
-You can try more examples [on GitHub](https://github.com/azure/azvmimagebuilder/tree/master/quickquickstarts).
+You can try more examples [on GitHub](https://github.com/azure/azvmimagebuilder/tree/master/quickquickstarts).
+
virtual-machines N Series Amd Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/windows/n-series-amd-driver-setup.md
Previous supported driver version for Windows builds up to 1909 is [20.Q4](https
1. Connect by Remote Desktop to each NVv4-series VM.
-2. If you need to uninstall the previous driver version then download the AMD cleanup utility [here](https://download.microsoft.com/download/4/f/1/4f19b714-9304-410f-9c64-826404e07857/AMDCleanupUtilityni.exe) Please do not use the utility that comes with the previous version of the driver.
+2. If you need to uninstall the previous driver version, download the [AMD cleanup utility](https://download.microsoft.com/download/4/f/1/4f19b714-9304-410f-9c64-826404e07857/AMDCleanupUtilityni.exe). Do not use the utility that comes with the previous version of the driver.
3. Download and install the latest driver.
virtual-machines Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/configure.md
The base Ubuntu Server 16.04 LTS, 18.04 LTS, and 20.04 LTS VM images in the Mark
- Scripts used in the creation of the Ubuntu 18.04 and 20.04 LTS based HPC VM images from a base Ubuntu Marketplace image are on the [azhpc-images repo](https://github.com/Azure/azhpc-images/tree/master/ubuntu). > [!NOTE]
-> Mellanox OFED 5.1 and above do not support ConnectX3-Pro InfiniBand cards on SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3). Please use LTS Mellanox OFED version 4.9-0.1.7.0 or older on the N-series VM's with ConnectX3-Pro cards. Please see more details [here](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed).
+> Mellanox OFED 5.1 and above do not support ConnectX3-Pro InfiniBand cards on SR-IOV enabled N-series VM sizes with FDR InfiniBand (e.g. NCv3). Please use LTS Mellanox OFED version 4.9-0.1.7.0 or older on the N-series VM's with ConnectX3-Pro cards. For more information, see [Linux InfiniBand Drivers](https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed).
### SUSE Linux Enterprise Server VM images SLES 12 SP3 for HPC, SLES 12 SP3 for HPC (Premium), SLES 12 SP1 for HPC, SLES 12 SP1 for HPC (Premium), SLES 12 SP4 and SLES 15 VM images in the Marketplace are supported. These VM images come pre-loaded with the Network Direct drivers for RDMA (on the non-SR-IOV VM sizes) and Intel MPI version 5.1. Learn more about [setting up MPI](setup-mpi.md) on the VMs.
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/hb-hc-known-issues.md
More details on this are available on this [TechCommunity article](https://techc
## InfiniBand driver installation on non-SR-IOV VMs
-Currently H16r, H16mr and NC24r are not SR-IOV enabled. Some details on the InfiniBand stack bifurcation are [here](../../sizes-hpc.md#rdma-capable-instances).
+Currently H16r, H16mr and NC24r are not SR-IOV enabled. For more information on the InfiniBand stack bifurcation, see [Azure VM sizes - HPC](../../sizes-hpc.md#rdma-capable-instances).
InfiniBand can be configured on the SR-IOV enabled VM sizes with the OFED drivers while the non-SR-IOV VM sizes require ND drivers. This IB support is available appropriately for [CentOS, RHEL, and Ubuntu](configure.md). ## Duplicate MAC with cloud-init with Ubuntu on H-series and N-series VMs
virtual-machines Hbv2 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/hbv2-series-overview.md
Process pinning will work on HBv2-series VMs because we expose the underlying si
| Orchestrator Support | CycleCloud, Batch, AKS; [cluster configuration options](../../sizes-hpc.md#cluster-configuration-options) | > [!NOTE]
-> Windows Server 2012 R2 is not supported on HBv2 and other VMs with more than 64 (virtual or physical) cores. See [here](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) for more details.
+> Windows Server 2012 R2 is not supported on HBv2 and other VMs with more than 64 (virtual or physical) cores. See [Supported Windows guest operating systems for Hyper-V on Windows Server](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) for more details.
## Next steps - Learn more about [AMD EPYC architecture](https://bit.ly/2Epv3kC) and [multi-chip architectures](https://bit.ly/2GpQIMb). For more detailed information, see the [HPC Tuning Guide for AMD EPYC Processors](https://bit.ly/2T3AWZ9). - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).-- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/hbv3-series-overview.md
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 G
| Orchestrator Support | Azure CycleCloud, Azure Batch, AKS; [cluster configuration options](../../sizes-hpc.md#cluster-configuration-options) | > [!NOTE]
-> Windows Server 2012 R2 is not supported on HBv3 and other VMs with more than 64 (virtual or physical) cores. See [here](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) for more details.
+> Windows Server 2012 R2 is not supported on HBv3 and other VMs with more than 64 (virtual or physical) cores. See [Supported Windows guest operating systems for Hyper-V on Windows Server](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) for more details.
## Next steps - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).-- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
+- For a higher level architectural view of running HPC workloads, see [High Performance Computing (HPC) on Azure](/azure/architecture/topics/high-performance-computing/).
virtual-machines Setup Mpi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/hpc/setup-mpi.md
make -j 8 && make install
``` > [!NOTE]
-> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. More details [here](hb-hc-known-issues.md#accelerated-networking-on-hb-hc-hbv2-and-ndv2) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
+> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md#accelerated-networking-on-hb-hc-hbv2-and-ndv2) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM.
## HPC-X
virtual-machines Oracle Database Backup Strategies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-database-backup-strategies.md
Azure File shares can also be protected through Azure Backup to Recovery service
#### Azure Files NFS v4.1 (Preview)
-Azure file shares can be mounted in Linux distributions using the Network File System (NFS) v4.1 protocol. While in Preview there are a number of limitations to supported features, which are documented [here](../../../storage/files/storage-files-how-to-mount-nfs-shares.md).
+Azure file shares can be mounted in Linux distributions using the Network File System (NFS) v4.1 protocol. While in Preview, there are a number of limitations to supported features. For more information, see [Mount an Azure NFS file share (preview)](../../../storage/files/storage-files-how-to-mount-nfs-shares.md).
While in preview, Azure Files NFS v4.1 is also restricted to the following [regions](../../../storage/files/storage-files-how-to-mount-nfs-shares.md): - East US (LRS and ZRS)
Azure file shares can be mounted in Linux distributions using the SMB kernel cli
Azure Files SMB is generally available in all Azure regions, and shows the same performance characteristics as NFS v3.0 and v4.1 protocols, and so is currently the recommended method to provide backup storage media to Azure Linux VMs.
-There are two supported versions of SMB available, SMB 2.1 and SMB 3.0, with the latter recommended as it supports encryption in transit. However, different Linux kernels versions have differing support for SMB 2.1 and 3.0 and you should check the table [here](../../../storage/files/storage-how-to-use-files-linux.md) to ensure your application supports SMB 3.0.
+There are two supported versions of SMB available, SMB 2.1 and SMB 3.0, with the latter recommended as it supports encryption in transit. However, different Linux kernels versions have differing support for SMB 2.1 and 3.0. For more information, see [Mount SMB Azure file share on Linux](../../../storage/files/storage-how-to-use-files-linux.md) to ensure your application supports SMB 3.0.
Because Azure Files is designed to be a multi-user file share service, there are certain characteristics you should tune to make it more suitable as a backup storage media. Turning off caching and setting the user and group IDs for files created are recommended.
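The caching and ownership tuning described above can be expressed as mount options. The following is an illustrative `/etc/fstab` entry for an SMB 3.0 share used as backup media; the storage account (`mystorageacct`), share name (`orabackup`), mount point, and the `oracle`/`oinstall` owner are hypothetical placeholders, not values from this article.

```
# Illustrative /etc/fstab entry: SMB 3.0 (encryption in transit), client-side
# caching disabled, files created owned by the oracle user and oinstall group.
//mystorageacct.file.core.windows.net/orabackup  /mnt/orabackup  cifs  vers=3.0,cache=none,uid=oracle,gid=oinstall,credentials=/etc/smbcredentials/mystorageacct.cred  0  0
```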
Azure Backup is now providing an [enhanced pre-script and post-script framework]
- [Create Oracle Database quickstart](oracle-database-quick-create.md) - [Back up Oracle Database to Azure Files](oracle-database-backup-azure-storage.md) - [Back up Oracle Database using Azure Backup service](oracle-database-backup-azure-backup.md)--
virtual-machines Oracle Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-design.md
Based on your network bandwidth requirements, there are various gateway types fo
- Use Virtual Machines with [Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md) for better network performance. - For certain Linux distributions, consider enabling [TRIM/UNMAP support](/previous-versions/azure/virtual-machines/linux/configure-lvm#trimunmap-support). - Install [Oracle Enterprise Manager](https://www.oracle.com/technetwork/oem/enterprise-manager/overview/https://docsupdatetracker.net/index.html) on a separate Virtual Machine.-- Huge pages are not enabled on linux by default. Consider enabling huge pages and set `use_large_pages = ONLY` on the Oracle DB. This may help increase performance. More information can be found [here](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/USE_LARGE_PAGES.html#GUID-1B0F4D27-8222-439E-A01D-E50758C88390).
+- Huge pages are not enabled on Linux by default. Consider enabling huge pages and setting `use_large_pages = ONLY` on the Oracle DB. This may help increase performance. For more information, see [USE_LARGE_PAGES](https://docs.oracle.com/en/database/oracle/oracle-database/12.2/refrn/USE_LARGE_PAGES.html#GUID-1B0F4D27-8222-439E-A01D-E50758C88390).
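As a hedged sketch of the huge-pages recommendation above: reserve huge pages at the OS level, then set the database parameter. The page count shown is an assumed example and must be sized to fit the SGA; it is not a value from this article.

```
# /etc/sysctl.conf (illustrative sizing only): reserve 2048 x 2 MiB pages = 4 GiB,
# which must be at least as large as the Oracle SGA. Apply with: sysctl -p
vm.nr_hugepages = 2048

# Then, from SQL*Plus as SYSDBA, require the SGA to use huge pages exclusively:
#   ALTER SYSTEM SET use_large_pages = ONLY SCOPE = SPFILE;
# and restart the instance.
```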
### Disk types and configurations
virtual-machines Oracle Reference Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-reference-architecture.md
Patching your virtual machine operating system can be automated using [Azure Aut
## Architecture and design considerations - Consider using hyperthreaded [memory optimized virtual machine](../../sizes-memory.md) with [constrained core vCPUs](../../../virtual-machines/constrained-vcpu.md) for your Oracle Database VM to save on licensing costs and maximize performance. Use multiple premium or ultra disks (managed disks) for performance and availability.-- When using managed disks, the disk/device name may change on reboots. It's recommended that you use the device UUID instead of the name to ensure your mounts persist across reboots. More information can be found [here](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
+- When using managed disks, the disk/device name may change on reboots. It's recommended that you use the device UUID instead of the name to ensure your mounts persist across reboots. For more information, see [Configure software RAID on a Linux VM](/previous-versions/azure/virtual-machines/linux/configure-raid#add-the-new-file-system-to-etcfstab).
- Use availability zones to achieve high availability in-region. - Consider using ultra disks (when available) or premium disks for your Oracle database. - Consider setting up a standby Oracle database in another Azure region using Oracle Data Guard.
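The device-UUID guidance in the list above can be illustrated with an `/etc/fstab` entry. The UUID, device name, and mount point below are hypothetical; query the real UUID with `blkid`.

```
# Query the filesystem UUID of the device (example device name):
#   sudo blkid /dev/md0
# Illustrative /etc/fstab entry; 'nofail' lets the VM boot even if the disk is absent.
UUID=0e246f45-0000-0000-0000-000000000000  /oradata  xfs  defaults,nofail  0  2
```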
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
These capabilities are possible because Azure NetApp Files is based on NetApp®
## Licensing Oracle Database & software on Azure
-Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table is not applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled (as stated in the policy document). The policy details can be found [here](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
+Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table is not applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license (as stated in the policy document). The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf).
Oracle databases generally require higher memory and IO. For this reason, [Memory Optimized VMs](../../sizes-memory.md) are recommended for these workloads. To optimize your workloads further, [Constrained Core vCPUs](../../constrained-vcpu.md) are recommended for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When migrating Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in the [Oracle on Azure FAQ](https://www.oracle.com/cloud/technologies/oracle-azure-faq.html)
virtual-machines Oracle Weblogic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/oracle/oracle-weblogic.md
There are four offers available to meet different scenarios: [single node withou
_These offers are Bring-Your-Own-License_. They assume you've already got the appropriate licenses with Oracle and are properly licensed to run offers in Azure.
-The offers support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and JDK 11 on Oracle Linux 7.6). These base images are also available on Azure on their own. The base images are suitable for customers that require complex, customized Azure deployments. The current set of base images is available [here](https://azuremarketplace.microsoft.com/marketplace/apps?search=WebLogic%20Server%20Base%20Image&page=1).
+The offers support a range of operating system, Java, and WLS versions through base images (such as WebLogic Server 14 and JDK 11 on Oracle Linux 7.6). These base images are also available on Azure on their own. The base images are suitable for customers that require complex, customized Azure deployments. The current set of base images is available in the [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=WebLogic%20Server%20Base%20Image&page=1).
_If you're interested in working closely on your migration scenarios with the engineering team developing these offers, select the [CONTACT ME](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview) button_ on the [marketplace offer overview page](https://azuremarketplace.microsoft.com/marketplace/apps/oracle.oraclelinux-wls-cluster?tab=Overview). Program managers, architects, and engineers will reach back out to you shortly and start close collaboration. The opportunity to collaborate on a migration scenario is free while the offers are under active development.
virtual-machines Redhat Imagelist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/redhat-imagelist.md
RHEL-SAP | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP HANA and Busi
| | 7.6 | LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee. | | 76sap-gen2| LVM | Linux Agent | RHEL 7.6 for SAP HANA and Business Apps. Generation 2 image. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee. | | 7.7 | LVM | Linux Agent | RHEL 7.7 for SAP HANA and Business Apps. Attached to E4S repositories, will charge a premium for SAP and RHEL as well as the base compute fee.
-RHEL-SAP-HANA (To be removed in November 2020) | 6.7 | RAW | Linux Agent | RHEL 6.7 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available [here](https://access.redhat.com/articles/3751271).
-| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available [here](https://access.redhat.com/articles/3751271).
-| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available [here](https://access.redhat.com/articles/3751271).
+RHEL-SAP-HANA (To be removed in November 2020) | 6.7 | RAW | Linux Agent | RHEL 6.7 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
+| | 7.2 | LVM | Linux Agent | RHEL 7.2 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
+| | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP HANA. Outdated in favor of the RHEL-SAP images. This image will be removed in November 2020. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
RHEL-SAP-APPS | 6.8 | RAW | Linux Agent | RHEL 6.8 for SAP Business Applications. Outdated in favor of the RHEL-SAP images. | | 7.3 | LVM | Linux Agent | RHEL 7.3 for SAP Business Applications. Outdated in favor of the RHEL-SAP images. | | 7.4 | LVM | Linux Agent | RHEL 7.4 for SAP Business Applications.
rhel-byos |rhel-lvm74| LVM | Linux Agent | RHEL 7.4 BYOS images, not atta
| |rhel-lvm82-gen2 | LVM | Linux Agent | RHEL 8.2 Generation 2 BYOS images , not attached to any source of updates, will not charge a RHEL premium. > [!NOTE]
-> The RHEL-SAP-HANA product offering is considered end of life by Red Hat. Existing deployments will continue to work normally, but Red Hat recommends that customers migrate from the RHEL-SAP-HANA images to the RHEL-SAP-HA images which includes the SAP HANA repositories as well as the HA add-on. More details about Red Hat's SAP cloud offerings are available [here](https://access.redhat.com/articles/3751271).
+> The RHEL-SAP-HANA product offering is considered end of life by Red Hat. Existing deployments will continue to work normally, but Red Hat recommends that customers migrate from the RHEL-SAP-HANA images to the RHEL-SAP-HA images which includes the SAP HANA repositories as well as the HA add-on. More details about Red Hat's SAP cloud offerings are available at [SAP offerings on certified cloud providers](https://access.redhat.com/articles/3751271).
## Next steps * Learn more about the [Red Hat images in Azure](./redhat-images.md).
virtual-machines Redhat Images https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/redhat-images.md
For RHEL 7.x images, there are a few different image types. The following table
## RHEL 8 image types >[!NOTE]
-> Red Hat recommends using Grubby to configure kernel command line parameters in RHEL 8+. More details are available [here](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel).
+> Red Hat recommends using Grubby to configure kernel command line parameters in RHEL 8+. For more information, see [Chapter 5. Configuring kernel command-line parameters Red Hat Enterprise Linux 8](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/configuring-kernel-command-line-parameters_managing-monitoring-and-updating-the-kernel).
Details for RHEL 8 image types are below.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Red Hat Enterprise Linux (RHEL) Pay-As-You-Go (PAYG) images come preconfigured to access Azure RHUI. No additional configuration is needed. To get the latest updates, run `sudo yum update` after your RHEL instance is ready. This service is included as part of the RHEL PAYG software fees.
-Additional information on RHEL images in Azure, including publishing and retention policies, is available [here](./redhat-images.md).
+Additional information on RHEL images in Azure, including publishing and retention policies, is available in [Overview of Red Hat Enterprise Linux images in Azure](./redhat-images.md).
Information on Red Hat support policies for all versions of RHEL can be found on the [Red Hat Enterprise Linux Life Cycle](https://access.redhat.com/support/policy/updates/errata) page.
RedHat:RHEL:7.6:7.6.2019062116
Extended Update Support (EUS) repositories are available to customers who may want to lock their RHEL VMs to a certain RHEL minor release after provisioning the VM. You can version-lock your RHEL VM to a specific minor version by updating the repositories to point to the Extended Update Support repositories. You can also undo the EUS version-locking operation. >[!NOTE]
-> EUS is not supported on RHEL Extras. This means that if you are installing a package that is usually available from the RHEL Extras channel, you will not be able to do so while on EUS. The Red Hat Extras Product Life Cycle is detailed [here](https://access.redhat.com/support/policy/updates/extras/).
+> EUS is not supported on RHEL Extras. This means that if you are installing a package that is usually available from the RHEL Extras channel, you will not be able to do so while on EUS. The Red Hat Extras Product Life Cycle is detailed on the [Red Hat Enterprise Linux Extras Product Life Cycle - Red Hat Customer Portal](https://access.redhat.com/support/policy/updates/extras/) page.
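The version-locking operation described above amounts to pointing yum at the EUS repositories and pinning `$releasever`. A sketch, assuming a RHEL 7.x PAYG VM already configured for Azure RHUI; the release number is an example, not a recommendation:

```
# Pin $releasever so yum stays on the chosen minor release (example: 7.4)
sudo sh -c 'echo 7.4 > /etc/yum/vars/releasever'
# Then update against the EUS repositories
sudo yum update
```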
At the time of this writing, EUS support has ended for RHEL <= 7.4. See the "Red Hat Enterprise Linux Extended Maintenance" section in the [Red Hat documentation](https://access.redhat.com/support/policy/updates/errata/#Long_Support) for more details. * RHEL 7.4 EUS support ends August 31, 2019
If you're using a network configuration to further restrict access from RHEL PAY
>The new Azure US Government images,as of January 2020, will be using Public IP mentioned under Azure Global header above. >[!NOTE]
->Also, note that Azure Germany is deprecated in favor of public Germany regions. Recommendation for Azure Germany customers is to start pointing to public RHUI using the steps [here](#manual-update-procedure-to-use-the-azure-rhui-servers).
+>Also, note that Azure Germany is deprecated in favor of public Germany regions. The recommendation for Azure Germany customers is to start pointing to the public RHUI using the steps on the [Red Hat Update Infrastructure](#manual-update-procedure-to-use-the-azure-rhui-servers) page.
## Azure RHUI Infrastructure
virtual-machines Dbms_Guide_Sapase https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/dbms_guide_sapase.md
The HADR Users Guide details the setup and configuration of a 2 node SAP ASE "
> The only supported configuration on Azure is using Fault Manager without Floating IP. The Floating IP Address method will not work on Azure. ### Third node for disaster recovery
-Beyond using SAP ASE Always-On for local high availability, you might want to extend the configuration to an asynchronously replicated node in another Azure region. Documentation for such a scenario can be found [here](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/installation-procedure-for-sybase-16-3-patch-level-3-always-on/ba-p/368199).
+Beyond using SAP ASE Always-On for local high availability, you might want to extend the configuration to an asynchronously replicated node in another Azure region. For more information, see [Installation Procedure for Sybase 16. 3 Patch Level 3 Always-on + DR on Suse 12.3](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/installation-procedure-for-sybase-16-3-patch-level-3-always-on/ba-p/368199).
## SAP ASE database encryption & SSL SAP Software provisioning Manager (SWPM) is giving an option to encrypt the database during installation. If you want to use encryption, it is recommended to use SAP Full Database Encryption. See details documented in:
If you deployed the VM in a Cloud-Only scenario without cross-premises connectiv
> >
-More details related to the DNS name can be found [here][virtual-machines-azurerm-versus-azuresm].
Setting the SAP profile parameter icm/host_name_full to the DNS name of the Azure VM the link might look similar to:
A Monthly newsletter is published through [SAP support note #2381575](https://la
## Next steps
-Check the article [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md)
+Check the article [SAP workloads on Azure: planning and deployment checklist](./sap-deployment-checklist.md)
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
Looking into the documentation, you can see that the functionality with the more
### Azure Backup for SQL Server VMs This new method of SQL Server backups is offered as of June 2018 as public preview by Azure Backup services. The method to backup SQL Server is the same as other third-party tools are using, namely the SQL Server VSS/VDI interface to stream backups to a target location. In this case, the target location is Azure Recovery Service vault.
-A more than detailed description of this backup method, which adds numerous advantages of central backup configurations, monitoring, and administration is available [here](../../../backup/backup-azure-sql-database.md).
+A detailed description of this backup method, which adds numerous advantages of central backup configuration, monitoring, and administration, is available on the [Back up SQL Server databases to Azure](../../../backup/backup-azure-sql-database.md) page.
### Third-party backup solutions
There are many recommendations in this guide and we recommend you read it more t
## Next steps Read the article -- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md)
+- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md)
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-vm-operations.md
Deploy the VMs in Azure by using:
- Azure PowerShell cmdlets. - The Azure CLI.
-You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md) or with the automation released [here](https://github.com/AzureCAT-GSI/SAP-HANA-ARM).
+You also can deploy a complete installed SAP HANA platform on the Azure VM services through the [SAP Cloud platform](https://cal.sap.com/). The installation process is described in [Deploy SAP S/4HANA or BW/4HANA on Azure](./cal-s4h.md) or with the automation released on [GitHub](https://github.com/AzureCAT-GSI/SAP-HANA-ARM).
>[!IMPORTANT] > In order to use M208xx_v2 VMs, you need to be careful selecting your
instance is running. Initially two VM types can be used to run SAP HANA DT 2.0:
- M64-32ms - E32sv3
-See VM type description [here](../../sizes-memory.md)
+For a description of these VM types, see [Azure VM sizes - Memory](../../sizes-memory.md).
Given the basic idea of DT 2.0, which is about offloading "warm" data in order to save costs it makes sense to use corresponding VM sizes. There is no strict rule though regarding the possible combinations. It depends on the specific customer workload.
All combinations of SAP HANA-certified M-series VMs with supported DT 2.0 VMs (M
Installing DT 2.0 on a dedicated VM requires network throughput between the DT 2.0 VM and the SAP HANA VM of 10 Gb minimum. Therefore it's mandatory to place all VMs within the same Azure Vnet and enable Azure accelerated networking.
-See additional information about Azure accelerated networking [here](../../../virtual-network/create-vm-accelerated-networking-cli.md)
+For additional information about Azure accelerated networking, see [Create an Azure VM with Accelerated Networking using Azure CLI](../../../virtual-network/create-vm-accelerated-networking-cli.md).
### VM Storage for SAP HANA DT 2.0
Azure VM types, which are supported for DT 2.0 the maximum disk IO throughput li
It is required to attach multiple Azure disks to the DT 2.0 VM and create a software raid (striping) on OS level to achieve the max limit of disk throughput per VM. A single Azure disk cannot provide the throughput to reach the max VM limit in this regard. Azure Premium storage is mandatory to run DT 2.0. -- Details about available Azure disk types can be found [here](../../disks-types.md)-- Details about creating software raid via mdadm can be found [here](/previous-versions/azure/virtual-machines/linux/configure-raid)-- Details about configuring LVM to create a striped volume for max throughput can be found [here](/previous-versions/azure/virtual-machines/linux/configure-lvm)
+- Details about available Azure disk types can be found on the [Select a disk type for Azure IaaS VMs - managed disks](../../disks-types.md) page
+- Details about creating software raid via mdadm can be found on the [Configure software RAID on a Linux VM](/previous-versions/azure/virtual-machines/linux/configure-raid) page
+- Details about configuring LVM to create a striped volume for max throughput can be found on the [Configure LVM on a virtual machine running Linux](/previous-versions/azure/virtual-machines/linux/configure-lvm) page
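The striping approach described above can be sketched roughly as follows; this is a minimal LVM example assuming four attached Premium data disks visible as /dev/sdc through /dev/sdf (device names and stripe size vary by deployment — follow the linked guides for authoritative settings):

```shell
# Stripe four attached Premium SSD data disks into one LVM volume so the
# combined throughput can approach the VM-level limit (device names are
# placeholders; verify with lsblk before running).
sudo pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo vgcreate vg_hana_data /dev/sdc /dev/sdd /dev/sde /dev/sdf
sudo lvcreate --stripes 4 --stripesize 256 --extents 100%FREE \
  --name lv_hana_data vg_hana_data
sudo mkfs.xfs /dev/vg_hana_data/lv_hana_data
```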
Depending on size requirements, there are different options to reach the max throughput of a VM. Here are possible data volume disk configurations for every DT 2.0 VM type to achieve the upper VM throughput limit. The E32sv3 VM should be considered as an entry level for smaller workloads. In case it
Regarding the size of the log volume a recommended starting point is a heuristic
Azure disk types depending on cost and throughput requirements. For the log volume, high I/O throughput is required. In case of using the VM type M64-32ms it is mandatory to enable [Write Accelerator](../../how-to-enable-write-accelerator.md). Azure Write Accelerator provides optimal disk write latency for the transaction log (only available for M-series). There are some items to consider though like the maximum number of disks per VM type. Details about Write Accelerator can be
-found [here](../../how-to-enable-write-accelerator.md)
+found on the [Azure Write Accelerator](../../how-to-enable-write-accelerator.md) page
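As a hedged sketch of enabling Write Accelerator on an M-series VM (hypothetical resource names; the LUN must be the data disk carrying the transaction log):

```shell
# Enable Azure Write Accelerator on the data disk at LUN 0 of an M-series VM
# hosting the HANA transaction log (resource names are placeholders).
az vm update \
  --resource-group myResourceGroup \
  --name myHanaVM \
  --write-accelerator 0=true
```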
Here are a few examples about sizing the log volume:
Get familiar with the articles as listed
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md)
- [Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on Red Hat Enterprise Linux](./sap-hana-scale-out-standby-netapp-files-rhel.md)
- [High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server](./sap-hana-high-availability.md)
-- [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](./sap-hana-high-availability-rhel.md)
+- [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](./sap-hana-high-availability-rhel.md)
virtual-machines High Availability Guide Rhel Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid.md
This documentation assumes that:
Update file `/etc/fstab` with the file systems for the additional SAP systems that you are deploying to the cluster.
- * If using Azure NetApp Files, follow the instructions [here](./high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation)
- * If using GlusterFS cluster, follow the instructions [here](./high-availability-guide-rhel.md#prepare-for-sap-netweaver-installation)
+ * If using Azure NetApp Files, follow the instructions on the [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation) page
+ * If using GlusterFS cluster, follow the instructions on the [Azure VMs high availability for SAP NW on RHEL](./high-availability-guide-rhel.md#prepare-for-sap-netweaver-installation) page
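The `/etc/fstab` update above could look like the following; every IP, volume path, mount point, and option here is a hypothetical placeholder (the linked guides give the authoritative mount options for each file share type):

```
# Hypothetical entries for an additional SAP system (SID NW2) using NFS volumes.
# Addresses, export paths, and mount options are placeholders only.
10.1.0.4:/sapNW2ascs  /usr/sap/NW2/ASCS10  nfs  rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp  0 0
10.1.0.4:/sapNW2ers   /usr/sap/NW2/ERS12   nfs  rw,hard,rsize=65536,wsize=65536,vers=4.1,tcp  0 0
```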
### Install ASCS / ERS
The tests that are presented are in a two node, multi-SID cluster with three SAP
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo subscription-manager attach --pool=&lt;pool id&gt; </code></pre>
- By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To mitigate this, Azure now provides BYOS RHEL images. More information is available [here](../redhat/byos.md).
+ By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To mitigate this, Azure now provides BYOS RHEL images. For more information, see [Red Hat Enterprise Linux bring-your-own-subscription Azure images](../redhat/byos.md).
1. **[A]** Enable RHEL for SAP repos. This step is not required, if using RHEL SAP HA-enabled images.
virtual-machines High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid.md
This documentation assumes that:
Update file `/etc/auto.direct` with the file systems for the additional SAP systems that you are deploying to the cluster.
- * If using NFS file server, follow the instructions [here](./high-availability-guide-suse.md#prepare-for-sap-netweaver-installation)
- * If using Azure NetApp Files, follow the instructions [here](./high-availability-guide-suse-netapp-files.md#prepare-for-sap-netweaver-installation)
+ * If using NFS file server, follow the instructions on the [Azure VMs high availability for SAP NetWeaver on SLES](./high-availability-guide-suse.md#prepare-for-sap-netweaver-installation) page
+ * If using Azure NetApp Files, follow the instructions on the [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md#prepare-for-sap-netweaver-installation) page
You will need to restart the `autofs` service to mount the newly added shares.
The tests that are presented are in a two node, multi-SID cluster with three SAP
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/planning-guide.md
Throughout the document, we use the following terms:
### <a name="e55d1e22-c2c8-460b-9897-64622a34fdff"></a>Resources
-The entry point for SAP workload on Azure documentation is found [here](./get-started.md). Starting with this entry point you find many articles that cover the topics of:
+The entry point for SAP workload on Azure documentation is found at [Get started with SAP on Azure VMs](./get-started.md). Starting with this entry point you find many articles that cover the topics of:
- SAP NetWeaver and Business One on Azure
- SAP DBMS guides for various DBMS systems in Azure
As a rough decision tree to decide whether an SAP system fits into Azure Virtual
![Decision tree to decide ability to deploy SAP on Azure][planning-guide-figure-700]
-1. The most important information to start with is the SAPS requirement for a given SAP system. The SAPS requirements need to be separated out into the DBMS part and the SAP application part, even if the SAP system is already deployed on-premises in a 2-tier configuration. For existing systems, the SAPS related to the hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be found [here](https://sap.com/about/benchmark.html). For newly deployed SAP systems, you should have gone through a sizing exercise, which should determine the SAPS requirements of the system.
+1. The most important information to start with is the SAPS requirement for a given SAP system. The SAPS requirements need to be separated out into the DBMS part and the SAP application part, even if the SAP system is already deployed on-premises in a 2-tier configuration. For existing systems, the SAPS related to the hardware in use often can be determined or estimated based on existing SAP benchmarks. The results can be found on the [About SAP Standard Application Benchmarks](https://sap.com/about/benchmark.html) page. For newly deployed SAP systems, you should have gone through a sizing exercise, which should determine the SAPS requirements of the system.
1. For existing systems, the I/O volume and I/O operations per second on the DBMS server should be measured. For newly planned systems, the sizing exercise for the new system also should give rough ideas of the I/O requirements on the DBMS side. If unsure, you eventually need to conduct a Proof of Concept.
1. Compare the SAPS requirement for the DBMS server with the SAPS the different VM types of Azure can provide. The information on SAPS of the different Azure VM types is documented in SAP Note [1928533]. The focus should be on the DBMS VM first since the database layer is the layer in an SAP NetWeaver system that does not scale out in the majority of deployments. In contrast, the SAP application layer can be scaled out. If none of the SAP supported Azure VM types can deliver the required SAPS, the workload of the planned SAP system can't be run on Azure. You either need to deploy the system on-premises or you need to change the workload volume for the system.
1. As documented [here (Linux)][virtual-machines-sizes-linux] and [here (Windows)][virtual-machines-sizes-windows], Azure enforces an IOPS quota per disk independent whether you use Standard Storage or Premium Storage. Dependent on the VM type, the number of data disks, which can be mounted varies. As a result, you can calculate a maximum IOPS number that can be achieved with each of the different VM types. Dependent on the database file layout, you can stripe disks to become one volume in the guest OS. However, if the current IOPS volume of a deployed SAP system exceeds the calculated limits of the largest VM type of Azure and if there is no chance to compensate with more memory, the workload of the SAP system can be impacted severely. In such cases, you can hit a point where you should not deploy the system on Azure.
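The per-VM IOPS calculation mentioned above is simple multiplication; as a quick sketch, the 5,000 IOPS figure is the documented cap of a P30 Premium SSD, while the disk count of 16 is a hypothetical VM limit you would replace with your VM size's documented maximum:

```shell
# Back-of-the-envelope IOPS ceiling for a striped volume:
# number of attached data disks times the per-disk IOPS cap.
disks=16          # hypothetical max data disks for the chosen VM size
iops_per_disk=5000  # P30 Premium SSD IOPS cap
echo "Theoretical striped-volume IOPS ceiling: $(( disks * iops_per_disk ))"
```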
Read the articles:
- [Azure Virtual Machines deployment for SAP NetWeaver](./deployment-guide.md)
- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_general.md)
-- [SAP HANA infrastructure configurations and operations on Azure](/- azure/virtual-machines/workloads/sap/hana-vm-operations)
+- [SAP HANA infrastructure configurations and operations on Azure](/azure/virtual-machines/workloads/sap/hana-vm-operations)
virtual-machines Sap Hana Availability Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-hana-availability-across-regions.md
In these cases, you can set up what SAP calls an [SAP HANA multitier system repl
![Diagram of three VMs over two regions](./media/sap-hana-availability-two-region/three_vm_HSR_async_2regions_ha_and_dr.PNG)

SAP introduced [multi-target system replication](https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.03/en-US/0b2c70836865414a8c65463180d18fec.html) with HANA 2.0 SPS3. Multi-target system replication brings some advantages in update scenarios. For example, the DR site (Region 2) is not impacted when the secondary HA site is down for maintenance or updates.
-You can find out more about HANA multi-target system replication [here](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/ba457510958241889a459e606bbcf3d3.html).
+You can find out more about HANA multi-target system replication at the [SAP Help Portal](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/ba457510958241889a459e606bbcf3d3.html).
A possible architecture with multi-target replication would look like: ![Diagram of three VMs over two regions multi-target](./media/sap-hana-availability-two-region/saphanaavailability_hana_system_2region_HA_and_DR_multitarget_3VMs.PNG)
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
It is supported to use a smaller VM as target instance in the disaster recovery
- Re-sizing across VM families can be a problem when the different VMs are collected in one Azure Availability Set, or when the re-sizing should happen between the M-Series family and Mv2 family of VMs
- CPU and memory consumption for the database instance being able to receive the stream of changes with minimal delay and enough CPU and memory resources to apply these changes with minimal delay to the data
-More details on limitations of different VM sizes can be found [here](../../sizes.md)
+More details on limitations of different VM sizes can be found on the [VM sizes](../../sizes.md) page
Another supported method of deploying a DR target is to have a second DBMS instance installed on a VM that runs a non-production DBMS instance of a non-production SAP instance. This can be a bit more challenging since you need to figure out how much memory, CPU, network bandwidth, and storage bandwidth is needed for the particular target instances that should function as main instance in the DR scenario. Especially with HANA, it is highly recommended that you configure the instance that functions as DR target on a shared host so that the data is not pre-loaded into the DR target instance.
Read next steps in the [Azure Virtual Machines planning and implementation for S
-
+
virtual-network Manage Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/manage-public-ip-address-prefix.md
The following section details the parameters required when creating a static pub
|Idle timeout (minutes)|No|How many minutes to keep a TCP or HTTP connection open without relying on clients to send keep-alive messages. |
|DNS name label|No|Must be unique within the Azure region you create the name in (across all subscriptions and all customers). Azure automatically registers the name and IP address in its DNS so you can connect to a resource with the name. Azure appends a default suffix *location.cloudapp.azure.com* to the name you provide to create the fully qualified DNS name. For more information, see [Use Azure DNS with an Azure public IP address](../dns/dns-custom-domain.md?toc=%2fazure%2fvirtual-network%2ftoc.json#public-ip-address).|
-Instead you may use the CLI and PowerShell commands below with the **--public-ip-prefix (CLI)** and **-PublicIpPrefix (PowerShell)** parameters, to create a public IP address resource.
+Alternatively, you may use the CLI and PowerShell commands below with the **--public-ip-prefix (CLI)** and **-PublicIpPrefix (PowerShell)** parameters to create a public IP address resource from a prefix.
|Tool|Command|
|---|---|
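As a hedged sketch of the CLI path (hypothetical resource names; the `--public-ip-prefix` parameter is the one called out above):

```shell
# Create a public IP address drawn from an existing public IP prefix
# (resource names are placeholders).
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --public-ip-prefix myPublicIPPrefix \
  --sku Standard
```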
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/public-ip-addresses.md
In Azure Resource Manager, a [public IP](virtual-network-public-ip-address.md) a
For Virtual Machine Scale Sets, use [Public IP Prefixes](public-ip-address-prefix.md).
+## At-a-glance
+
+The following table shows the resource and property a public IP address can be associated with, along with the supported allocation methods. Note that public IPv6 support isn't available for all resource types at this time.
+
+| Top-level resource | IP Address association | Dynamic IPv4 | Static IPv4 | Dynamic IPv6 | Static IPv6 |
+| | | | | | |
+| Virtual machine |Network interface |Yes | Yes | Yes | Yes |
+| Internet-facing Load balancer |Front-end configuration |Yes | Yes | Yes |Yes |
+| Virtual Network gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes (AZ only) | No |No |
+| Virtual Network gateway (ER) |Gateway IP configuration |Yes | No | Yes (preview) |No |
+| NAT gateway |Gateway IP configuration |No |Yes | No |No |
+| Application gateway |Front-end configuration |Yes (V1 only) |Yes (V2 only) | No | No |
+| Azure Firewall | Front-end configuration | No | Yes | No | No |
+| Bastion Host | Public IP configuration | No | Yes | No | No |
+
## IP address version

Public IP addresses can be created with an IPv4 or IPv6 address. You may be given the option to create a dual-stack deployment with an IPv4 and IPv6 address.
Standard SKU public IP addresses:
Basic SKU addresses:
-- Assigned with the dynamic or static allocation method (IPv6 basic addresses can only use dynamic allocation method).
+- For IPv4: Can be assigned using the dynamic or static allocation method. For IPv6: Can only be assigned using the dynamic allocation method.
- Have an adjustable inbound originated flow idle timeout of 4-30 minutes, with a default of 4 minutes, and fixed outbound originated flow idle timeout of 4 minutes.
- Are open by default. Network security groups are recommended but optional for restricting inbound or outbound traffic.
- Don't support Availability Zone scenarios. Use standard SKU public IP for Availability Zone scenarios in applicable regions. To learn more about availability zones, see [Availability zones overview](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and [Standard Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
Basic SKU addresses:
## IP address assignment
-Standard and Basic public IPv4 addresses and Standard public IPv6 addresses support **static** assignment. The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted.
+Standard public IPv4, Basic public IPv4, and Standard public IPv6 addresses all support **static** assignment. The resource is assigned an IP address at the time it's created. The IP address is released when the resource is deleted.
> [!NOTE] > Even when you set the allocation method to **static**, you cannot specify the actual IP address assigned to the public IP address resource. Azure assigns the IP address from a pool of available IP addresses in the Azure location the resource is created in.
Static public IP addresses are commonly used in the following scenarios:
* Your Azure resources communicate with other apps or services that use an IP address-based security model.
* You use TLS/SSL certificates linked to an IP address.
-Basic public IPv4 and IPv6 addresses support a **dynamic** assignment. The IP address **isn't** given to the resource at the time of creation when selecting dynamic.
-
-The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop, or delete the resource.
-
-For example, a public IP resource is released from a resource named **Resource A**. **Resource A** receives a different IP on start-up if the public IP resource is reassigned.
-
-Any associated IP address is released if the allocation method is changed from **static** to **dynamic**. Set the allocation method to **static** to ensure the IP address remains the same.
+Basic public IPv4 and IPv6 addresses support **dynamic** assignment. The IP address **isn't** given to the resource at the time of creation when selecting dynamic. The IP is assigned when you associate the public IP address with a resource. The IP address is released when you stop or delete the resource. For example, a public IP resource is released from a resource named **Resource A**. **Resource A** receives a different IP on start-up if the public IP resource is reassigned. Any associated IP address is released if the allocation method is changed from **static** to **dynamic**. Set the allocation method to **static** to ensure the IP address remains the same.
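To ensure the address remains the same, the static allocation described above can be sketched with the CLI (hypothetical resource names):

```shell
# Create a Standard SKU public IPv4 address with static allocation, so the
# address survives stop/deallocate of the associated resource
# (resource names are placeholders).
az network public-ip create \
  --resource-group myResourceGroup \
  --name myStaticIP \
  --sku Standard \
  --allocation-method Static \
  --version IPv4
```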
> [!NOTE] > Azure allocates public IP addresses from a range unique to each region in each Azure cloud. You can download the list of ranges (prefixes) for the Azure [Public](https://www.microsoft.com/download/details.aspx?id=56519), [US government](https://www.microsoft.com/download/details.aspx?id=57063), [China](https://www.microsoft.com/download/details.aspx?id=57062), and [Germany](https://www.microsoft.com/download/details.aspx?id=57064) clouds.
There are other attributes that can be used for a public IP address.
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## At-a-glance
-
-The following table shows the property a public IP can be associated to a resource and the allocation methods.
-
-Public IPv6 support isn't available for all resource types at this time.
-
-| Top-level resource | IP Address association | Dynamic IPv4 | Static IPv4 | Dynamic IPv6 | Static IPv6 |
-| | | | | | |
-| Virtual machine |Network interface |Yes | Yes | Yes | Yes |
-| Internet-facing Load balancer |Front-end configuration |Yes | Yes | Yes |Yes |
-| Virtual Network gateway (VPN) |Gateway IP configuration |Yes (non-AZ only) |Yes (AZ only) | No |No |
-| Virtual Network gateway (ER) |Gateway IP configuration |Yes | No | Yes* |No |
-| NAT gateway |Gateway IP configuration |No |Yes | No |No |
-| Application gateway |Front-end configuration |Yes (V1 only) |Yes (V2 only) | No | No |
-| Azure Firewall | Front-end configuration | No | Yes | No | No |
-| Bastion Host | Public IP configuration | No | Yes | No | No |
-
## Limits
The limits for IP addressing are listed in the full set of [limits for networking](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#networking-limits) in Azure. The limits are per region and per subscription. [Contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to increase above the default limits based on your business needs.
Public IP addresses have a nominal charge. To learn more about IP address pricin
* VPN gateways cannot be used in a virtual network with IPv6 enabled, either directly or peered with "UseRemoteGateway".
* Public IPv6 addresses are locked at an idle timeout of 4 minutes.
* Azure doesn't support IPv6 communication for containers.
-* Use of IPv6-only virtual machines or virtual machines scale sets aren't supported. Each NIC must include at least one IPv4 IP configuration.
+* Use of IPv6-only virtual machines or virtual machine scale sets isn't supported. Each NIC must include at least one IPv4 IP configuration (dual-stack).
* When adding IPv6 to existing IPv4 deployments, IPv6 ranges can't be added to a virtual network with existing resource navigation links. * Forward DNS for IPv6 is supported for Azure public DNS. Reverse DNS isn't supported. * Routing Preference and cross-region load-balancing isn't supported.
+For more information on IPv6 in Azure, see the [IPv6 for Azure Virtual Network overview](https://docs.microsoft.com/azure/virtual-network/ipv6-overview).
+
## Next steps
* Learn about [Private IP Addresses in Azure](private-ip-addresses.md)
* [Deploy a VM with a static public IP using the Azure portal](virtual-network-deploy-static-pip-arm-portal.md)
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/service-tags-overview.md
The columns indicate whether the tag:
- Supports [regional](https://azure.microsoft.com/regions) scope. - Is usable in [Azure Firewall](../firewall/service-tags.md) rules.
-By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region. For example, the service tag **Storage** represents Azure Storage for the entire cloud, but **Storage.WestUS** narrows the range to only the storage IP address ranges from the WestUS region. The following table indicates whether each service tag supports such regional scope.
+By default, service tags reflect the ranges for the entire cloud. Some service tags also allow more granular control by restricting the corresponding IP ranges to a specified region. For example, the service tag **Storage** represents Azure Storage for the entire cloud, but **Storage.WestUS** narrows the range to only the storage IP address ranges from the WestUS region. The following table indicates whether each service tag supports such regional scope. Note that the direction listed for each tag is a recommendation. For example, the AzureCloud tag may be used to allow inbound traffic. However, we don't recommend this in most scenarios since this means allowing traffic from all Azure IPs, including those used by other Azure customers.
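The regional scoping described above can be sketched in an NSG rule (hypothetical resource names; the **Storage.WestUS** tag is the example from the paragraph):

```shell
# Outbound NSG rule scoped to the regional Storage.WestUS service tag rather
# than the broad cloud-wide Storage tag (resource names are placeholders).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowStorageWestUS \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol '*' \
  --destination-address-prefixes Storage.WestUS \
  --destination-port-ranges '*'
```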
| Tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall? |
| --- | --- | :---: | :---: | :---: |
virtual-network Tutorial Restrict Network Access To Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-restrict-network-access-to-resources.md
To test network access to a storage account, deploy a VM to each subnet.
1. Open the downloaded rdp file. When prompted, select **Connect**.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/rdp-connect.png" alt-text="Screenshot of connection screen for private virtual machine":::
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/rdp-connect.png" alt-text="Screenshot of connection screen for private virtual machine.":::
1. Enter the user name and password you specified when creating the VM. You may need to select **More choices**, then **Use a different account** to specify the credentials you entered when you created the VM. For the email field, enter the "Administrator account: username" credentials you specified earlier. Select **OK** to sign into the VM.
- :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/credential-screen.png" alt-text="Screenshot of credential screen for private virtual machine":::
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/credential-screen.png" alt-text="Screenshot of credential screen for private virtual machine.":::
> [!NOTE] > You may receive a certificate warning during the sign-in process. If you receive the warning, select **Yes** or **Continue**, to proceed with the connection.
To test network access to a storage account, deploy a VM to each subnet.
1. You should receive the following error message:
- ![Access denied error](./media/tutorial-restrict-network-access-to-resources/access-denied-error.png)
-
+ :::image type="content" source="./media/tutorial-restrict-network-access-to-resources/access-denied-error.png" alt-text="Screenshot of access denied error message.":::
+
> [!NOTE]
> The access is denied because your computer is not in the *Private* subnet of the *MyVirtualNetwork* virtual network.
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-network-service-endpoints-overview.md
For FAQs, see [Virtual Network Service Endpoint FAQs](./virtual-networks-faq.md#
- [Secure an Azure Storage account to a virtual network](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
- [Secure an Azure SQL Database to a virtual network](../azure-sql/database/vnet-service-endpoint-rule-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
- [Secure an Azure Synapse Analytics to a virtual network](../azure-sql/database/vnet-service-endpoint-rule-overview.md?toc=%2fazure%2fsql-data-warehouse%2ftoc.json)
-- [Azure service integration in virtual networks](virtual-network-for-azure-services.md)
+- [Compare Private Endpoints and Service Endpoints](https://docs.microsoft.com/azure/virtual-network/vnet-integration-for-azure-services#compare-private-endpoints-and-service-endpoints)
- [Virtual Network Service Endpoint Policies](./virtual-network-service-endpoint-policies-overview.md) - [Azure Resource Manager template](https://azure.microsoft.com/resources/templates/vnet-2subnets-service-endpoints-storage-integration)
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-faq.md
Yes. Azure reserves 5 IP addresses within each subnet. These are x.x.x.0-x.x.x.3
- x.x.x.0: Network address
- x.x.x.1: Reserved by Azure for the default gateway
- x.x.x.2, x.x.x.3: Reserved by Azure to map the Azure DNS IPs to the VNet space
-- x.x.x.255: Network broadcast address
+- x.x.x.255: Network broadcast address for subnets of size /25 and larger. This will be a different address in smaller subnets.
### How small and how large can VNets and subnets be?
-The smallest supported IPv4 subnet is /29, and the largest is /8 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
+The smallest supported IPv4 subnet is /29, and the largest is /2 (using CIDR subnet definitions). IPv6 subnets must be exactly /64 in size.
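Combined with the five reserved addresses described above, the usable address count for a given IPv4 subnet size can be checked with simple arithmetic:

```shell
# Usable IP addresses in an Azure subnet: total addresses for the prefix
# minus the 5 addresses Azure reserves (network, gateway, 2x DNS, broadcast).
prefix=29
usable=$(( 2 ** (32 - prefix) - 5 ))
echo "/$prefix -> $usable usable addresses"   # a /29 yields 3
```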
### Can I bring my VLANs to Azure using VNets? No. VNets are Layer-3 overlays. Azure does not support any Layer-2 semantics.
vpn-gateway Vpn Gateway Ipsecikepolicy Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md
The following table lists the supported cryptographic algorithms and key strengt
| IKEv2 Integrity | SHA384, SHA256, SHA1, MD5 |
| DH Group | DHGroup24, ECP384, ECP256, DHGroup14, DHGroup2048, DHGroup2, DHGroup1, None |
| IPsec Encryption | GCMAES256, GCMAES192, GCMAES128, AES256, AES192, AES128, DES3, DES, None |
-| IPsec Integrity | GCMASE256, GCMAES192, GCMAES128, SHA256, SHA1, MD5 |
+| IPsec Integrity | GCMAES256, GCMAES192, GCMAES128, SHA256, SHA1, MD5 |
| PFS Group | PFS24, ECP384, ECP256, PFS2048, PFS2, PFS1, None |
| QM SA Lifetime | (**Optional**: default values are used if not specified)<br>Seconds (integer; **min. 300**/default 27000 seconds)<br>KBytes (integer; **min. 1024**/default 102400000 KBytes) |
| Traffic Selector | UsePolicyBasedTrafficSelectors** ($True/$False; **Optional**, default $False if not specified) |
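As a hedged sketch of applying one combination of the algorithms from the table above to a connection (hypothetical resource and connection names; note this uses the Azure CLI rather than the PowerShell cmdlets this article covers):

```shell
# Apply a custom IPsec/IKE policy to an existing VPN connection
# (resource names are placeholders; algorithm values come from the table).
az network vpn-connection ipsec-policy add \
  --resource-group myResourceGroup \
  --connection-name myVpnConnection \
  --ike-encryption AES256 \
  --ike-integrity SHA384 \
  --dh-group DHGroup24 \
  --ipsec-encryption GCMAES256 \
  --ipsec-integrity GCMAES256 \
  --pfs-group PFS24 \
  --sa-lifetime 27000 \
  --sa-max-size 102400000
```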
The following table lists the supported cryptographic algorithms and key strengt
> 3. In the table above:
> * IKEv2 corresponds to Main Mode or Phase 1
> * IPsec corresponds to Quick Mode or Phase 2
-> * DH Group specifies the Diffie-Hellmen Group used in Main Mode or Phase 1
-> * PFS Group specified the Diffie-Hellmen Group used in Quick Mode or Phase 2
+> * DH Group specifies the Diffie-Hellman Group used in Main Mode or Phase 1
+> * PFS Group specifies the Diffie-Hellman Group used in Quick Mode or Phase 2
> 4. IKEv2 Main Mode SA lifetime is fixed at 28,800 seconds on the Azure VPN gateways > 5. Setting "UsePolicyBasedTrafficSelectors" to $True on a connection will configure the Azure VPN gateway to connect to policy-based VPN firewall on premises. If you enable PolicyBasedTrafficSelectors, you need to ensure your VPN device has the matching traffic selectors defined with all combinations of your on-premises network (local network gateway) prefixes to/from the Azure virtual network prefixes, instead of any-to-any. For example, if your on-premises network prefixes are 10.1.0.0/16 and 10.2.0.0/16, and your virtual network prefixes are 192.168.0.0/16 and 172.16.0.0/16, you need to specify the following traffic selectors: > * 10.1.0.0/16 <====> 192.168.0.0/16