Updates from: 09/08/2022 01:08:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 08/17/2022 Last updated : 09/17/2022
Microsoft recommends passwordless authentication methods such as Windows Hello, FIDO2 security keys, and the Microsoft Authenticator app because they provide the most secure sign-in experience. Although a user can sign in using other common methods such as a username and password, passwords should be replaced with more secure authentication methods.
-![Table of the strengths and preferred authentication methods in Azure AD](media/concept-authentication-methods/authentication-methods.png)
Azure AD Multi-Factor Authentication (MFA) adds additional security over only using a password when a user signs in. The user can be prompted for additional forms of authentication, such as to respond to a push notification, enter a code from a software or hardware token, or respond to an SMS or phone call.
The following table outlines the security considerations for the available authe
| Windows Hello for Business | High | High | High |
| Microsoft Authenticator app | High | High | High |
| FIDO2 security key | High | High | High |
+| Certificate-based authentication (preview)| High | High | High |
| OATH hardware tokens (preview) | Medium | Medium | High |
| OATH software tokens | Medium | Medium | High |
| SMS | Medium | High | Medium |
The following table outlines when an authentication method can be used during a
| Windows Hello for Business | Yes | MFA\* |
| Microsoft Authenticator app | Yes | MFA and SSPR |
| FIDO2 security key | Yes | MFA |
+| Certificate-based authentication (preview) | Yes | No |
| OATH hardware tokens (preview) | No | MFA and SSPR |
| OATH software tokens | No | MFA and SSPR |
| SMS | Yes | MFA and SSPR |
| Voice call | No | MFA and SSPR |
| Password | Yes | |
-> \* Windows Hello for Business, by itself, does not serve as a step-up MFA credential. For example, an MFA Challenge from Sign-in Frequency or SAML Request containing forceAuthn=true. Windows Hello for Business can serve as a step-up MFA credential by being used in FIDO2 authentication. This requires users to be enabled for FIDO2 authentication to work sucessfully.
+> \* Windows Hello for Business, by itself, does not serve as a step-up MFA credential. For example, an MFA Challenge from Sign-in Frequency or SAML Request containing forceAuthn=true. Windows Hello for Business can serve as a step-up MFA credential by being used in FIDO2 authentication. This requires users to be enabled for FIDO2 authentication to work successfully.
All of these authentication methods can be configured in the Azure portal, and increasingly using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
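For illustration, a minimal sketch of reading a user's registered authentication methods through that Graph API with `az rest`; the user ID is a placeholder and the permission named in the comment is an assumption:

```azurecli
# Sketch only: list a user's registered authentication methods via Microsoft Graph.
# Assumes the signed-in account holds a suitable Graph permission
# (for example, UserAuthenticationMethod.Read.All).
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/users/{user-id}/authentication/methods"
```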
To learn more about how each authentication method works, see the following sepa
* [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview)
* [Microsoft Authenticator app](concept-authentication-authenticator-app.md)
* [FIDO2 security key](concept-authentication-passwordless.md#fido2-security-keys)
+* [Certificate-based authentication](concept-certificate-based-authentication.md)
* [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
* [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens)
* [SMS sign-in](howto-authentication-sms-signin.md) and [verification](concept-authentication-phone-options.md#mobile-phone-verification)
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Previously updated : 11/22/2021 Last updated : 09/07/2022
# Configurable token lifetimes in the Microsoft identity platform (preview)
active-directory Howto Create Self Signed Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md
Azure Active Directory (Azure AD) supports two types of authentication for servi
For testing, you can use a self-signed public certificate instead of a Certificate Authority (CA)-signed certificate. This article shows you how to use Windows PowerShell to create and export a self-signed certificate.

> [!CAUTION]
-> Using a self-signed certificate is only recommended for development, not production.
+> Self-signed certificates are not trusted by default and they can be difficult to maintain. Also, they may use outdated hash and cipher suites that may not be strong. For better security, purchase a certificate signed by a well-known certificate authority.
You configure various parameters for the certificate. For example, the cryptographic and hash algorithms, the certificate validity period, and your domain name. Then export the certificate with or without its private key depending on your application needs.
active-directory Application Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-properties.md
Previously updated : 09/22/2021 Last updated : 09/06/2022 #Customer intent: As an administrator of an Azure AD tenant, I want to learn more about the properties of an enterprise application that I can configure.
active-directory Application Sign In Problem Application Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-problem-application-error.md
Previously updated : 07/11/2017 Last updated : 09/06/2022
In this scenario, Azure Active Directory (Azure AD) signs the user in. But the application displays an error message and doesn't let the user finish the sign-in flow. The problem is that the app didn't accept the response that Azure AD issued.
-There are several possible reasons why the app didn't accept the response from Azure AD. If the error message doesn't clearly identify what's missing from the response, try the following:
+There are several possible reasons why the app didn't accept the response from Azure AD. If there is an error message or code displayed, use the following resources to diagnose the error:
+
+* [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md)
+
+* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
++
+If the error message doesn't clearly identify what's missing from the response, try the following:
- If the app is in the Azure AD gallery, verify that you followed the steps in [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
To add an attribute in the Azure AD configuration that will be sent in the Azure
The next time that the user signs in to the app, Azure AD will send the new attribute in the SAML response.
-## The app doesn't identify the user
+## The app cannot identify the user
Signing in to the app fails because the SAML response is missing an attribute such as a role. Or it fails because the app expects a different format or value for the **NameID** (User Identifier) attribute. If you're using [Azure AD automated user provisioning](../app-provisioning/user-provisioning.md) to create, maintain, and remove users in the app, verify that the user has been provisioned to the SaaS app. For more information, see [No users are being provisioned to an Azure AD Gallery application](../app-provisioning/application-provisioning-config-problem-no-users-provisioned.md).
-## Add an attribute to the Azure AD app configuration
+### Add an attribute to the Azure AD app configuration
To change the User Identifier value, follow these steps:
8. Under **User attributes**, select the unique identifier for the user from the **User Identifier** drop-down list.
-## Change the NameID format
+### Change the NameID format
If the application expects another format for the **NameID** (User Identifier) attribute, see [Editing nameID](../develop/active-directory-saml-claims-customization.md#editing-nameid) to change the NameID format.
To change the signing algorithm, follow these steps:
## Next steps
-[How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
+* [How to debug SAML-based single sign-on to applications in Azure AD](./debug-saml-sso-issues.md).
+
+* [Azure AD Authentication and authorization error codes](../develop/reference-aadsts-error-codes.md)
+
+* [Troubleshooting consent prompt errors](application-sign-in-unexpected-user-consent-error.md)
active-directory Application Sign In Unexpected User Consent Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-error.md
Previously updated : 07/11/2017 Last updated : 09/06/2022
This error occurs when a user who is not a Global Administrator attempts to use
This error can also occur when a user is prevented from consenting to an application due to Microsoft detecting that the permissions request is risky. In this case, an audit event will also be logged with a Category of "ApplicationManagement", Activity Type of "Consent to application" and Status Reason of "Risky application detected".
-Another scenario in which this error might occur is when the user assignment is required for the application, but no administrator consent was provided. In this case, the administrator must first provide administrator consent.
+Another scenario in which this error might occur is when the user assignment is required for the application, but no administrator consent was provided. In this case, the administrator must first provide tenant-wide admin consent for the application.
## Policy prevents granting permissions error

* **AADSTS90093:** An administrator of <tenantDisplayName> has set a policy that prevents you from granting <name of app> the permissions it is requesting. Contact an administrator of <tenantDisplayName>, who can grant permissions to this app on your behalf.
-This error occurs when a Global Administrator turns off the ability for users to consent to applications, then a non-administrator user attempts to use an application that requires consent. This error can be resolved by an administrator granting access to the application on behalf of their organization.
+This error can occur when a Global Administrator turns off the ability for users to consent to applications, then a non-administrator user attempts to use an application that requires consent. This error can be resolved by an administrator granting access to the application on behalf of their organization.
## Intermittent problem error
This error occurs when a Global Administrator turns off the ability for users to
This error indicates that an intermittent service side issue has occurred. It can be resolved by attempting to consent to the application again.
-## Resource not available error
-* **AADSTS65005:** The app <clientAppDisplayName> requested permissions to access a resource <resourceAppDisplayName> that is not available.
-
-Contact the application developer.
## Resource not available in tenant error

* **AADSTS65005:** <clientAppDisplayName> is requesting access to a resource <resourceAppDisplayName> that is not available in your organization <tenantDisplayName>.
-Ensure that this resource is available or contact an administrator of <tenantDisplayName>.
+Ensure that the resources that provide the requested permissions are available in your tenant, or contact an administrator of <tenantDisplayName>. Otherwise, there is a misconfiguration in how the application requests resources, and you should contact the application developer.
## Permissions mismatch error
End-users will not be able to grant consent to apps that have been detected as r
[Apps, permissions, and consent in Azure Active Directory (v1 endpoint)](../develop/quickstart-register-app.md)<br>
[Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/v2-permissions-and-consent.md)
+[Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)
active-directory Application Sign In Unexpected User Consent Prompt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-sign-in-unexpected-user-consent-prompt.md
Previously updated : 07/11/2017 Last updated : 09/07/2022
# Unexpected consent prompt when signing in to an application
-Many applications that integrate with Azure Active Directory require permissions to various resources in order to run. When these resources are also integrated with Azure Active Directory, permissions to access them is requested using the Azure AD consent framework.
+Many applications that integrate with Azure Active Directory require permissions to various resources in order to run. When these resources are also integrated with Azure Active Directory, permissions to access them are requested using the Azure AD consent framework. These requests result in a consent prompt being shown the first time an application is used, which is often a one-time operation.
-This results in a consent prompt being shown the first time an application is used, which is often a one-time operation.
+In certain scenarios, additional consent prompts can appear when a user attempts to sign in. In this article, we'll explain why these unexpected consent prompts appear and how to troubleshoot them.
> [!VIDEO https://www.youtube.com/embed/a1AjdvNDda4]

## Scenarios in which users see consent prompts
-Additional prompts can be expected in various scenarios:
+Further prompts can be expected in various scenarios:
-* The application has been configured to require assignment. User consent is not currently supported for apps which require assignment. If you configure an application to require assignment, be sure to also grant tenant-wide admin consent so that assigned user can sign in.
+* The application has been configured to require assignment. Individual user consent is not currently supported for apps which require assignment; thus the permissions must be granted by an admin for the whole directory. If you configure an application to require assignment, be sure to also grant tenant-wide admin consent so that assigned users can sign in.
-* The set of permissions required by the application has changed.
+* The set of permissions required by the application has been changed by the developer and needs to be granted again.
* The user who originally consented to the application was not an administrator, and now a different (non-admin) user is using the application for the first time.
-* The user who originally consented to the application was an administrator, but they did not consent on-behalf of the entire organization.
+* The user who originally consented to the application was an administrator, but they didn't consent on-behalf of the entire organization.
-* The application is using [incremental and dynamic consent](../azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent) to request additional permissions after consent was initially granted. This is often used when optional features of an application additional require permissions beyond those required for baseline functionality.
+* The application is using [incremental and dynamic consent](../azuread-dev/azure-ad-endpoint-comparison.md#incremental-and-dynamic-consent) to request further permissions after consent was initially granted. Incremental and dynamic consent is often used when optional features of an application require permissions beyond those required for baseline functionality.
* Consent was revoked after being granted initially.
-* The developer has configured the application to require a consent prompt every time it is used (note: this is not best practice).
+* The developer has configured the application to require a consent prompt every time it is used (note: this behavior isn't best practice).
> [!NOTE] > Following Microsoft's recommendations and best practices, many organizations have disabled or limited users' permission to grant consent to apps. If an application forces users to grant consent every time they sign in, most users will be blocked from using these applications even if an administrator grants tenant-wide admin consent. If you encounter an application which is requiring user consent even after admin consent has been granted, check with the app publisher to see if they have a setting or option to stop forcing user consent on every sign in.
+## Troubleshooting steps
+
+### Compare permissions requested and granted for the applications
+
+To ensure the permissions granted for the application are up-to-date, you can compare the permissions that are being requested by the application with the permissions already granted in the tenant.
+
+1. Sign-in to the Azure portal with an administrator account.
+2. Navigate to **Enterprise applications**.
+3. Select the application in question from the list.
+4. Under Security in the left-hand navigation, choose **Permissions**
+5. View the list of already granted permissions from the table on the Permissions page
+6. To view the requested permissions, click on the **Grant admin consent** button. (NOTE: This will open a consent prompt listing all of the requested permissions. Don't click accept on the consent prompt unless you are sure you want to grant tenant-wide admin consent.)
+7. Within the consent prompt, expand the listed permissions and compare with the table on the permissions page. If any are present in the consent prompt but not the permissions page, that permission has yet to be consented to. Unconsented permissions may be the cause for unexpected consent prompts showing for the application.
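As an alternative to stepping through the portal, here's a hedged sketch of listing the delegated permission grants already recorded for the app's service principal via Microsoft Graph; the display name is a placeholder, and the `id` property queried may vary by Azure CLI version:

```azurecli
# Sketch only: find the service principal and list its delegated (OAuth2) permission grants.
SP_ID=$(az ad sp list --display-name "<application display name>" --query "[0].id" -o tsv)

az rest --method get \
  --url "https://graph.microsoft.com/v1.0/servicePrincipals/$SP_ID/oauth2PermissionGrants"
```

Comparing the `scope` values returned here with the permissions listed in the consent prompt gives the same picture as the portal comparison above.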
+
+### View user assignment settings
+
+If the application requires assignment, individual users can't consent for themselves. To check if assignment is required for the application, do the following:
+
+1. Sign-in to the Azure portal with an administrator account.
+2. Navigate to **Enterprise applications**.
+3. Select the application in question from the list.
+4. Under Manage in the left-hand navigation, choose **Properties**.
+5. Check to see if **Assignment required?** is set to **Yes**.
+6. If set to yes, then an admin must consent to the permissions on behalf of the entire organization.
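The same check can be scripted; a minimal sketch, where the object ID is a placeholder:

```azurecli
# Sketch only: returns true when user assignment is required for the application.
az ad sp show --id <service-principal-object-id> --query appRoleAssignmentRequired
```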
+
+### Review tenant-wide user consent settings
+
+Whether an individual user can consent to an application is configured by each organization and may differ from directory to directory. Even if a permission doesn't require admin consent by default, your organization may have disabled user consent entirely, preventing individual users from consenting to an application for themselves. To view your organization's user consent settings, do the following:
+
+1. Sign-in to the Azure portal with an administrator account.
+2. Navigate to **Enterprise applications**.
+3. Under Security in the left-hand navigation, choose **Consent and permissions**.
+4. View the user consent settings. If set to *Do not allow user consent*, users will never be able to consent on behalf of themselves for an application.
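For a scripted view of the same setting, a hedged sketch that reads the tenant's default user consent policy from Microsoft Graph; an empty result generally means user consent isn't allowed:

```azurecli
# Sketch only: list the permission grant policies assigned to regular users.
az rest --method get \
  --url "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" \
  --query "defaultUserRolePermissions.permissionGrantPoliciesAssigned"
```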
## Next steps

* [Apps, permissions, and consent in Azure Active Directory (v1.0 endpoint)](../develop/quickstart-register-app.md)
* [Scopes, permissions, and consent in the Azure Active Directory (v2.0 endpoint)](../develop/v2-permissions-and-consent.md)
+* [Unexpected error when performing consent to an application](application-sign-in-unexpected-user-consent-error.md)
active-directory Configure User Consent Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md
Previously updated : 08/31/2021 Last updated : 09/06/2022
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
Here's some information about workflow notifications:
>[!Note] >An administrator who believes that an approved user should not be active can remove the active group assignment in Privileged Identity Management. Although resource administrators are not notified of pending requests unless they are an approver, they can view and cancel pending requests for all users by viewing pending requests in Privileged Identity Management.
+## Troubleshoot
+
+### Permissions are not granted after activating a role
+
+When you activate a role in Privileged Identity Management, the activation may not instantly propagate to all portals that require the privileged role. Sometimes, even if the change is propagated, web caching in a portal may result in the change not taking effect immediately. If your activation is delayed, here is what you should do.
+
+1. Sign out of the Azure portal and then sign back in.
+1. In Privileged Identity Management, verify that you are listed as the member of the role.
+ ## Next steps - [Extend or renew group assignments in Privileged Identity Management](pim-resource-roles-renew-extend.md)
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
az provider register --namespace Microsoft.NetApp --wait
> [!NOTE] > This can take some time to complete.
-When you create an Azure NetApp account for use with AKS, you need to create the account in the **node** resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named *myAKSCluster* in the resource group name *myResourceGroup*:
-
-```azurecli-interactive
-az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-```
-
-```output
-MC_myResourceGroup_myAKSCluster_eastus
-```
-
-Create an Azure NetApp Files account in the **node** resource group and same region as your AKS cluster using [az netappfiles account create][az-netappfiles-account-create]. The following example creates an account named *myaccount1* in the *MC_myResourceGroup_myAKSCluster_eastus* resource group and *eastus* region:
+When you create an Azure NetApp account for use with AKS, you can create the account in an existing resource group or create a new one in the same region as the AKS cluster.
+The following example creates an account named *myaccount1* in the *myResourceGroup* resource group and *eastus* region:
```azurecli
az netappfiles account create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --resource-group myResourceGroup \
    --location eastus \
    --account-name myaccount1
```
Create a new capacity pool by using [az netappfiles pool create][az-netappfiles-
```azurecli
az netappfiles pool create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --resource-group myResourceGroup \
    --location eastus \
    --account-name myaccount1 \
    --pool-name mypool1 \
az netappfiles pool create \
Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using [az network vnet subnet create][az-network-vnet-subnet-create]. *This subnet must be in the same virtual network as your AKS cluster.*

```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+RESOURCE_GROUP=myResourceGroup
VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
SUBNET_NAME=MyNetAppSubnet
Volumes can either be provisioned statically or dynamically. Both options are co
Create a volume by using [az netappfiles volume create][az-netappfiles-volume-create].

```azurecli
-RESOURCE_GROUP=MC_myResourceGroup_myAKSCluster_eastus
+RESOURCE_GROUP=myResourceGroup
LOCATION=eastus
ANF_ACCOUNT_NAME=myaccount1
POOL_NAME=mypool1
az netappfiles volume create \
List the details of your volume using [az netappfiles volume show][az-netappfiles-volume-show]

```azurecli
-az netappfiles volume show --resource-group $RESOURCE_GROUP --account-name $ANF_ACCOUNT_NAME --pool-name $POOL_NAME --volume-name "myvol1"
+az netappfiles volume show \
+ --resource-group $RESOURCE_GROUP \
+ --account-name $ANF_ACCOUNT_NAME \
+ --pool-name $POOL_NAME \
+ --volume-name "myvol1" -o JSON
``` ```output
api-management Authorizations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md
For public preview the following limitations exist:
- Authorizations feature is not supported in the following regions: swedencentral, australiacentral, australiacentral2, jioindiacentral.
- Supported identity providers can be found in [this](https://github.com/Azure/APIManagement-Authorizations/blob/main/docs/identityproviders.md) GitHub repository.
-- Maximum configured number of authorization providers per API Management instance: 50
-- Maximum configured number of authorizations per authorization provider: 500
+- Maximum configured number of authorization providers per API Management instance: 1,000
+- Maximum configured number of authorizations per authorization provider: 10,000
- Maximum configured number of access policies per authorization: 100
- Maximum requests per minute per authorization: 100
- Authorization code PKCE flow with code challenge isn't supported.
applied-ai-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-layout.md
Try extracting data from forms and documents using the Form Recognizer Studio. Y
The layout model extracts text, selection marks, tables, paragraphs, and paragraph types (`roles`) from your documents.
-### Text lines and words
+### Paragraphs <sup>🆕</sup>
-Layout API extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines, if detected, along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
+
+```json
+"paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
+ }
+]
+```
+### Paragraph roles<sup> 🆕</sup>
+
+The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
+
+| **Predicted role** | **Description** |
+| | |
+| `title` | The main heading(s) in the page |
+| `sectionHeading` | One or more subheading(s) on the page |
+| `footnote` | Text near the bottom of the page |
+| `pageHeader` | Text near the top edge of the page |
+| `pageFooter` | Text near the bottom edge of the page |
+| `pageNumber` | Page number |
```json
{
- "words": [
- {
- "content": "CONTOSO",
- "polygon": [
- 76,
- 30,
- 118,
- 32,
- 118,
- 43,
- 76,
- 43
- ],
- "confidence": 1,
- "span": {
- "offset": 0,
- "length": 7
- }
- }
+ "paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "role": "title",
+ "content": "NEWS TODAY"
+ },
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "role": "sectionHeading",
+ "content": "Mirjam Nilsson"
+ }
  ]
}
```
+### Pages
+
+The pages collection is the very first object you see in the service response.
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": [],
+ "kind": "document"
+ }
+]
+```
+### Text lines and words
+
+Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
+
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
### Selection marks

Layout API also extracts selection marks from documents. Extracted selection marks appear within the `pages` collection for each page. They include the bounding `polygon`, `confidence`, and selection `state` (`selected/unselected`). Any associated text if extracted is also included as the starting index (`offset`) and `length` that references the top level `content` property that contains the full text from the document.
Layout API also extracts selection marks from documents. Extracted selection mar
"selectionMarks": [ { "state": "unselected",
- "polygon": [
- 217,
- 862,
- 254,
- 862,
- 254,
- 899,
- 217,
- 899
- ],
+ "polygon": [],
"confidence": 0.995, "span": { "offset": 1421,
Layout API also extracts selection marks from documents. Extracted selection mar
    }
  ]
}
```

### Tables and table headers

Layout API extracts tables in the `pageResults` section of the JSON output. Documents can be scanned, photographed, or digitized. Extracted table information includes the number of columns and rows, row span, and column span. Each cell with its bounding `polygon` is output along with information whether it's recognized as a `columnHeader` or not. The API also works with rotated tables. Each table cell contains the row and column index and bounding polygon coordinates. For the cell text, the model outputs the `span` information containing the starting index (`offset`). The model also outputs the `length` within the top level `content` that contains the full text from the document.
Layout API extracts tables in the `pageResults` section of the JSON output. Docu
"columnIndex": 0, "columnSpan": 4, "content": "(In millions, except earnings per share)",
- "boundingRegions": [
- {
- "pageNumber": 1,
- "polygon": [
- 36,
- 184,
- 843,
- 183,
- 843,
- 209,
- 36,
- 207
- ]
- }
- ],
- "spans": [
- {
- "offset": 511,
- "length": 40
- }
- ]
+ "boundingRegions": [],
+ "spans": []
    },
  ]
}
- .
- .
- .
  ]
}
```
-### Paragraphs
-
-The Layout model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
-
-```json
-{
- "paragraphs": [
- {
- "spans": [
- {
- "offset": 0,
- "length": 21
- }
- ],
- "boundingRegions": [
- {
- "pageNumber": 1,
- "polygon": [
- 75,
- 30,
- 118,
- 31,
- 117,
- 68,
- 74,
- 67
- ]
- }
- ],
- "content": "Tuesday, Sep 20, YYYY"
- }
- ]
-}
-
-```
-
-### Paragraph roles
-
-The Layout model may flag certain paragraphs with their specialized type or `role` as predicted by the model. They're best used with unstructured documents to help understand the layout of the extracted content for a richer semantic analysis. The following paragraph roles are supported:
-
-| **Predicted role** | **Description** |
-| | |
-| `title` | The main heading(s) in the page |
-| `sectionHeading` | One or more subheading(s) on the page |
-| `footnote` | Text near the bottom of the page |
-| `pageHeader` | Text near the top edge of the page |
-| `pageFooter` | Text near the bottom edge of the page |
-| `pageNumber` | Page number |
-
-```json
-{
- "paragraphs": [
- {
- "spans": [
- {
- "offset": 22,
- "length": 10
- }
- ],
- "boundingRegions": [
- {
- "pageNumber": 1,
- "polygon": [
- 139,
- 10,
- 605,
- 8,
- 605,
- 56,
- 139,
- 58
- ]
- }
- ],
- "role": "title",
- "content": "NEWS TODAY"
- }
- ]
-}
-
-```
-
### Select page numbers or ranges for text extraction

For large multi-page documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.
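For illustration, a minimal sketch of passing the `pages` parameter on an analyze request; the endpoint, key, and document URL are placeholders:

```bash
# Sketch only: analyze pages 1-3 and 5 of a document with the Layout model.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-layout:analyze?pages=1-3,5&api-version=2022-08-31" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  --data '{"urlSource": "https://<your-storage>/sample.pdf"}'
```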
applied-ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-read.md
Form Recognizer v3.0 includes the new Read Optical Character Recognition (OCR) model. The Read OCR model extracts typeface and handwritten text including mixed languages in documents. The Read OCR model can detect lines, words, locations, and languages and is the core of all other Form Recognizer models. Layout, general document, custom, and prebuilt models all use the Read OCR model as a foundation for extracting texts from documents.
+## Supported document types
> [!NOTE]
>
> * Only API Version 2022-06-30-preview supports Microsoft Word, Excel, PowerPoint, and HTML file formats in addition to all other document types supported by the GA versions.
-> * For these file formats, Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit and each worksheet, slide, and page (up to 3000 characters) count as 1 page.
-
-## Supported document types
+> * For the preview of Office and HTML file formats, Read API ignores the pages parameter and extracts all pages by default. Each embedded image counts as 1 page unit and each worksheet, slide, and page (up to 3000 characters) count as 1 page.
| **Model** | **Images** | **PDF** | **TIFF** | **Word** | **Excel** | **PowerPoint** | **HTML** |
| | | | | | | | |
-| Read | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| Read | ✓ | ✓ | ✓ | ✓ (preview) | ✓ (preview) | ✓ (preview) | ✓ (preview) |
### Data extraction
Form Recognizer v3.0 version supports several languages for the read model. *See
## Data detection and extraction
+### Paragraphs <sup>🆕</sup>
+
+The Read model extracts all identified blocks of text in the `paragraphs` collection as a top level object under `analyzeResults`. Each entry in this collection represents a text block and includes the extracted text as`content`and the bounding `polygon` coordinates. The `span` information points to the text fragment within the top level `content` property that contains the full text from the document.
+
+```json
+"paragraphs": [
+ {
+ "spans": [],
+ "boundingRegions": [],
+ "content": "While healthcare is still in the early stages of its Al journey, we are seeing pharmaceutical and other life sciences organizations making major investments in Al and related technologies.\" TOM LAWRY | National Director for Al, Health and Life Sciences | Microsoft"
+ }
+]
+```
+### Language detection <sup>🆕</sup>
+
+Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict the detected primary language for each text line along with the `confidence` in the `languages` collection under `analyzeResult`.
+
+```json
+"languages": [
+ {
+ "spans": [
+ {
+ "offset": 0,
+ "length": 131
+ }
+ ],
+ "locale": "en",
+ "confidence": 0.7
+ },
+]
+```
+### Microsoft Office and HTML support (preview) <sup>🆕</sup>
+Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDKs of that API version to preview the support for Microsoft Word, Excel, PowerPoint, and HTML files.
++
+The page units in the model output are computed as shown:
+
+ **File format** | **Computed page unit** | **Total pages** |
+| | | |
+|Word (preview) | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
+|Excel (preview) | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
+|PowerPoint (preview)| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
+|HTML (preview)| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+ ### Pages
-With the added support for Microsoft Word, Excel, PowerPoint, and HTML files, the page units in the model output are computed as shown:
+The page units in the model output are computed as shown:
**File format** | **Computed page unit** | **Total pages** |
| | | |
|Images | Each image = 1 page unit | Total images |
|PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
-|Word | Up to 3,000 characters = 1 page unit, Each embedded image = 1 page unit | Total pages of up to 3,000 characters each + Total embedded images |
-|Excel | Each worksheet = 1 page unit, Each embedded image = 1 page unit | Total worksheets + Total images
-|PowerPoint| Each slide = 1 page unit, Each embedded image = 1 page unit | Total slides + Total images
-|HTML| Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each |
+|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
+
+```json
+"pages": [
+ {
+ "pageNumber": 1,
+ "angle": 0,
+ "width": 915,
+ "height": 1190,
+ "unit": "pixel",
+ "words": [],
+ "lines": [],
+ "spans": [],
+ "kind": "document"
+ }
+]
+```
### Text lines and words

Read extracts print and handwritten style text as `lines` and `words`. The model outputs bounding `polygon` coordinates and `confidence` for the extracted words. The `styles` collection includes any handwritten style for lines if detected along with the spans pointing to the associated text. This feature applies to [supported handwritten languages](language-support.md).
-For Microsoft Word, Excel, PowerPoint, and HTML file formats, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
-
-### Language detection
-
-Read adds [language detection](language-support.md#detected-languages-read-api) as a new feature for text lines. Read will predict all detected languages for text lines along with the `confidence` in the `languages` collection under `analyzeResult`.
-
+For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, Read will extract all embedded text as is. For any embedded images, it will run OCR on the images to extract text and append the text from each image as an added entry to the `pages` collection. These added entries will include the extracted text lines and words, their bounding polygons, confidences, and the spans pointing to the associated text.
+
+```json
+"words": [
+ {
+ "content": "While",
+ "polygon": [],
+ "confidence": 0.997,
+ "span": {}
+ },
+],
+"lines": [
+ {
+ "content": "While healthcare is still in the early stages of its Al journey, we",
+ "polygon": [],
+ "spans": [],
+ }
+]
+```
### Select page(s) for text extraction

For large multi-page PDF documents, use the `pages` query parameter to indicate specific page numbers or page ranges for text extraction.

> [!NOTE]
-> For Microsoft Word, Excel, PowerPoint, and HTML file formats, the Read API ignores the pages parameter and extracts all pages by default.
+> For the preview of Microsoft Word, Excel, PowerPoint, and HTML file support, the Read API ignores the pages parameter and extracts all pages by default.
## Next steps
applied-ai-services Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/sdk-overview.md
recommendations: false
<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD023 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
# What is the Form Recognizer SDK?
Form Recognizer SDK supports the following languages and platforms:
|[JavaScript/4.0.0-beta.6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-javascript#set-up)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)| [Azure SDK for JavaScript](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0-beta.6/https://docsupdatetracker.net/index.html) | [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
|[Python/3.2.0b6](quickstarts/get-started-v3-sdk-rest-api.md?pivots=programming-language-python#set-up) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)| [Azure SDK for Python](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0b6/https://docsupdatetracker.net/index.html)| [2022-06-30-preview, 2022-01-30-preview, 2021-09-30-preview, **v2.1-ga**, v2.0](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=form+recognizer) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
-## How to use the Form Recognizer SDK in your applications
+## Changelog and release history
+
+#### Form Recognizer SDK beta August 2022 preview release
+
+>[!NOTE]
+> The 4.0.0-beta.5 (C#), 4.0.0-beta.6 (Java), 4.0.0-beta.6 (JavaScript) and 3.2.0b6 (Python) previews contain the same updates and bug fixes but the versioning is no longer in sync across all programming languages.
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.5 (2022-08-09)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
+
+[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.6 (2022-08-10)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
+
+ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
+
+ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+**Version 3.2.0b6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+++
+### Form Recognizer SDK beta June 2022 preview release
+
+>[!NOTE]
+> The 4.0.0-beta.4 (C# and JavaScript), 4.0.0-beta.5 (Java), and 3.2.0b5 (Python) previews contain the same updates and bug fixes but the versioning is no longer in sync across all programming languages.
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.4 (2022-06-08)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.FormRecognizer_4.0.0-beta.4/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.4)
+
+[**SDK reference documentation**](/dotnet/api/azure.ai.formrecognizer?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.5 (2022-06-07)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav)
+
+ [**Package (Maven)**](https://search.maven.org/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.5/jar)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.4 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.4/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.4)
+
+ [**SDK reference documentation**](/javascript/api/@azure/ai-form-recognizer/?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
**Version 3.2.0b5 (2022-06-07)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b5/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b5/)
+
+ [**SDK reference documentation**](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true)
+++
+## Use Form Recognizer SDK in your applications
The Form Recognizer SDK enables the use and management of the Form Recognizer service in your application. The SDK builds on the underlying Form Recognizer REST API allowing you to easily use those APIs within your programming language paradigm. Here's how you use the Form Recognizer SDK for your preferred language:
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## August 2022
+#### Form Recognizer SDK beta August 2022 preview release
+
+>[!NOTE]
+> The 4.0.0-beta.5 (C#), 4.0.0-beta.6 (Java), 4.0.0-beta.6 (JavaScript) and 3.2.0b6 (Python) previews contain the same updates and bug fixes but the versioning is no longer in sync across all programming languages.
+
+This release includes the following updates:
+
+### [**C#**](#tab/csharp)
+
+**Version 4.0.0-beta.5 (2022-08-09)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/formrecognizer/Azure.AI.FormRecognizer/CHANGELOG.md#400-beta5-2022-08-09)
+
+[**Package (NuGet)**](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.0.0-beta.5)
+
+[**SDK reference documentation**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet-preview&preserve-view=true)
+
+### [**Java**](#tab/java)
+
+**Version 4.0.0-beta.6 (2022-08-10)**
+
+[**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav#400-beta6-2022-08-10)
+
+ [**Package (Maven)**](https://oss.sonatype.org/#nexus-search;quick~azure-ai-formrecognizer)
+
+ [**SDK reference documentation**](/java/api/overview/azure/ai-formrecognizer-readme?view=azure-java-preview&preserve-view=true)
+
+### [**JavaScript**](#tab/javascript)
+
+**Version 4.0.0-beta.6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/ai-form-recognizer_4.0.0-beta.6/sdk/formrecognizer/ai-form-recognizer/CHANGELOG.md)
+
+ [**Package (npm)**](https://www.npmjs.com/package/@azure/ai-form-recognizer/v/4.0.0-beta.6)
+
+ [**SDK reference documentation**](/javascript/api/overview/azure/ai-form-recognizer-readme?view=azure-node-preview&preserve-view=true)
+
+### [Python](#tab/python)
+
+**Version 3.2.0b6 (2022-08-09)**
+
+ [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b6/sdk/formrecognizer/azure-ai-formrecognizer/CHANGELOG.md)
+
+ [**Package (PyPi)**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
+
+ [**SDK reference documentation**](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b6/)
### Form Recognizer v3.0 generally available

**Form Recognizer REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
##### Form Recognizer service updates
+* [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer.
+
+* [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
* [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax respectively.

* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **AI quality improvements**
- * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
+ * [**prebuilt-read**](concept-read.md). Enhanced support for single characters, handwritten dates, amounts, names, other entities commonly found in receipts and invoices as well as improved processing of digital PDF documents.
- * [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells. As well improved paragraph grouping detection and logical identification of headers and titles.
+ * [**prebuilt-layout**](concept-layout.md). Support for better detection of cropped tables, borderless tables, and improved recognition of long spanning cells.
* [**prebuilt-document**](concept-general-document.md). Improved value and check box detection.
+ * [**custom-neural**](concept-custom-neural.md). Improved accuracy for table detection and extraction.
## June 2022

### [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio) June Update
The **2022-06-30-preview** release presents extensive updates across the feature
* [**Prebuilt ID document model**](concept-id-document.md). The ID document model now extracts DateOfIssue, Height, Weight, EyeColor, HairColor, and DocumentDiscriminator from US driver's licenses. _See_ [field extraction](concept-id-document.md). * [**Read model now supports common Microsoft Office document types**](concept-read.md). Document types like Word (docx) and PowerPoint (ppt) are now supported with the Read API. See [page extraction](concept-read.md#pages).
-#### Form Recognizer SDK beta preview release
-
-The latest beta release version of the Azure Form Recognizer SDKs incorporates new features, minor feature updates and bug fixes.
+#### Form Recognizer SDK beta June 2022 preview release
>[!NOTE] > The 4.0.0-beta.4 (C# and JavaScript), 4.0.0-beta.5 (Java), and 3.2.0b5 (Python) previews contain the same updates and bug fixes but the versioning is no longer in sync across all programming languages.
Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/d
#### Form Recognizer SDK beta preview release
-The latest beta release version of the Azure Form Recognizer SDKs incorporates new features, minor feature updates and bug fixes.
- >[!NOTE] > The 4.0.0-beta.3 (C# and JavaScript), 4.0.0-beta.4 (Java), and 3.2.0b4 (Python) previews contain the same updates and bug fixes but the versioning is no longer in sync across all programming languages.
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview
+> [!NOTE]
+> Start/Stop VM during off-hours, version 1 is currently being deprecated and will be unavailable from the marketplace soon. We recommend that you start using version 2, which is now generally available.
+The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement.
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.

> [!NOTE]
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| [Azure Storage: Azure Data Lake Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Storage: Disk Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure Storage: Blob Storage](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Storage: Managed Disks](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Storage: Managed Disks](migrate-vm.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Virtual Machine Scale Sets](migrate-vm.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| [Azure Virtual Machines](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
| Virtual Machines: [Av2-Series](migrate-vm.md) | ![An icon that signifies this service is zonal.](media/icon-zonal.svg) |
availability-zones Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-vm.md
Because zonal VMs are created across the availability zones, all migration optio
### When to use redeployment
-Use the redeployment option if you have good Infrastructure as Code (IaC) practices setup to manage infrastructure. The redeployment option gives you more control, and the ability to automate various processes within your deployment pipelines.
+Use the redeployment option if you have set up good Infrastructure as Code (IaC) practices to manage infrastructure. This redeployment option gives you more control and the ability to automate various processes within your deployment pipelines.
### Redeployment considerations

- When you redeploy your VM and VMSS resources, the underlying resources such as managed disk and IP address for the VM are created in the same availability zone. You must use a Standard SKU public IP address and load balancer to create zone-redundant network resources.
+- Existing managed disks without availability zone support can't be attached to a VM with availability zone support. To attach existing managed disks to a VM with availability zone support, you'll need to take a snapshot of the current disks, and then create your VM with the new managed disks attached.
+ - For zonal deployments that require reasonably low network latency and good performance between the application tier and data tier, use [proximity placement groups](../virtual-machines/co-location.md). Proximity groups can force grouping of different VM resources under a single network spine. For an example of an SAP workload that uses proximity placement groups, see [Azure proximity placement groups for optimal network latency with SAP applications](../virtual-machines/workloads/sap/sap-proximity-placement-scenarios.md). + ### How to redeploy
-To redeploy, you'll need to recreate your VM and VMSS resources. To ensure high-availability of your compute resources, it's recommended that you select multiple zones for your new VMs and VMSS.
+If you want to migrate the data on your current managed disks when creating a new VM, follow the directions in [Migrate your managed disks](#migrate-your-managed-disks).
-To learn how create VMs in an availability zone, see:
+If you only want to create a new VM with new managed disks in an availability zone, see:
- [Create VM using Azure CLI](../virtual-machines/linux/create-cli-availability-zone.md) - [Create VM using Azure PowerShell](../virtual-machines/windows/create-PowerShell-availability-zone.md)
To learn how to create VMs in an availability zone, see:
To learn how to create VMSS in an availability zone, see [Create a virtual machine scale set that uses Availability Zones](../virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md).
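
For a rough sense of what a zonal VM deployment looks like from the CLI, here's a minimal sketch; the resource group, VM name, and image alias are hypothetical placeholders:

```azurecli
# Create a VM pinned to availability zone 1 (placeholder names and image alias).
az vm create --resource-group myResourceGroup --name myZonalVM --image Ubuntu2204 --zone 1
```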
+### Migrate your managed disks
+
+In this section, you'll migrate the data from your current managed disks to either zone-redundant storage (ZRS) managed disks or zonal managed disks.
+
+#### Step 1: Create your snapshot
+
+The easiest and cleanest way to create a snapshot is to do so while the VM is offline. See [Create snapshots while the VM is offline](../virtual-machines/backup-and-disaster-recovery-for-azure-iaas-disks.md#create-snapshots-while-the-vm-is-offline). If you choose this approach, some downtime should be expected. To create a snapshot of your VM using the Azure portal, PowerShell, or Azure CLI, see [Create a snapshot of a virtual hard disk](../virtual-machines/snapshot-copy-managed-disk.md).
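
As a rough illustration, a snapshot of an existing VM's OS disk can be created with commands along these lines; the VM, snapshot, and resource group names are hypothetical placeholders:

```azurecli
# Look up the ID of the source VM's OS disk, then snapshot it (placeholder names).
osDiskId=$(az vm show --resource-group myResourceGroup --name mySourceVM --query "storageProfile.osDisk.managedDisk.id" --output tsv)
az snapshot create --resource-group myResourceGroup --name mySnapshot --source $osDiskId
```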
+
+If you'll be taking a snapshot of a disk that's attached to a running VM, read the guidance in [Create snapshots while the VM is running](../virtual-machines/backup-and-disaster-recovery-for-azure-iaas-disks.md#create-snapshots-while-the-vm-is-running) before proceeding.
+
+>[!NOTE]
+> The source managed disks remain intact with their current configurations and you'll continue to be billed for them. To avoid this, you must manually delete the disks once you've finished your migration and confirmed the new disks are working. For more information, see [Find and delete unattached Azure managed and unmanaged disks](../virtual-machines/windows/find-unattached-disks.md).
++
+#### Step 2: Migrate the data on your managed disks
+
+Now that you have snapshots of your original disks, you can use them to create either ZRS managed disks or zonal managed disks.
+##### Migrate your data to zonal managed disks
+
+To migrate a non-zonal managed disk to zonal:
+
+1. Create a zonal managed disk from the source disk snapshot. The zone parameter should match your zonal VM. To create a zonal managed disk from the snapshot, you can use the [Azure CLI](../virtual-machines/scripts/create-managed-disk-from-snapshot.md) (example below), [PowerShell](../virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot.md), or the Azure portal.
+
+ ```azurecli
+ az disk create --resource-group $resourceGroupName --name $diskName --location $location --zone $zone --sku $storageType --size-gb $diskSize --source $snapshotId
+ ```
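
The variables in the preceding command, including `$snapshotId`, are placeholders. Assuming the snapshot from step 1 is named `mySnapshot`, one way to populate that value is:

```azurecli
# Capture the resource ID of the snapshot created in step 1 (placeholder names).
snapshotId=$(az snapshot show --resource-group myResourceGroup --name mySnapshot --query id --output tsv)
```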
+++
+##### Migrate your data to ZRS managed disks
+
+>[!IMPORTANT]
+> Zone-redundant storage (ZRS) for managed disks has some restrictions. For more information see [Limitations](../virtual-machines/disks-deploy-zrs.md?tabs=portal#limitations).
+
+1. Create a ZRS managed disk from the source disk snapshot by using the following Azure CLI snippet:
+
+ ```azurecli
+ # Create a new ZRS Managed Disks using the snapshot Id and the SKU supported
+ storageType=Premium_ZRS
+ location=westus2
+
+ az disk create --resource-group $resourceGroupName --name $diskName --sku $storageType --size-gb $diskSize --source $snapshotId
+
+ ```
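
To confirm that the new disk was created with the intended redundancy, a quick check such as the following may help (same placeholder variable names as above):

```azurecli
# Expect Premium_ZRS (or whichever ZRS SKU you chose) for the migrated disk.
az disk show --resource-group $resourceGroupName --name $diskName --query "sku.name" --output tsv
```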
+
+#### Step 3: Create a new VM with your new disks
+
+Now that you have migrated your data to ZRS managed disks or zonal managed disks, create a new VM with these new disks set as the OS and data disks:
+
+```azurecli
+
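+ # newZonalOSDiskCopy and newZonalDataDiskCopy are the zonal disks created in step 2; a VM and its attached zonal disks must reside in the same availability zone.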
+ az vm create -g MyResourceGroup -n MyVm --attach-os-disk newZonalOSDiskCopy --attach-data-disks newZonalDataDiskCopy --os-type linux
+
+```
++ ## Migration Option 2: Azure Resource Mover ### When to use Azure Resource Mover
-Use Azure Resource Mover for an easy way to move VMs or encrypted VMs from one region without availability zones to another with availability zones. If you want to learn more about the benefits of using Azure Resource Mover, see [Why use Azure Resource Mover?](../resource-mover/overview.md#why-use-resource-mover).
+Use Azure Resource Mover for an easy way to move VMs or encrypted VMs from one region without availability zones to another with availability zone support. If you want to learn more about the benefits of using Azure Resource Mover, see [Why use Azure Resource Mover?](../resource-mover/overview.md#why-use-resource-mover).
### Azure Resource Mover considerations
azure-arc Preview Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/preview-testing.md
Previously updated : 06/28/2022 Last updated : 09/07/2022 #Customer intent: As a data professional, I want to validate upcoming releases.
azure-functions Create First Function Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-cli.md
Title: 'Quickstart: Create a function app on Azure Arc' description: Get started with Azure Functions on Azure Arc by deploying your first function app. Previously updated : 05/10/2021 Last updated : 09/02/2022 ms.devlang: azurecli
On your local computer:
# [C\#](#tab/csharp)
-+ [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [.NET 6.0 SDK](https://dotnet.microsoft.com/download)++ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Ccsharp#install-the-azure-functions-core-tools)++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later # [JavaScript](#tab/nodejs) + [Node.js](https://nodejs.org/) version 12. Node.js version 10 is also supported.
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cnode#install-the-azure-functions-core-tools). ++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later # [Python](#tab/python) + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpython#install-the-azure-functions-core-tools) ++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later+
+# [PowerShell](#tab/powershell)
+++ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)++ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpowershell#install-the-azure-functions-core-tools) ++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later++ PowerShell 7 requires version 1.2.5 of the connectedk8s Azure CLI extension, or a later version. It also requires version 0.1.3 of the appservice-kube Azure CLI extension, or a later version. Make sure you install the correct version of both of these extensions as you complete this quickstart article.
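
If you need to install or update these extensions, a sketch like the following may help; `az extension add --upgrade` installs an extension if it's missing or updates it to the latest released version:

```azurecli
az extension add --upgrade --name connectedk8s
az extension add --upgrade --name appservice-kube
az extension list --query "[?name=='connectedk8s' || name=='appservice-kube'].{name:name, version:version}" --output table
```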
In Azure Functions, a function project is the unit of deployment and execution f
```console func init LocalFunctionProj --python ```
+
+ # [PowerShell](#tab/powershell)
++
+ ```console
+ func init LocalFunctionProj --powershell
+ ```
In Azure Functions, a function project is the unit of deployment and execution f
Before you can deploy your function code to your new App Service Kubernetes environment, you need to create two more resources: -- A [Storage account](../storage/common/storage-account-create.md), which is currently required by tooling and isn't part of the environment.
+- A [Storage account](../storage/common/storage-account-create.md). While this article creates a storage account, in some cases a storage account may not be required. For more information, see [Azure Arc-enabled clusters](storage-considerations.md#azure-arc-enabled-clusters) in the storage considerations article.
- A function app, which provides the context for executing your function code. The function app runs in the App Service Kubernetes environment and maps to your local function project. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources. > [!NOTE]
az storage account create --name <STORAGE_NAME> --location westeurope --resource
``` > [!NOTE]
-> A storage account is currently required by Azure Functions tooling.
+> In some cases, a storage account may not be required. For more information, see [Azure Arc-enabled clusters](storage-considerations.md#azure-arc-enabled-clusters) in the storage considerations article.
In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must be between three and 24 characters in length and can contain numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements). The `--location` value is a standard Azure region.
Run the [az functionapp create](/cli/azure/functionapp#az-functionapp-create) co
# [C\#](#tab/csharp) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime dotnet
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime dotnet
``` # [JavaScript](#tab/nodejs) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime node --runtime-version 12
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime node --runtime-version 12
``` # [Python](#tab/python) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime python --runtime-version 3.8
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime python --runtime-version 3.8
```+
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az functionapp create --resource-group myResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime powershell --runtime-version 7.0
+```
+ In this example, replace `<CUSTOM_LOCATION_ID>` with the ID of the custom location you determined for the App Service Kubernetes environment. Also, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. [!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
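
If you need to look up the custom location ID, one approach (assuming the `customlocation` Azure CLI extension is installed and the custom location is named `myCustomLocation` in `myResourceGroup`) is:

```azurecli
az customlocation show --resource-group myResourceGroup --name myCustomLocation --query id --output tsv
```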
-Because it can take some time for a full deployment to complete on an Azure Arc-enabled Kubernetes cluster, you may want to re-run the following command to verify your published functions:
+Because it can take some time for a full deployment to complete on an Azure Arc-enabled Kubernetes cluster, you may want to rerun the following command to verify your published functions:
```command func azure functionapp list-functions
Now that you have your function app running in a container an Azure Arc-enabled
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
+# [PowerShell](#tab/powershell)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-powershell)
+
azure-functions Create First Function Arc Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-custom-container.md
On your local computer:
# [C\#](#tab/csharp)
-+ [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [.NET 6.0 SDK](https://dotnet.microsoft.com/download)++ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Ccsharp#install-the-azure-functions-core-tools)++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup) # [JavaScript](#tab/nodejs)
-+ [Node.js](https://nodejs.org/) version 12. Node.js version 10 is also supported.
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [Node.js](https://nodejs.org/) version 12 (Node.js version 10 is also supported)++ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Cnode#install-the-azure-functions-core-tools)++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup) # [Python](#tab/python) + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3245.
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
++ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Cpython#install-the-azure-functions-core-tools)++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later + [Docker](https://docs.docker.com/install/) + [Docker ID](https://hub.docker.com/signup)
+# [PowerShell](#tab/powershell)
+++ [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)++ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Cpowershell#install-the-azure-functions-core-tools) ++ [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later++ [Docker](https://docs.docker.com/install/) ++ [Docker ID](https://hub.docker.com/signup)++ PowerShell 7 requires recent versions of two Azure CLI extensions. Make sure you install the correct versions of the following extensions as you complete this quickstart article:
+ + `connectedk8s` version 1.2.5, or a later version
+ + `appservice-kube` version 0.1.3 or a later version
+ [!INCLUDE [functions-arc-create-environment](../../includes/functions-arc-create-environment.md)]
In Azure Functions, a function project is the context for one or more individual
```console func init LocalFunctionProj --python --docker ```
+ # [PowerShell](#tab/powershell)
++
+ ```console
+ func init LocalFunctionProj --powershell --docker
+ ```
+ The `--docker` option generates a `Dockerfile` for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.
Docker Hub is a container registry that hosts images and provides image and cont
Before you can deploy your container to your new App Service Kubernetes environment, you need to create two more resources: -- A [Storage account](../storage/common/storage-account-create.md), which is currently required by tooling and isn't part of the environment.
+- A [Storage account](../storage/common/storage-account-create.md). While this article creates a storage account, in some cases a storage account may not be required. For more information, see [Azure Arc-enabled clusters](storage-considerations.md#azure-arc-enabled-clusters) in the storage considerations article.
- A function app, which provides the context for running your container. The function app runs in the App Service Kubernetes environment and maps to your local function project. A function app lets you group functions as a logical unit for easier management, deployment, and sharing of resources. > [!NOTE]
az storage account create --name <STORAGE_NAME> --location westeurope --resource
``` > [!NOTE]
-> A storage account is currently required by Azure Functions tooling.
+> In some cases, a storage account may not be required. For more information, see [Azure Arc-enabled clusters](storage-considerations.md#azure-arc-enabled-clusters) in the storage considerations article.
In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must be between three and 24 characters in length and can contain numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements). The `--location` value is a standard Azure region.
Run the [az functionapp create](/cli/azure/functionapp#az-functionapp-create) co
# [C\#](#tab/csharp) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime dotnet --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime dotnet --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
``` # [JavaScript](#tab/nodejs) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime node --runtime-version 12 --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime node --runtime-version 12 --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
``` # [Python](#tab/python) ```azurecli
-az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 3 --runtime python --runtime-version 3.8 --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
+az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime python --runtime-version 3.8 --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
+```
+# [PowerShell](#tab/powershell)
+
+```azurecli
+az functionapp create --resource-group myResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --storage-account <STORAGE_NAME> --functions-version 4 --runtime powershell --runtime-version 7.0 --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
```+ In this example, replace `<CUSTOM_LOCATION_ID>` with the ID of the custom location you determined for the App Service Kubernetes environment. Also, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your Docker Hub ID.
Now that you have your function app running in a container an Azure Arc-enabled
> [!div class="nextstepaction"] > [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-python)
+# [PowerShell](#tab/powershell)
+
+> [!div class="nextstepaction"]
+> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-powershell)
+
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
You can explicitly set a specific host ID for your function app in the applicati
When the collision occurs between slots, you may need to mark this setting as a slot setting. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+## Azure Arc-enabled clusters
+
+When your function app is deployed to an Azure Arc-enabled Kubernetes cluster, it might not require a storage account. In this case, Functions requires a storage account only when your function app uses a trigger that requires storage. The following table indicates which triggers may require a storage account and which don't.
+
+| Not required | May require storage |
+| | |
+| • [Azure Cosmos DB](functions-bindings-cosmosdb-v2.md)<br/>• [HTTP](functions-bindings-http-webhook.md)<br/>• [Kafka](functions-bindings-kafka.md)<br/>• [RabbitMQ](functions-bindings-rabbitmq.md)<br/>• [Service Bus](functions-bindings-service-bus.md) | • [Azure SQL](functions-bindings-azure-sql.md)<br/>• [Blob storage](functions-bindings-storage-blob.md)<br/>• [Event Grid](functions-bindings-event-grid.md)<br/>• [Event Hubs](functions-bindings-event-hubs.md)<br/>• [IoT Hub](functions-bindings-event-iot.md)<br/>• [Queue storage](functions-bindings-storage-queue.md)<br/>• [SendGrid](functions-bindings-sendgrid.md)<br/>• [SignalR](functions-bindings-signalr-service.md)<br/>• [Table storage](functions-bindings-storage-table.md)<br/>• [Timer](functions-bindings-timer.md)<br/>• [Twilio](functions-bindings-twilio.md)
+
+To create a function app on an Azure Arc-enabled Kubernetes cluster without storage, you must use the Azure CLI command [az functionapp create](/cli/azure/functionapp#az-functionapp-create). The version of the Azure CLI must include version 0.1.7 or a later version of the [appservice-kube extension](/cli/azure/appservice/kube). Use the `az --version` command to verify that the extension is installed and is the correct version.
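
As an example, a minimal sketch of verifying the extension and then creating a storage-less function app might look like the following; the resource group, app name, and custom location ID are placeholders, and `--storage-account` is simply omitted:

```azurecli
# Check that the appservice-kube extension is installed and is version 0.1.7 or later.
az extension list --query "[?name=='appservice-kube'].version" --output tsv

# Create the function app without a storage account (placeholder names).
az functionapp create --resource-group MyResourceGroup --name <APP_NAME> --custom-location <CUSTOM_LOCATION_ID> --functions-version 4 --runtime dotnet
```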
+
+Creating your function app resources using methods other than the Azure CLI requires an existing storage account. If you plan to use any triggers that require a storage account, you should create the account before you create the function app.
+ ## Create an app without Azure Files Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[ITInfra](https://itinfra.biz/)| |[ITsavvy](https://www.itsavvy.com)| |[IV4, Inc](https://www.iv4.com)|
-|[J and C Landwehr LLC](https://jandclandwehr.com/)|
+|J and C Landwehr LLC|
|[Jackpine Technologies](https://www.jackpinetech.com)| |[Jacobs Technolgy Inc.](https://www.jacobs.com/)| |[Jadex Strategic Group](https://jadexstrategic.com)|
azure-monitor Action Groups Create Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-create-resource-manager-template.md
Previously updated : 2/23/2022- Last updated : 09/07/2022++ # Create an action group with a Resource Manager template
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-logic-app.md
description: Learn how to create a logic app action to process Azure Monitor ale
Previously updated : 2/23/2022- Last updated : 09/07/2022++ # How to trigger complex actions with Azure Monitor alerts
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Manage action groups in the Azure portal
description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions. Previously updated : 06/06/2022-- Last updated : 09/07/2022++
An action group is a **global** service, so there's no dependency on a specific
### Configure basic action group settings
-1. Under **Project details**, select values for **Subscription** and **Resource group**. The action group is saved in the subscription and resource group that you select.
+1. Under **Project details**:
+    - Select values for **Subscription** and **Resource group**.
+    - Select the region.
+
+ | Option | Behavior |
+ | | -- |
+ | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/en-in/global-infrastructure/geographies/#overview).<br></br>Voice, SMS, and email actions performed as the result of [service health alerts](https://docs.microsoft.com/en-us/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site incidents. |
+ | Regional | The action group is stored within the selected region. The action group is [zone-redundant](https://docs.microsoft.com/en-us/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/en-in/global-infrastructure/geographies/#overview). |
+
+ The action group is saved in the subscription, region, and resource group that you select.
1. Under **Instance details**, enter values for **Action group name** and **Display name**. The display name is used in place of a full action group name when the group is used to send notifications.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 05/11/2020 Last updated : 09/07/2022 ms.devlang: csharp, java, javascript, vb
To determine how long data is kept, see [Data retention and privacy](./data-rete
* [Node.js SDK](https://github.com/Microsoft/ApplicationInsights-Node.js) * [JavaScript SDK](https://github.com/Microsoft/ApplicationInsights-JS)
-## Questions
+## Frequently asked questions
-* What exceptions might `Track_()` calls throw?
+### What exceptions might `Track_()` calls throw?
- None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log messages in the debug console output and, if the messages get through, in Diagnostic Search.
-* Is there a REST API to get data from the portal?
+None. You don't need to wrap them in try-catch clauses. If the SDK encounters problems, it will log messages in the debug console output and, if the messages get through, in Diagnostic Search.
- Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Log Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
+### Is there a REST API to get data from the portal?
+
+Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Log Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
+
+### Why are my calls to custom events and metrics APIs ignored?
+
+The Application Insights SDK isn't compatible with auto-instrumentation. If auto-instrumentation is enabled, calls to `Track()` and other custom events and metrics APIs will be ignored.
+
+Turn off auto-instrumentation in the Azure portal on the Application Insights tab of the App Service page, or set `ApplicationInsightsAgent_EXTENSION_VERSION` to `disabled`.
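
If you prefer to set it outside the portal, one option is to update the app setting directly with the Azure CLI; the resource group and app names below are placeholders:

```azurecli
az webapp config appsettings set --resource-group MyResourceGroup --name MyAppService --settings ApplicationInsightsAgent_EXTENSION_VERSION=disabled
```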
## <a name="next"></a>Next steps
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
Title: Application Map in Azure Application Insights | Microsoft Docs
-description: Monitor complex application topologies with Application Map and Intelligent View
+description: Monitor complex application topologies with Application Map and Intelligent view.
Last updated 05/16/2022 ms.devlang: csharp, java, javascript, python
-# Application Map: Triage Distributed Applications
+# Application Map: Triage distributed applications
-Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies; and has health KPI and alerts status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations.
+Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. Each node on the map represents an application component or its dependencies and has a health KPI and alert status. You can select any component to get more detailed diagnostics, such as Application Insights events. If your app uses Azure services, you can also select Azure diagnostics, such as SQL Database Advisor recommendations.
-Application Map also features an [Intelligent View](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
-## What is a Component?
+Application Map also features [Intelligent view](#application-map-intelligent-view-public-preview) to assist with fast service health investigations.
-Components are independently deployable parts of your distributed/microservices application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components.
+## What is a component?
-* Components are different from "observed" external dependencies such as SQL, Event Hubs etc., which your team/organization may not have access to (code or telemetry).
-* Components run on any number of server/role/container instances.
-* Components can be separate Application Insights resources (even if subscriptions are different) or different roles reporting to a single Application Insights resource. The preview map experience shows the components regardless of how they're set up.
+Components are independently deployable parts of your distributed or microservice application. Developers and operations teams have code-level visibility or access to telemetry generated by these application components. For example:
-## Composite Application Map
+* Components are different from "observed" external dependencies, such as Azure SQL and Azure Event Hubs, which your team or organization might not have access to (code or telemetry).
+* Components run on any number of server, role, or container instances.
+* Components can be separate Application Insights resources, even if subscriptions are different. They can also be different roles that report to a single Application Insights resource. The preview map experience shows the components regardless of how they're set up.
-You can see the full application topology across multiple levels of related application components. Components could be different Application Insights resources, or different roles in a single resource. The app map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
+## Composite application map
-This experience starts with progressive discovery of the components. When you first load the Application Map, a set of queries is triggered to discover the components related to this component. A button at the top-left corner will update with the number of components in your application as they're discovered.
+You can see the full application topology across multiple levels of related application components. Components can be different Application Insights resources or different roles in a single resource. The application map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
-On clicking "Update map components", the map is refreshed with all components discovered until that point. Depending on the complexity of your application, this update may take a minute to load.
+This experience starts with progressive discovery of the components. When you first load Application Map, a set of queries is triggered to discover the components related to this component. A button at the upper-left corner updates with the number of components in your application as they're discovered.
-If all of the components are roles within a single Application Insights resource, then this discovery step isn't required. The initial load for such an application will have all its components.
+When you select **Update map components**, the map is refreshed with all components discovered until that point. Depending on the complexity of your application, this update might take a minute to load.
-![Screenshot shows an example of an Application Map.](media/app-map/app-map-001.png)
+If all the components are roles within a single Application Insights resource, this discovery step isn't required. The initial load for such an application will have all its components.
+
+![Screenshot that shows an example of an application map.](media/app-map/app-map-001.png)
One of the key objectives with this experience is to be able to visualize complex topologies with hundreds of components. Select any component to see related insights and go to the performance and failure triage experience for that component.
-![Flyout](media/app-map/application-map-002.png)
+![Diagram that shows application map details.](media/app-map/application-map-002.png)
### Investigate failures
-Select **investigate failures** to launch the failures pane.
+Select **Investigate failures** to open the **Failures** pane.
-![Screenshot of investigate failures button](media/app-map/investigate-failures.png)
+![Screenshot that shows the Investigate failures button.](media/app-map/investigate-failures.png)
-![Screenshot of failures experience](media/app-map/failures.png)
+![Screenshot that shows the Failures screen.](media/app-map/failures.png)
### Investigate performance
-To troubleshoot performance problems, select **investigate performance**.
+To troubleshoot performance problems, select **Investigate performance**.
-![Screenshot of investigate performance button](media/app-map/investigate-performance.png)
+![Screenshot that shows the Investigate performance button.](media/app-map/investigate-performance.png)
-![Screenshot of performance experience](media/app-map/performance.png)
+![Screenshot that shows the Performance screen.](media/app-map/performance.png)
### Go to details The **Go to details** button displays the end-to-end transaction experience, which offers views at the call stack level.
-![Screenshot of go-to-details button](media/app-map/go-to-details.png)
+![Screenshot that shows the Go to details button.](media/app-map/go-to-details.png)
-![Screenshot of end-to-end transaction details](media/app-map/end-to-end-transaction.png)
+![Screenshot that shows the End-to-end transaction details screen.](media/app-map/end-to-end-transaction.png)
-### View Logs (Analytics)
+### View in Logs (Analytics)
-To query and investigate your applications data further, select **view in Logs (Analytics)**.
+To query and investigate your application's data further, select **View in Logs (Analytics)**.
-![Screenshot of view in analytics button](media/app-map/view-logs.png)
+![Screenshot that shows the View in Logs (Analytics) button.](media/app-map/view-logs.png)
-![Screenshot of analytics experience. Line graph summarizing the average response duration of a request over the past 12 hours.](media/app-map/log-analytics.png)
+![Screenshot that shows the Logs screen with a line graph that summarizes the average response duration of a request over the past 12 hours.](media/app-map/log-analytics.png)
### Alerts
-To view active alerts and the underlying rules that cause the alerts to be triggered, select **alerts**.
+To view active alerts and the underlying rules that cause the alerts to be triggered, select **Alerts**.
-![Screenshot of alerts button](media/app-map/alerts.png)
+![Screenshot that shows the Alerts button.](media/app-map/alerts.png)
-![Screenshot of analytics experience](media/app-map/alerts-view.png)
+![Screenshot that shows a list of alerts.](media/app-map/alerts-view.png)
## Set or override cloud role name Application Map uses the **cloud role name** property to identify the components on the map.
-Follow this guidance to manually set or override cloud role names and change what gets displayed on the Application Map:
+Follow this guidance to manually set or override cloud role names and change what appears on the application map.
> [!NOTE] > The Application Insights SDK or Agent automatically adds the cloud role name property to the telemetry emitted by components in an Azure App Service environment. # [.NET/.NetCore](#tab/net)
-**Write custom TelemetryInitializer as below.**
+**Write custom TelemetryInitializer**
```csharp using Microsoft.ApplicationInsights.Channel;
namespace CustomInitializer.Telemetry
**ASP.NET apps: Load initializer in the active TelemetryConfiguration**
-In ApplicationInsights.config:
+In `ApplicationInsights.config`:
```xml <ApplicationInsights>
In ApplicationInsights.config:
</ApplicationInsights> ```
-An alternate method for ASP.NET Web apps is to instantiate the initializer in code, for example in Global.aspx.cs:
+An alternate method for ASP.NET Web apps is to instantiate the initializer in code, for example, in `Global.aspx.cs`:
```csharp using Microsoft.ApplicationInsights.Extensibility;
An alternate method for ASP.NET Web apps is to instantiate the initializer in co
``` > [!NOTE]
-> Adding initializer using `ApplicationInsights.config` or using `TelemetryConfiguration.Active` is not valid for ASP.NET Core applications.
+> Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications.
-**ASP.NET Core apps: Load initializer to the TelemetryConfiguration**
+**ASP.NET Core apps: Load an initializer to TelemetryConfiguration**
-For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new `TelemetryInitializer` is done by adding it to the Dependency Injection container, as shown below. This step is done in `ConfigureServices` method of your `Startup.cs` class.
+For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, to add a new `TelemetryInitializer` instance, you add it to the Dependency Injection container, as shown. You do this step in the `ConfigureServices` method of your `Startup.cs` class.
```csharp using Microsoft.ApplicationInsights.Extensibility;
For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, a
# [Java](#tab/java)
-The cloud role name is set as follows:
+The cloud role name is set as shown in this example:
```json {
The cloud role name is set as follows:
} ```
-You can also set the cloud role name using via environment variable or system property,
-see [configuring cloud role name](./java-standalone-config.md#cloud-role-name) for details.
+You can also set the cloud role name via the environment variable or system property. For more information,
+see [Configuring cloud role name](./java-standalone-config.md#cloud-role-name).
# [Node.js](#tab/nodejs)
appInsights.addTelemetryInitializer((envelope) => {
# [Python](#tab/python)
-For Python, [OpenCensus Python telemetry processors](api-filtering-sampling.md#opencensus-python-telemetry-processors) can be used.
+For Python, you can use [OpenCensus Python telemetry processors](api-filtering-sampling.md#opencensus-python-telemetry-processors).
```python def callback_function(envelope):
exporter.add_telemetry_processor(callback_function)
```
-### Understanding cloud role name within the context of the Application Map
+### Understand the cloud role name within the context of an application map
-As far as how to think about **cloud role name**, it can be helpful to look at an Application Map that has multiple cloud role names present:
+To help you understand the concept of *cloud role names*, look at an application map that has multiple cloud role names present.
-![Application Map Screenshot](media/app-map/cloud-rolename.png)
+![Screenshot that shows an application map example.](media/app-map/cloud-rolename.png)
-In the Application Map above, each of the names in green boxes are cloud role name values for different aspects of this particular distributed application. So for this app its roles consist of: `Authentication`, `acmefrontend`, `Inventory Management`, a `Payment Processing Worker Role`.
+In the application map shown, each of the names in green boxes is a cloud role name value for different aspects of this particular distributed application. For this app, its roles consist of `Authentication`, `acmefrontend`, `Inventory Management`, and `Payment Processing Worker Role`.
-In this app, each of those cloud role names also represents a different unique Application Insights resource with their own instrumentation keys. Since the owner of this application has access to each of those four disparate Application Insights resources, Application Map is able to stitch together a map of the underlying relationships.
+In this app, each of the cloud role names also represents a different unique Application Insights resource with its own instrumentation keys. Because the owner of this application has access to each of those four disparate Application Insights resources, Application Map can stitch together a map of the underlying relationships.
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-dotnet/blob/39a5ef23d834777eefdd72149de705a016eb06b0/Schema/PublicSchema/ContextTagKeys.bond#L93):
For the [official definitions](https://github.com/Microsoft/ApplicationInsights-
715: string CloudRoleInstance = "ai.cloud.roleInstance"; ```
-Alternatively, **cloud role instance** can be helpful for scenarios where **cloud role name** tells you the problem is somewhere in your web front-end, but you might be running across your web front-end multiple load-balanced servers so being able to drill in a layer deeper via Kusto queries and knowing if the issue is impacting all web front-end servers/instances or just one can be important.
+Alternatively, *cloud role instance* can be helpful for scenarios where a cloud role name tells you the problem is somewhere in your web front end. But you might be running multiple load-balanced servers across your web front end. Being able to drill in a layer deeper via Kusto queries and knowing if the issue is affecting all web front-end servers or instances or just one can be important.
-A scenario where you might want to override the value for cloud role instance could be if your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a given issue.
+One scenario in which you might want to override the value for cloud role instance is when your app is running in a containerized environment. In this case, just knowing the individual server might not be enough information to locate a specific issue.
For more information about how to override the cloud role name property with telemetry initializers, see [Add properties: ITelemetryInitializer](api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
-## Application Map Intelligent View (public preview)
+## Application Map Intelligent view (public preview)
+
+The following sections discuss Intelligent view.
-### Intelligent View summary
+### Intelligent view summary
-Application Map's Intelligent View is designed to aid in service health investigations. It applies machine learning (ML) to quickly identify potential root cause(s) of issues by filtering out noise. The ML model learns from Application Map's historical behavior to identify dominant patterns and anomalies that indicate potential causes of an incident.
+Application Map **Intelligent view** is designed to aid in service health investigations. It applies machine learning to quickly identify potential root causes of issues by filtering out noise. The machine learning model learns from Application Map's historical behavior to identify dominant patterns and anomalies that indicate potential causes of an incident.
-In large distributed applications there's always some degree of noise coming from "benign" failures, which may cause Application Map to be noisy by showing many red edges. The Intelligent View shows only the most probable causes of service failure and removes node-to-node red edges (service-to-service communication) in healthy services. It not only highlights (in red) the edges that should be investigated but also offers actionable insights for the highlighted edge.
+In large distributed applications, there's always some degree of noise coming from "benign" failures, which might cause Application Map to be noisy by showing many red edges. Intelligent view shows only the most probable causes of service failure and removes node-to-node red edges (service-to-service communication) in healthy services. Intelligent view highlights the edges in red that should be investigated. It also offers actionable insights for the highlighted edge.
-### Intelligent View benefits
+### Intelligent view benefits
> [!div class="checklist"] > * Reduces time to resolution by highlighting only failures that need to be investigated > * Provides actionable insights on why a certain red edge was highlighted
-> * Enables Application Map to be used for large distributed applications seamlessly. (By focusing only on the edges marked red).
+> * Enables Application Map to be used for large distributed applications seamlessly (by focusing only on edges marked in red)
-### Enabling Intelligent View in Application Map
+### Enable Intelligent view in Application Map
-Enable the Intelligent View toggle. Optionally, to change the sensitivity of the detections choose--**Low**, **Medium**, or **High**. See more detail on [sensitivity here](#how-does-intelligent-view-sensitivity-work).
+Enable the **Intelligent view** toggle. Optionally, to change the sensitivity of the detections, select **Low**, **Medium**, or **High**. For more information, see the troubleshooting question about [sensitivity](#how-does-intelligent-view-sensitivity-work).
-After the Intelligent View has been enabled, select one of the highlighted edges to see the "actionable insights". The insights will be visible in the panel on the right and explain why the edge was highlighted.
+After you enable **Intelligent view**, select one of the highlighted edges to see the "actionable insights." The insights appear in the pane on the right and explain why the edge was highlighted.
-Begin your troubleshooting journey by selecting **Investigate Failures**. This button will launch the failures pane, in which you may investigate if the detected issue is the root cause. If no edges are red, the ML model didn't find potential incidents in the dependencies of your application.
+To begin troubleshooting, select **Investigate failures**. In the **Failures** pane that opens, investigate if the detected issue is the root cause. If no edges are red, the machine learning model didn't find potential incidents in the dependencies of your application.
-Provide your feedback by pressing the **Feedback** button on the map.
+To provide feedback, select the **Feedback** button on the map.
-### How does Intelligent View determine where red edges are highlighted?
+### How does Intelligent view determine where red edges are highlighted?
-Intelligent View uses the patented AIOps machine learning model to highlight what's truly important in an Application Map.
+Intelligent view uses the patented AIOps machine learning model to highlight what's truly important in an application map.
-A non-exhaustive list of example considerations includes:
+Some example considerations include:
* Failure rates * Request counts
A non-exhaustive list of example considerations includes:
For comparison, the normal view only utilizes the raw failure rate.
-### How does Intelligent View sensitivity work?
+### How does Intelligent view sensitivity work?
-Intelligent View sensitivity adjusts the probability that a service issue will be detected.
+Intelligent view sensitivity adjusts the probability that a service issue will be detected.
Adjust sensitivity to achieve the desired confidence level in highlighted edges.
-|Sensitivity Setting | Result |
+|Sensitivity setting | Result |
||| |High | Fewer edges will be highlighted. | |Medium (default) | A balanced number of edges will be highlighted. | |Low | More edges will be highlighted. |
-### Limitations of Intelligent View
+### Limitations of Intelligent view
-* Large distributed applications may take a minute to load Intelligent View.
-* Timeframes of up to seven days are supported.
+Intelligent view has some limitations:
-We would love to hear your feedback. ([Portal feedback](#portal-feedback))
+* Large distributed applications might take a minute to load Intelligent view.
+* Time frames of up to seven days are supported.
+
+To provide feedback, see [Portal feedback](#portal-feedback).
## Troubleshooting
-If you're having trouble getting Application Map to work as expected, try these steps:
+If you're having trouble getting Application Map to work as expected, try these steps.
### General
-1. Make sure you're using an officially supported SDK. Unsupported/community SDKs might not support correlation.
+1. Make sure you're using an officially supported SDK. Unsupported or community SDKs might not support correlation.
- Refer to this [article](./platforms.md) for a list of supported SDKs.
+ For a list of supported SDKs, see [Application Insights: Languages, platforms, and integrations](./platforms.md).
-2. Upgrade all components to the latest SDK version.
+1. Upgrade all components to the latest SDK version.
-3. If you're using Azure Functions with C#, upgrade to [Functions V2](../../azure-functions/functions-versions.md).
+1. If you're using Azure Functions with C#, upgrade to [Azure Functions V2](../../azure-functions/functions-versions.md).
-4. Confirm [cloud role name](#set-or-override-cloud-role-name) is correctly configured.
+1. Confirm the [cloud role name](#set-or-override-cloud-role-name) is correctly configured.
-5. If you're missing a dependency, make sure it's in the list of [auto-collected dependencies](./auto-collect-dependencies.md). If not, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
+1. If you're missing a dependency, make sure it's in the list of [autocollected dependencies](./auto-collect-dependencies.md). If not, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
### Too many nodes on the map
-Application Map constructs an application node for each unique cloud role name present in your request telemetry. In addition, a dependency node is constructed for each unique combination of type, target, and cloud role name.
+Application Map constructs an application node for each unique cloud role name present in your request telemetry. A dependency node is also constructed for each unique combination of type, target, and cloud role name.
-If there are more than 10,000 nodes in your telemetry, Application Map won't be able to fetch all the nodes and links, so your map will be incomplete. If this scenario occurs, a warning message will appear when viewing the map.
+If there are more than 10,000 nodes in your telemetry, Application Map can't fetch all the nodes and links, so your map will be incomplete. If this scenario occurs, a warning message appears when you view the map.
-Application Map only supports up to 1000 separate ungrouped nodes rendered at once. Application Map reduces visual complexity by grouping dependencies together that have the same type and callers.
+Application Map only supports up to 1,000 separate ungrouped nodes rendered at once. Application Map reduces visual complexity by grouping dependencies together that have the same type and callers.
-If your telemetry has too many unique cloud role names or too many dependency types, that grouping will be insufficient, and the map will be unable to render.
+If your telemetry has too many unique cloud role names or too many dependency types, that grouping will be insufficient and the map won't render.
To fix this issue, you'll need to change your instrumentation to properly set the cloud role name, dependency type, and dependency target fields.
-* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, If there are HTTP dependencies, it's set to the hostname. It shouldn't contain unique IDs or parameters that change from one request to another.
+* Dependency target should represent the logical name of a dependency. In many cases, it's equivalent to the server or resource name of the dependency. For example, if there are HTTP dependencies, it's set to the hostname. It shouldn't contain unique IDs or parameters that change from one request to another.
+
+* Dependency type should represent the logical type of a dependency. For example, HTTP, SQL, or Azure Blob are typical dependency types. It shouldn't contain unique IDs.
-* Dependency type should represent the logical type of a dependency. For example, HTTP, SQL or Azure Blob are typical dependency types. It shouldn't contain unique IDs.
+* The purpose of cloud role name is described in the [Set or override cloud role name](#set-or-override-cloud-role-name) section.
-* The purpose of cloud role name is described in the [above section](#set-or-override-cloud-role-name).
+### Intelligent view
-### Intelligent View
+Common troubleshooting questions about Intelligent view.
#### Why isn't this edge highlighted, even with low sensitivity?
-Try these steps if a dependency appears to be failing but the model doesn't indicate it's a potential incident:
+A dependency might appear to be failing even though the model doesn't indicate it's a potential incident. Possible reasons include:
-* If this dependency has been failing for a while now, the model might believe it's a regular state and not highlight the edge for you. It focuses on problem solving in RT.
-* If this dependency has a minimal effect on the overall performance of the app that can also make the model ignore it.
-* If none of the above is correct, use the **Feedback** option and describe your experience--you can help us improve future model versions.
+* If this dependency has been failing for a while, the model might believe it's a regular state and not highlight the edge for you. It focuses on problem solving in real time.
+* If this dependency has a minimal effect on the overall performance of the app, that can also make the model ignore it.
+* If none of the above is correct, use the **Feedback** option and describe your experience. You can help us improve future model versions.
#### Why is the edge highlighted?
-In a case where an edge is highlighted the explanation from the model should point you to the most important features that made the model give this dependency a high probability score. The recommendation isn't based solely on failures but other indicators like unexpected latency in dominant flows.
+If an edge is highlighted, the explanation from the model should point you to the most important features that made the model give this dependency a high probability score. The recommendation isn't based solely on failures but on other indicators like unexpected latency in dominant flows.
-#### Intelligent View doesn't load
+#### Why doesn't Intelligent view load?
-Follow these steps if Intelligent View doesn't load.
+If **Intelligent view** doesn't load:
1. Set the configured time frame to six days or less.
-1. The `Try preview` button must be selected to opt in.
+1. The **Try preview** button must be selected to opt in.
+ :::image type="content" source="media/app-map/intelligent-view-try-preview.png" alt-text="Screenshot that shows the Try preview button in the Application Map user interface." lightbox="media/app-map/intelligent-view-try-preview.png":::
-#### Intelligent View takes a long time to load
+#### Why does Intelligent view take a long time to load?
-Avoid selecting the **Update Map Component**.
+Avoid selecting **Update map components**.
-Enable Intelligent View only for a single Application Insight resource.
+Enable **Intelligent view** only for a single Application Insights resource.
## Portal feedback

To provide feedback, use the feedback option.
-![MapLink-1 image](./media/app-map/14-updated.png)
+![Screenshot that shows the Feedback option.](./media/app-map/14-updated.png)
## Next steps
-* To learn more about how correlation works in Application Insights consult the [telemetry correlation article](correlation.md).
-* The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights monitored components into a single view.
-* For advanced correlation scenarios in ASP.NET Core and ASP.NET, consult the [track custom operations](custom-operations-tracking.md) article.
+* To learn more about how correlation works in Application Insights, see [Telemetry correlation](correlation.md).
+* The [end-to-end transaction diagnostic experience](transaction-diagnostics.md) correlates server-side telemetry from across all your Application Insights-monitored components into a single view.
+* For advanced correlation scenarios in ASP.NET Core and ASP.NET, see [Track custom operations](custom-operations-tracking.md).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency Tracking in Azure Application Insights | Microsoft Docs
-description: Monitor dependency calls from your on-premises or Microsoft Azure web application with Application Insights.
+ Title: Dependency tracking in Application Insights | Microsoft Docs
+description: Monitor dependency calls from your on-premises or Azure web application with Application Insights.
Last updated 08/26/2020 ms.devlang: csharp
-# Dependency Tracking in Azure Application Insights
+# Dependency tracking in Application Insights
-A *dependency* is a component that is called by your application. It's typically a service called using HTTP, or a database, or a file system. [Application Insights](./app-insights-overview.md) measures the duration of dependency calls, whether it's failing or not, along with additional information like name of dependency and so on. You can investigate specific dependency calls, and correlate them to requests and exceptions.
+A *dependency* is a component that's called by your application. It's typically a service called by using HTTP, a database, or a file system. [Application Insights](./app-insights-overview.md) measures the duration of dependency calls and whether it's failing or not, along with information like the name of the dependency. You can investigate specific dependency calls and correlate them to requests and exceptions.
## Automatically tracked dependencies
-Application Insights SDKs for .NET and .NET Core ships with `DependencyTrackingTelemetryModule`, which is a Telemetry Module that automatically collects dependencies. This dependency collection is enabled automatically for [ASP.NET](./asp-net.md) and [ASP.NET Core](./asp-net-core.md) applications, when configured as per the linked official docs. `DependencyTrackingTelemetryModule` is shipped as [this](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/) NuGet package, and is brought automatically when using either of the NuGet packages `Microsoft.ApplicationInsights.Web` or `Microsoft.ApplicationInsights.AspNetCore`.
+Application Insights SDKs for .NET and .NET Core ship with `DependencyTrackingTelemetryModule`, which is a telemetry module that automatically collects dependencies. This dependency collection is enabled automatically for [ASP.NET](./asp-net.md) and [ASP.NET Core](./asp-net-core.md) applications when it's configured according to the linked official docs. The module `DependencyTrackingTelemetryModule` is shipped as the [Microsoft.ApplicationInsights.DependencyCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DependencyCollector/) NuGet package. It's brought automatically when you use either the `Microsoft.ApplicationInsights.Web` NuGet package or the `Microsoft.ApplicationInsights.AspNetCore` NuGet package.
- `DependencyTrackingTelemetryModule` currently tracks the following dependencies automatically:
+ Currently, `DependencyTrackingTelemetryModule` tracks the following dependencies automatically:
|Dependencies |Details|
||-|
-|Http/Https | Local or Remote http/https calls |
-|WCF calls| Only tracked automatically if Http-based bindings are used.|
-|SQL | Calls made with `SqlClient`. See [this documentation](#advanced-sql-tracking-to-get-full-sql-query) for capturing SQL query. |
-|[Azure storage (Blob, Table, Queue )](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with Azure Storage Client. |
-|[EventHub Client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package. https://nuget.org/packages/Azure.Messaging.EventHubs |
-|[ServiceBus Client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package. https://nuget.org/packages/Azure.Messaging.ServiceBus |
+|HTTP/HTTPS | Local or remote HTTP/HTTPS calls. |
+|WCF calls| Only tracked automatically if HTTP-based bindings are used.|
+|SQL | Calls made with `SqlClient`. See the section [Advanced SQL tracking to get full SQL query](#advanced-sql-tracking-to-get-full-sql-query) for capturing SQL queries. |
+|[Azure Blob Storage, Table Storage, or Queue Storage](https://www.nuget.org/packages/WindowsAzure.Storage/) | Calls made with the Azure Storage client. |
+|[Azure Event Hubs client SDK](https://nuget.org/packages/Azure.Messaging.EventHubs) | Use the latest package: https://nuget.org/packages/Azure.Messaging.EventHubs. |
+|[Azure Service Bus client SDK](https://nuget.org/packages/Azure.Messaging.ServiceBus)| Use the latest package: https://nuget.org/packages/Azure.Messaging.ServiceBus. |
|Azure Cosmos DB | Only tracked automatically if HTTP/HTTPS is used. TCP mode won't be captured by Application Insights. |
-If you're missing a dependency, or using a different SDK make sure it's in the list of [auto-collected dependencies](./auto-collect-dependencies.md). If the dependency isn't auto-collected, you can still track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
+If you're missing a dependency or using a different SDK, make sure it's in the list of [autocollected dependencies](./auto-collect-dependencies.md). If the dependency isn't autocollected, you can track it manually with a [track dependency call](./api-custom-events-metrics.md#trackdependency).
-## Setup automatic dependency tracking in Console Apps
+## Set up automatic dependency tracking in console apps
-To automatically track dependencies from .NET console apps, install the NuGet package `Microsoft.ApplicationInsights.DependencyCollector`, and initialize `DependencyTrackingTelemetryModule` as follows:
+To automatically track dependencies from .NET console apps, install the NuGet package `Microsoft.ApplicationInsights.DependencyCollector` and initialize `DependencyTrackingTelemetryModule`:
```csharp
DependencyTrackingTelemetryModule depModule = new DependencyTrackingTelemetryModule();
depModule.Initialize(TelemetryConfiguration.Active);
```
-For .NET Core console apps, `TelemetryConfiguration.Active` is obsolete. Refer to the guidance in the [worker service documentation](./worker-service.md) and the [ASP.NET Core monitoring documentation](./asp-net-core.md)
+For .NET Core console apps, `TelemetryConfiguration.Active` is obsolete. See the guidance in the [Worker service documentation](./worker-service.md) and the [ASP.NET Core monitoring documentation](./asp-net-core.md).
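As a rough sketch, the worker-service approach looks like the following. It assumes the Microsoft.ApplicationInsights.WorkerService NuGet package, and the connection string is a placeholder:

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.Extensions.DependencyInjection;

// Register Application Insights for a non-HTTP (worker or console) process.
// This enables dependency collection without TelemetryConfiguration.Active.
var services = new ServiceCollection();
services.AddApplicationInsightsTelemetryWorkerService(options =>
{
    options.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000"; // placeholder
});

using var provider = services.BuildServiceProvider();
var telemetryClient = provider.GetRequiredService<TelemetryClient>();

// ... your work here; supported outgoing calls are tracked automatically ...

// Flush before the process exits so buffered telemetry isn't lost.
telemetryClient.Flush();
```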
-### How automatic dependency monitoring works?
+### How does automatic dependency monitoring work?
Dependencies are automatically collected by using one of the following techniques:
-* Using byte code instrumentation around select methods. (InstrumentationEngine either from StatusMonitor or Azure Web App Extension)
-* EventSource callbacks
-* DiagnosticSource callbacks (in the latest .NET/.NET Core SDKs)
+* Using byte code instrumentation around select methods. Use `InstrumentationEngine` either from `StatusMonitor` or an Azure App Service Web Apps extension.
+* `EventSource` callbacks.
+* `DiagnosticSource` callbacks in the latest .NET or .NET Core SDKs.
## Manually tracking dependencies
-The following are some examples of dependencies, which aren't automatically collected, and hence require manual tracking.
+The following examples of dependencies, which aren't automatically collected, require manual tracking:
* Azure Cosmos DB is tracked automatically only if [HTTP/HTTPS](../../cosmos-db/performance-tips.md#networking) is used. TCP mode won't be captured by Application Insights.
* Redis
-For those dependencies not automatically collected by SDK, you can track them manually using the [TrackDependency API](api-custom-events-metrics.md#trackdependency) that is used by the standard auto collection modules.
+For those dependencies not automatically collected by SDK, you can track them manually by using the [TrackDependency API](api-custom-events-metrics.md#trackdependency) that's used by the standard autocollection modules.
**Example**
+
If you build your code with an assembly that you didn't write yourself, you could time all the calls to it. This scenario would allow you to find out what contribution it makes to your response times.
-To have this data displayed in the dependency charts in Application Insights, send it using `TrackDependency`.
+To have this data displayed in the dependency charts in Application Insights, send it by using `TrackDependency`:
```csharp
To have this data displayed in the dependency charts in Application Insights, se
}
```
-Alternatively, `TelemetryClient` provides extension methods `StartOperation` and `StopOperation`, which can be used to manually track dependencies as shown [here](custom-operations-tracking.md#outgoing-dependencies-tracking)
+Alternatively, `TelemetryClient` provides the extension methods `StartOperation` and `StopOperation`, which can be used to manually track dependencies as shown in [Outgoing dependencies tracking](custom-operations-tracking.md#outgoing-dependencies-tracking).
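A short sketch of that pattern, assuming an existing `TelemetryClient` and an arbitrary asynchronous call to time; the operation name, type, and target are illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class QueueDependencyTracker
{
    public static async Task SendWithTrackingAsync(TelemetryClient telemetryClient, Func<Task> sendAsync)
    {
        // StartOperation creates a DependencyTelemetry item and starts timing it.
        using var operation = telemetryClient.StartOperation<DependencyTelemetry>("Enqueue message");
        operation.Telemetry.Type = "Queue";          // logical dependency type (illustrative)
        operation.Telemetry.Target = "orders-queue"; // logical dependency target (illustrative)

        try
        {
            await sendAsync();
            operation.Telemetry.Success = true;
        }
        catch (Exception)
        {
            operation.Telemetry.Success = false;
            throw;
        }
        // Disposing the operation stops the timer and sends the dependency item.
    }
}
```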
-If you want to switch off the standard dependency tracking module, remove the reference to DependencyTrackingTelemetryModule in [ApplicationInsights.config](../../azure-monitor/app/configuration-with-applicationinsights-config.md) for ASP.NET applications. For ASP.NET Core applications, follow instructions [here](asp-net-core.md#configuring-or-removing-default-telemetrymodules).
+If you want to switch off the standard dependency tracking module, remove the reference to `DependencyTrackingTelemetryModule` in [ApplicationInsights.config](../../azure-monitor/app/configuration-with-applicationinsights-config.md) for ASP.NET applications. For ASP.NET Core applications, follow the instructions in [Application Insights for ASP.NET Core applications](asp-net-core.md#configuring-or-removing-default-telemetrymodules).
-## Tracking AJAX calls from Web Pages
+## Track AJAX calls from webpages
-For web pages, Application Insights JavaScript SDK automatically collects AJAX calls as dependencies.
+For webpages, the Application Insights JavaScript SDK automatically collects AJAX calls as dependencies.
-## Advanced SQL tracking to get full SQL Query
+## Advanced SQL tracking to get full SQL query
> [!NOTE]
-> Azure Functions requires separate settings to enable SQL text collection: within [host.json](../../azure-functions/functions-host-json.md#applicationinsights) set `"EnableDependencyTracking": true,` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in `applicationInsights`.
+> Azure Functions requires separate settings to enable SQL text collection. Within [host.json](../../azure-functions/functions-host-json.md#applicationinsights), set `"EnableDependencyTracking": true,` and `"DependencyTrackingOptions": { "enableSqlCommandTextInstrumentation": true }` in `applicationInsights`.
+
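For example, a minimal host.json sketch with those settings. It assumes they sit under `logging` > `applicationInsights`, and the key names are taken from the note above; verify the placement against the linked host.json reference:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "EnableDependencyTracking": true,
      "DependencyTrackingOptions": {
        "enableSqlCommandTextInstrumentation": true
      }
    }
  }
}
```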
+For SQL calls, the name of the server and database is always collected and stored as the name of the collected `DependencyTelemetry`. Another field, called data, can contain the full SQL query text.
-For SQL calls, the name of the server and database is always collected and stored as name of the collected `DependencyTelemetry`. There's another field called 'data', which can contain the full SQL query text.
+For ASP.NET Core applications, it's now required to opt in to SQL text collection by using the following code:
-For ASP.NET Core applications, It's now required to opt in to SQL Text collection by using
```csharp
services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) =>
{
    module.EnableSqlCommandTextInstrumentation = true;
});
```
-For ASP.NET applications, full SQL query text is collected with the help of byte code instrumentation, which requires using the instrumentation engine or by using the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package instead of the System.Data.SqlClient library. Platform specific steps to enable full SQL Query collection are described below:
+For ASP.NET applications, the full SQL query text is collected with the help of byte code instrumentation, which requires using either the instrumentation engine or the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package instead of the System.Data.SqlClient library. Platform-specific steps to enable full SQL query collection are described in the following table.
-| Platform | Step(s) Needed to get full SQL Query |
+| Platform | Steps needed to get full SQL query |
| | |
-| Azure Web App |In your web app control panel, [open the Application Insights pane](../../azure-monitor/app/azure-web-apps.md) and enable SQL Commands under .NET |
-| IIS Server (Azure VM, on-premises, and so on.) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Status Monitor PowerShell Module to [install the Instrumentation Engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. |
-| Azure Cloud Service | Add [startup task to install StatusMonitor](../../azure-monitor/app/azure-web-apps-net-core.md) <br> Your app should be onboarded to ApplicationInsights SDK at build time by installing NuGet packages for [ASP.NET](./asp-net.md) or [ASP.NET Core applications](./asp-net-core.md) |
+| Web Apps in Azure App Service|In your web app control panel, [open the Application Insights pane](../../azure-monitor/app/azure-web-apps.md) and enable SQL Commands under .NET. |
+| IIS Server (Azure Virtual Machines, on-premises, and so on) | Either use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package or use the Status Monitor PowerShell Module to [install the instrumentation engine](../../azure-monitor/app/status-monitor-v2-api-reference.md#enable-instrumentationengine) and restart IIS. |
+| Azure Cloud Services | Add a [startup task to install StatusMonitor](../../azure-monitor/app/azure-web-apps-net-core.md). <br> Your app should be onboarded to the ApplicationInsights SDK at build time by installing NuGet packages for [ASP.NET](./asp-net.md) or [ASP.NET Core applications](./asp-net-core.md). |
| IIS Express | Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package.
-| Azure Web Jobs | Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package.
+| WebJobs in Azure App Service| Use the [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) NuGet package.
-In addition to the platform specific steps above, you **must also explicitly opt-in to enable SQL command collection** by modifying the applicationInsights.config file with the following:
+In addition to the preceding platform-specific steps, you *must also explicitly opt in to enable SQL command collection* by modifying the `applicationInsights.config` file with the following code:
```xml
<TelemetryModules>
In addition to the platform specific steps above, you **must also explicitly opt
</Add>
```
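For reference, here's a sketch of what that fragment typically looks like, assuming the standard `DependencyTrackingTelemetryModule` entry that the SDK adds to ApplicationInsights.config; verify the type name against your own file:

```xml
<TelemetryModules>
  <Add Type="Microsoft.ApplicationInsights.DependencyCollector.DependencyTrackingTelemetryModule, Microsoft.AI.DependencyCollector">
    <EnableSqlCommandTextInstrumentation>true</EnableSqlCommandTextInstrumentation>
  </Add>
</TelemetryModules>
```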
-In the above cases, the correct way of validating that instrumentation engine is correctly installed is by validating that the SDK version of collected `DependencyTelemetry` is 'rddp'. 'rdddsd' or 'rddf' indicates dependencies are collected via DiagnosticSource or EventSource callbacks, and hence full SQL query won't be captured.
+In the preceding cases, the proper way of validating that the instrumentation engine is correctly installed is by validating that the SDK version of collected `DependencyTelemetry` is `rddp`. Use of `rdddsd` or `rddf` indicates dependencies are collected via `DiagnosticSource` or `EventSource` callbacks, so the full SQL query won't be captured.
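One way to check which collection path is in use is to break dependencies down by SDK version in Log Analytics. This sketch assumes the classic Application Insights schema, where the version is exposed as `sdkVersion`:

```kusto
dependencies
| where timestamp > ago(1d)
| summarize count() by sdkVersion
```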
## Where to find dependency data
In the above cases, the correct way of validating that instrumentation engine is
## <a name="diagnosis"></a> Diagnose slow requests
-Each request event is associated with the dependency calls, exceptions, and other events tracked while processing the request. So if some requests are doing badly, you can find out whether it's because of slow responses from a dependency.
+Each request event is associated with the dependency calls, exceptions, and other events tracked while processing the request. So, if some requests are doing badly, you can find out whether it's because of slow responses from a dependency.
### Tracing from requests to dependencies
-Open the **Performance** tab and navigate to the **Dependencies** tab at the top next to operations.
+Select the **Performance** tab on the left and select the **Dependencies** tab at the top.
-Select a **Dependency Name** under overall. After you select a dependency, a graph of that dependency's distribution of durations will show up on the right.
+Select a **Dependency Name** under **Overall**. After you select a dependency, a graph of that dependency's distribution of durations appears on the right.
-![In the performance tab click on the Dependency tab at the top then a Dependency name in the chart](./media/asp-net-dependencies/2-perf-dependencies.png)
+![Screenshot that shows the Dependencies tab open to select a Dependency Name in the chart.](./media/asp-net-dependencies/2-perf-dependencies.png)
-Select the blue **Samples** button on the bottom right and then on a sample to see the end-to-end transaction details.
+Select the **Samples** button at the bottom right. Then select a sample to see the end-to-end transaction details.
-![Click on a sample to see the end-to-end transaction details](./media/asp-net-dependencies/3-end-to-end.png)
+![Screenshot that shows selecting a sample to see the end-to-end transaction details.](./media/asp-net-dependencies/3-end-to-end.png)
### Profile your live site
-No idea where the time goes? The [Application Insights profiler](../../azure-monitor/app/profiler.md) traces HTTP calls to your live site and shows you the functions in your code that took the longest time.
+The [Application Insights profiler](../../azure-monitor/app/profiler.md) traces HTTP calls to your live site and shows you the functions in your code that took the longest time.
## Failed requests

Failed requests might also be associated with failed calls to dependencies.
-We can go to the **Failures** tab on the left and then select on the **dependencies** tab at the top.
+Select the **Failures** tab on the left and then select the **Dependencies** tab at the top.
-![Click the failed requests chart](./media/asp-net-dependencies/4-fail.png)
+![Screenshot that shows selecting the failed requests chart.](./media/asp-net-dependencies/4-fail.png)
-Here you'll be able to see the failed dependency count. To get more details about a failed occurrence trying clicking on a dependency name in the bottom table. You can select the blue **Dependencies** button at the bottom right to get the end-to-end transaction details.
+Here you'll see the failed dependency count. To get more information about a failed occurrence, select a **Dependency Name** in the bottom table. Select the **Dependencies** button at the bottom right to see the end-to-end transaction details.
## Logs (Analytics)

You can track dependencies in the [Kusto query language](/azure/kusto/query/). Here are some examples.

* Find any failed dependency calls:
-
-``` Kusto
-
- dependencies | where success != "True" | take 10
-```
+
+ ``` Kusto
+
+ dependencies | where success != "True" | take 10
+ ```
* Find AJAX calls:
-``` Kusto
-
- dependencies | where client_Type == "Browser" | take 10
-```
+ ``` Kusto
+
+ dependencies | where client_Type == "Browser" | take 10
+ ```
* Find dependency calls associated with requests:
-
-``` Kusto
-
- dependencies
- | where timestamp > ago(1d) and client_Type != "Browser"
- | join (requests | where timestamp > ago(1d))
- on operation_Id
-```
-
+
+ ``` Kusto
+
+ dependencies
+ | where timestamp > ago(1d) and client_Type != "Browser"
+ | join (requests | where timestamp > ago(1d))
+ on operation_Id
+ ```
* Find AJAX calls associated with page views:
+
+ ``` Kusto
+
+ dependencies
+ | where timestamp > ago(1d) and client_Type == "Browser"
+ | join (browserTimings | where timestamp > ago(1d))
+ on operation_Id
+ ```
-``` Kusto
+## Frequently asked questions
- dependencies
- | where timestamp > ago(1d) and client_Type == "Browser"
- | join (browserTimings | where timestamp > ago(1d))
- on operation_Id
-```
+This section provides answers to common questions.
-## Frequently asked questions
+### How does the automatic dependency collector report failed calls to dependencies?
-### *How does automatic dependency collector report failed calls to dependencies?*
+Failed dependency calls will have the `success` field set to False. The module `DependencyTrackingTelemetryModule` doesn't report `ExceptionTelemetry`. The full data model for dependency is described in [Dependency telemetry: Application Insights data model](data-model-dependency-telemetry.md).
-* Failed dependency calls will have 'success' field set to False. `DependencyTrackingTelemetryModule` doesn't report `ExceptionTelemetry`. The full data model for dependency is described [here](data-model-dependency-telemetry.md).
+### How do I calculate ingestion latency for my dependency telemetry?
-### *How do I calculate ingestion latency for my dependency telemetry?*
+Use this code:
```kusto
dependencies
dependencies
| extend TimeIngested = ingestion_time()
```
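A slightly fuller sketch of the same idea, aggregating end-to-end ingestion latency per hour (assuming the standard `dependencies` schema):

```kusto
dependencies
| extend e2eIngestionLatency = ingestion_time() - timestamp
| summarize avg(e2eIngestionLatency), max(e2eIngestionLatency) by bin(timestamp, 1h)
```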
-### *How do I determine the time the dependency call was initiated?*
+### How do I determine the time the dependency call was initiated?
-In the Log Analytics query view, `timestamp` represents the moment the TrackDependency() call was initiated which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
+In the Log Analytics query view, `timestamp` represents the moment the TrackDependency() call was initiated, which occurs immediately after the dependency call response is received. To calculate the time when the dependency call began, you would take `timestamp` and subtract the recorded `duration` of the dependency call.
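For example, assuming `duration` is recorded in milliseconds, as it is in the `dependencies` table, a sketch of that calculation is:

```kusto
dependencies
| extend dependencyStartTime = timestamp - (duration * 1ms)
| project timestamp, dependencyStartTime, duration, name
| take 10
```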
## Open-source SDK
-Like every Application Insights SDK, dependency collection module is also open-source. Read and contribute to the code, or report issues at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet).
+
+Like every Application Insights SDK, the dependency collection module is also open source. Read and contribute to the code or report issues at [the official GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet).
## Next steps

* [Exceptions](./asp-net-exceptions.md)
-* [User & page data](./javascript.md)
-* [Availability](./monitor-web-app-availability.md)
-
+* [User and page data](./javascript.md)
+* [Availability](./monitor-web-app-availability.md)
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
This article will cover how to create an Azure Function with TrackAvailability()
## Create a timer trigger function
-1. Create a Azure Functions resource.
+1. Create an Azure Functions resource.
- If you already have an Application Insights resource:
  - By default, Azure Functions creates an Application Insights resource. If you want to use one of your already created resources, you need to specify that during creation.
  - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
You can use Logs (Analytics) to view your availability results, dependencies, and
## Next steps

- [Application Map](./app-map.md)
-- [Transaction diagnostics](./transaction-diagnostics.md)
+- [Transaction diagnostics](./transaction-diagnostics.md)
azure-monitor Create Workspace Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-workspace-resource.md
# Workspace-based Application Insights resources
-Workspace-based resources support full integration between Application Insights and Log Analytics. You can now choose to send your Application Insights telemetry to a common Log Analytics workspace, which allows you full access to all the features of Log Analytics while keeping application, infrastructure, and platform logs in a single consolidated location.
+Workspace-based resources support full integration between Application Insights and Log Analytics. Now you can send your Application Insights telemetry to a common Log Analytics workspace. You'll have full access to all the features of Log Analytics, while your application, infrastructure, and platform logs remain in a single consolidated location.
-This also allows for common Azure role-based access control (Azure RBAC) across your resources, and eliminates the need for cross-app/workspace queries.
+This integration allows for common Azure role-based access control across your resources. It also eliminates the need for cross-app/workspace queries.
> [!NOTE]
-> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. [Learn more](../logs/cost-logs.md) about billing for workspace-based Application Insights resources.
+> Data ingestion and retention for workspace-based Application Insights resources are billed through the Log Analytics workspace where the data is located. To learn more about billing for workspace-based Application Insights resources, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-
## New capabilities
-Workspace-based Application Insights allows you to take advantage of the latest capabilities of Azure Monitor and Log Analytics including:
+With workspace-based Application Insights, you can take advantage of the latest capabilities of Azure Monitor and Log Analytics. For example:
-* [Customer-Managed Keys (CMK)](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access.
-* [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints.
-* [Bring Your Own Storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
-* [Commitment Tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the Pay-As-You-Go price.
-* Faster data ingestion via Log Analytics streaming ingestion.
+* [Customer-managed key](../logs/customer-managed-keys.md) provides encryption at rest for your data with encryption keys to which only you have access.
+* [Azure Private Link](../logs/private-link-security.md) allows you to securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints.
+* [Bring your own storage (BYOS) for Profiler and Snapshot Debugger](./profiler-bring-your-own-storage.md) gives you full control over the encryption-at-rest policy, the lifetime management policy, and network access for all data associated with Application Insights Profiler and Snapshot Debugger.
+* [Commitment tiers](../logs/cost-logs.md#commitment-tiers) enable you to save as much as 30% compared to the pay-as-you-go price.
+* Log Analytics streaming ingests data faster.
-## Create workspace-based resource
+## Create a workspace-based resource
-Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource:
+Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource.
> [!div class="mx-imgBorder"]
-> ![Workspace-based Application Insights resource](./media/create-workspace-resource/create-workspace-based.png)
+> ![Screenshot that shows a workspace-based Application Insights resource.](./media/create-workspace-resource/create-workspace-based.png)
-If you don't already have an existing Log Analytics Workspace, [consult the Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
+If you don't have an existing Log Analytics workspace, see the [Log Analytics workspace creation documentation](../logs/quick-create-workspace.md).
-**Workspace-based resources are currently available in all commercial regions and Azure Government**
+*Workspace-based resources are currently available in all commercial regions and Azure Government.*
-Once your resource is created, you will see the corresponding workspace info in the **Overview** pane:
+After you create your resource, you'll see corresponding workspace information in the **Overview** pane.
-![Workspace Name](./media/create-workspace-resource/workspace-name.png)
+![Screenshot that shows a workspace name.](./media/create-workspace-resource/workspace-name.png)
-Clicking the blue link text will take you to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
+Select the blue link text to go to the associated Log Analytics workspace where you can take advantage of the new unified workspace query environment.
> [!NOTE]
-> We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience. To query/view against the [new workspace-based table structure/schema](convert-classic-resource.md#workspace-based-resource-changes) you must first navigate to your Log Analytics workspace. Selecting **Logs (Analytics)** from within the Application Insights panes will give you access to the classic Application Insights query experience.
+> We still provide full backward compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts. To query or view the [new workspace-based table structure or schema](convert-classic-resource.md#workspace-based-resource-changes), you must first go to your Log Analytics workspace. Select **Logs (Analytics)** in the **Application Insights** panes for access to the classic Application Insights query experience.
## Copy the connection string
-The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. It also allows you to modify the endpoints your resource will use as a destination for your telemetry. You will need to copy the connection string and add it to your application's code or to an environment variable.
+The [connection string](./sdk-connection-string.md?tabs=net) identifies the resource that you want to associate your telemetry data with. You can also use it to modify the endpoints your resource will use as a destination for your telemetry. You must copy the connection string and add it to your application's code or to an environment variable.
-## Monitoring configuration
+## Configure monitoring
-Once a workspace-based Application Insights resource has been created, configuring monitoring is relatively straightforward.
+After you've created a workspace-based Application Insights resource, you configure monitoring.
### Code-based application monitoring
-For code-based application monitoring, you would just install the appropriate Application Insights SDK and point it to the instrumentation key or connection string to your newly created resource.
+For code-based application monitoring, you install the appropriate Application Insights SDK and point the instrumentation key or connection string to your newly created resource.
-For detailed documentation on setting up an Application Insights SDK for code-based monitoring consult the language/framework specific documentation:
+For information on how to set up an Application Insights SDK for code-based monitoring, see the following documentation specific to the language or framework:
- [ASP.NET](./asp-net.md)
- [ASP.NET Core](./asp-net-core.md)
-- [Background tasks & modern console applications (.NET/.NET Core)](./worker-service.md)
-- [Classic console applications (.NET)](./console.md)
+- [Background tasks and modern console applications (.NET/.NET Core)](./worker-service.md)
+- [Classic console applications (.NET)](./console.md)
- [Java](./java-in-process-agent.md) - [JavaScript](./javascript.md) - [Node.js](./nodejs.md)
For detailed documentation on setting up an Application Insights SDK for code-ba
### Codeless monitoring and Visual Studio resource creation
-For codeless monitoring of services like Azure Functions and Azure App Services, you will also need to first create your workspace-based Application Insights resource and then point to that resource during the monitoring configuration phase.
+For codeless monitoring of services like Azure Functions and Azure App Services, you first create your workspace-based Application Insights resource. Then you point to that resource when you configure monitoring.
-While these services offer the option to create a new Application Insights resource within their own resource creation process, resources created via these UI options are currently restricted to the classic Application Insights experience.
+These services offer the option to create a new Application Insights resource within their own resource creation process. But resources created via these UI options are currently restricted to the classic Application Insights experience.
-The same applies to the Application Insights resource creation experience in Visual Studio for ASP.NET and ASP.NET Core. You must select an existing workspace-based resource from with the Visual Studio monitoring enablement UI. Selecting create new resource from within Visual Studio will limit you to creating a classic Application Insights resource.
+The same restriction applies to the Application Insights resource creation experience in Visual Studio for ASP.NET and ASP.NET Core. You must select an existing workspace-based resource in the Visual Studio UI where you enable monitoring. Selecting **Create new resource** in Visual Studio limits you to creating a classic Application Insights resource.
-## Creating a resource automatically
+## Create a resource automatically
### Azure CLI
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights
```
-If you don't run the `az extension add` command, you will see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
+If you don't run the `az extension add` command, you'll see an error message that states `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'`.
-Now you can run the following to create your Application Insights resource:
+Now you can run the following code to create your Application Insights resource:
```azurecli
az monitor app-insights component create --app
az monitor app-insights component create --app
az monitor app-insights component create --app demoApp --location eastus --kind web -g my_resource_group --workspace "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555"
```
-For the full Azure CLI documentation for this command, consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
+For the full Azure CLI documentation for this command, see the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
+
+### Azure PowerShell
-### Azure PowerShell
-Create a new workspace-based Application Insights resource
+Create a new workspace-based Application Insights resource.
```powershell
New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location <String> -WorkspaceResourceId <String>
New-AzApplicationInsights -Name <String> -ResourceGroupName <String> -Location <
New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus -WorkspaceResourceId "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/test1234/providers/microsoft.operationalinsights/workspaces/test1234555"
```
-For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key consult the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
-
+For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
### Azure Resource Manager templates
- To create a workspace-based resource, you can use the Azure Resource Manager templates below and deploy with PowerShell.
+ To create a workspace-based resource, use the following Azure Resource Manager templates and deploy them with PowerShell.
#### Template file
For the full PowerShell documentation for this cmdlet, and to learn how to retri
```

> [!NOTE]
-> * For more information on resource properties, see [Property values](/azure/templates/microsoft.insights/components?tabs=bicep#property-values)
-> * Flow_Type and Request_Source are not used, but are included in this sample for completeness.
+> For more information on resource properties, see [Property values](/azure/templates/microsoft.insights/components?tabs=bicep#property-values).
+> `Flow_Type` and `Request_Source` aren't used but are included in this sample for completeness.
#### Parameters file
For the full PowerShell documentation for this cmdlet, and to learn how to retri
```
-## Modifying the associated workspace
+## Modify the associated workspace
-Once a workspace-based Application Insights resource has been created, you can modify the associated Log Analytics Workspace.
+After you've created a workspace-based Application Insights resource, you can modify the associated Log Analytics workspace.
-From within the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**
+In the Application Insights resource pane, select **Properties** > **Change Workspace** > **Log Analytics Workspaces**.
## Export telemetry
-The legacy continuous export functionality is not supported for workspace-based resources. Instead, select **Diagnostic settings** > **add diagnostic setting** from within your Application Insights resource. You can select all tables, or a subset of tables to archive to a storage account, or to stream to an Azure Event Hub.
+The legacy continuous export functionality isn't supported for workspace-based resources. Instead, select **Diagnostic settings** > **Add diagnostic setting** in your Application Insights resource. You can select all tables, or a subset of tables, to archive to a storage account. You can also stream to an Azure event hub.
> [!NOTE]
-> * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
-> * Pricing information for this feature will be available on the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Prior to the start of billing, notifications will be sent. Should you choose to continue using telemetry export after the notice period, you will be billed at the applicable rate.
+> Diagnostic settings export might increase costs. For more information, see [Export telemetry from Application Insights](export-telemetry.md#diagnostic-settings-based-export).
+> For pricing information for this feature, see the [Azure Monitor pricing page](https://azure.microsoft.com/pricing/details/monitor/). Prior to the start of billing, notifications will be sent. If you continue to use telemetry export after the notice period, you'll be billed at the applicable rate.
## Next steps

* [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
+* [Write Log Analytics queries](../logs/log-query-overview.md)
azure-monitor Data Model Pageview Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-pageview-telemetry.md
Title: Azure Application Insights Data Model - PageView Telemetry description: Application Insights data model for page view telemetry Previously updated : 03/24/2022 Last updated : 09/07/2022 # PageView telemetry: Application Insights data model
-PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and is not necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages is not tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
+PageView telemetry (in [Application Insights](./app-insights-overview.md)) is logged when an application user opens a new page of a monitored application. The `Page` in this context is a logical unit that is defined by the developer to be an application tab or a screen and isn't necessarily correlated to a browser webpage load or refresh action. This distinction can be further understood in the context of single-page applications (SPA) where the switch between pages isn't tied to browser page actions. [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) is the time it takes for the application to present the page to the user.
> [!NOTE]
-> By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+> * By default, Application Insights SDKs log single PageView events on each browser webpage load action, with [`pageViews.duration`](/azure/azure-monitor/reference/tables/pageviews) populated by [browser timing](#measuring-browsertiming-in-application-insights). Developers can extend additional tracking of PageView events by using the [trackPageView API call](./api-custom-events-metrics.md#page-views).
+> * The default logs retention is 30 days and needs to be adjusted if you want to view page view statistics over a longer period of time.
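For example, a Log Analytics sketch over the `pageViews` table that summarizes how often each page is viewed and how long it takes to present, assuming the classic Application Insights column names:

```kusto
pageViews
| where timestamp > ago(7d)
| summarize viewCount = count(), avgDuration = avg(duration) by name
| order by viewCount desc
```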
## Measuring browserTiming in Application Insights

Modern browsers expose measurements for page load actions with the [Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance_API). Application Insights simplifies these measurements by consolidating related timings into [standard browser metrics](../essentials/metrics-supported.md#microsoftinsightscomponents) as defined by these processing time definitions:
-1. Client <--> DNS : Client reaches out to DNS to resolve website hostname, DNS responds with IP address.
-1. Client <--> Web Server : Client creates TCP then TLS handshakes with web server.
-1. Client <--> Web Server : Client sends request payload, waits for server to execute request, and receives first response packet.
-1. Client <-- Web Server : Client receives the rest of the response payload bytes from the web server.
-1. Client : Client now has full response payload and has to render contents into browser and load the DOM.
+1. Client <--> DNS: Client reaches out to DNS to resolve website hostname, DNS responds with IP address.
+1. Client <--> Web Server: Client creates TCP then TLS handshakes with web server.
+1. Client <--> Web Server: Client sends request payload, waits for server to execute request, and receives first response packet.
+1. Client <-- Web Server: Client receives the rest of the response payload bytes from the web server.
+1. Client: Client now has full response payload and has to render contents into browser and load the DOM.
* `browserTimings/networkDuration` = #1 + #2
* `browserTimings/sendDuration` = #3
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Java auto-instrumentation is enabled through configuration changes; no code chan
- Java application using Java 8+
- Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
-- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-workspace-based-resource)
+- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
### Enable Azure Monitor Application Insights
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Most configuration fields are named so that they can default to false. All field
| samplingPercentage | Percentage of events that will be sent. Default is 100, meaning all events are sent. Set this option if you want to preserve your data cap for large-scale applications. | numeric<br/>100 |
| autoTrackPageVisitTime | If true, on a pageview, the _previous_ instrumented page's view time is tracked and sent as telemetry and a new timer is started for the current pageview. It's sent as a custom metric named `PageVisitTime` in `milliseconds` and is calculated via the Date [now()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/now) function (if available) and falls back to (new Date()).[getTime()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/getTime) if now() is unavailable (Internet Explorer 8 or less). Default is false. | boolean<br/>false |
| disableAjaxTracking | If true, Ajax calls aren't autocollected. | boolean<br/> false |
-| disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>true |
+| disableFetchTracking | If true, Fetch requests aren't autocollected.|boolean<br/>false |
| overridePageViewDuration | If true, default behavior of trackPageView is changed to record end of page view duration interval when trackPageView is called. If false and no custom duration is provided to trackPageView, the page view performance is calculated by using the navigation timing API. | boolean<br/> |
| maxAjaxCallsPerView | Default 500 controls how many Ajax calls will be monitored per page view. Set to -1 to monitor all (unlimited) Ajax calls on the page. | numeric<br/> 500 |
| disableDataLossAnalysis | If false, internal telemetry sender buffers will be checked at startup for items not yet sent. | boolean<br/> true |
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
This guide walks through migrating from [instrumentation keys](separate-resource
1. Hover over the connection string and select the "Copy to clipboard" icon.
-1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#how-to-set-a-connection-string).
+1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string).
> [!IMPORTANT] > Using both a connection string and instrumentation key isn't recommended. Whichever was set last takes precedence.
Connection strings provide a single configuration setting and eliminate the need
- **Security:** Connection strings allow authenticated telemetry ingestion by using [Azure AD authentication for Application Insights](azure-ad-authentication.md).
-- **Customized endpoints (sovereign or hybrid cloud environments):** Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([see examples](sdk-connection-string.md#how-to-set-a-connection-string))
+- **Customized endpoints (sovereign or hybrid cloud environments):** Endpoint settings allow sending data to a specific [Azure Government region](custom-endpoints.md#regions-that-require-endpoint-modification). ([see examples](sdk-connection-string.md#set-a-connection-string))
- **Privacy (regional endpoints):** Connection strings ease privacy concerns by sending data to regional endpoints, ensuring data doesn't leave a geographic region.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
### Prerequisites

- Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
-- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-workspace-based-resource)
+- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
### [.NET](#tab/net)
As part of using Application Insights instrumentation, we collect and send diagn
## Set the Cloud Role Name and the Cloud Role Instance
-You might set the [Cloud Role Name](app-map.md#understanding-cloud-role-name-within-the-context-of-the-application-map) and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. This step updates Cloud Role Name and Cloud Role Instance from their default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value.
+You might set the [Cloud Role Name](app-map.md#understand-the-cloud-role-name-within-the-context-of-an-application-map) and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. This step updates Cloud Role Name and Cloud Role Instance from their default values to something that makes sense to your team. They'll appear on the Application Map as the name underneath a node. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value.
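For example, a rough .NET sketch of supplying these attributes through a `ResourceBuilder`. The service names are illustrative, and how you attach the resource builder depends on which OpenTelemetry distro or exporter you use:

```csharp
using OpenTelemetry.Resources;

// service.namespace + service.name map to the Cloud Role Name,
// and service.instance.id maps to the Cloud Role Instance.
var resourceBuilder = ResourceBuilder.CreateDefault()
    .AddService(
        serviceName: "orders-api",          // illustrative
        serviceNamespace: "contoso",        // illustrative
        serviceInstanceId: "instance-01");  // illustrative
```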
### [.NET](#tab/net)
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
Title: Connection strings in Azure Application Insights | Microsoft Docs
-description: How to use connection strings.
+ Title: Connection strings in Application Insights | Microsoft Docs
+description: This article shows how to use connection strings.
Last updated 04/13/2022
# Connection strings
+This article shows how to use connection strings.
+
## Overview

Connection strings define where to send telemetry data.
-The key value pairs provide an easy way for users to define a prefix suffix combination for each Application Insights (AI) service/ product.
+Key-value pairs provide an easy way for users to define a prefix suffix combination for each Application Insights service or product.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-> [!IMPORTANT]
-> Do not use a connection string and instrumentation key simultaneously. Whichever was set last will take precedence.
+Don't use a connection string and instrumentation key simultaneously. Whichever was set last will take precedence.
-## Scenario overview
+## Scenario overview
Scenarios most affected by this change:

-- Firewall exceptions or proxy redirects
+- Firewall exceptions or proxy redirects:
+
+ In cases where monitoring for intranet web server is required, our earlier solution asked you to add individual service endpoints to your configuration. For more information, see the [Azure Monitor FAQ](../faq.yml#can-i-monitor-an-intranet-web-server-). Connection strings offer a better alternative by reducing this effort to a single setting. A simple prefix, suffix amendment, allows automatic population and redirection of all endpoints to the right services.
- In cases where monitoring for intranet web server is required, our earlier solution asked customers to add individual service endpoints to your configuration. For more information, see [here](../faq.yml#can-i-monitor-an-intranet-web-server-).
- Connection strings offer a better alternative by reducing this effort to a single setting. A simple prefix, suffix amendment allows automatic population and redirection of all endpoints to the right services.
+- Sovereign or hybrid cloud environments:
-- Sovereign or Hybrid cloud environments
+ Users can send data to a defined [Azure Government region](../../azure-government/compare-azure-government-global-azure.md#application-insights). By using connection strings, you can define endpoint settings for your intranet servers or hybrid cloud settings.
- Users can send data to a defined [Azure Government Region](../../azure-government/compare-azure-government-global-azure.md#application-insights).
- Connection strings allow you to define endpoint settings for your intranet servers or hybrid cloud settings.
+## Get started
-## Getting started
+Review the following sections to get started.
-### Finding my connection string?
+### Find your connection string
-Your connection string is displayed on the Overview section of your Application Insights resource.
+Your connection string appears in the **Overview** section of your Application Insights resource.
### Schema
+Schema elements are explained in the following sections.
+ #### Max length
-The connection has a maximum supported length of 4096 characters.
+The connection string has a maximum supported length of 4,096 characters.
#### Key-value pairs
-Connection string consists of a list of settings represented as key-value pairs separated by semicolon:
+A connection string consists of a list of settings represented as key-value pairs separated by a semicolon:
`key1=value1;key2=value2;key3=value3` #### Syntax -- `InstrumentationKey` (ex: 00000000-0000-0000-0000-000000000000)
- The connection string is a **required** field.
-- `Authorization` (ex: ikey) (This setting is optional because today we only support ikey authorization.)-- `EndpointSuffix` (ex: applicationinsights.azure.cn)
- Setting the endpoint suffix will instruct the SDK which Azure cloud to connect to. The SDK will assemble the rest of the endpoint for individual services.
-- Explicit Endpoints.
- Any service can be explicitly overridden in the connection string.
- - `IngestionEndpoint` (ex: `https://dc.applicationinsights.azure.com`)
- - `LiveEndpoint` (ex: `https://live.applicationinsights.azure.com`)
- - `ProfilerEndpoint` (ex: `https://profiler.monitor.azure.com`)
- - `SnapshotEndpoint` (ex: `https://snapshot.monitor.azure.com`)
+- `InstrumentationKey` (for example, 00000000-0000-0000-0000-000000000000).
+ The connection string is a *required* field.
+- `Authorization` (for example, ikey). This setting is optional because today we only support ikey authorization.
+- `EndpointSuffix` (for example, applicationinsights.azure.cn).
+ Setting the endpoint suffix tells the SDK which Azure cloud to connect to. The SDK will assemble the rest of the endpoint for individual services.
+- Explicit endpoints.
+ Any service can be explicitly overridden in the connection string:
+ - `IngestionEndpoint` (for example, `https://dc.applicationinsights.azure.com`)
+ - `LiveEndpoint` (for example, `https://live.applicationinsights.azure.com`)
+ - `ProfilerEndpoint` (for example, `https://profiler.monitor.azure.com`)
+ - `SnapshotEndpoint` (for example, `https://snapshot.monitor.azure.com`)
#### Endpoint schema `<prefix>.<suffix>`-- Prefix: Defines a service.
+- Prefix: Defines a service.
- Suffix: Defines the common domain name. ##### Valid suffixes
-Here's a list of valid suffixes
- applicationinsights.azure.cn - applicationinsights.us -
-See also: [Regions that require endpoint modification](./custom-endpoints.md#regions-that-require-endpoint-modification)
-
+For more information, see [Regions that require endpoint modification](./custom-endpoints.md#regions-that-require-endpoint-modification).
##### Valid prefixes
See also: [Regions that require endpoint modification](./custom-endpoints.md#reg
- [Profiler](./profiler-overview.md): `profiler` - [Snapshot](./snapshot-debugger.md): `snapshot`
-#### Is Connection string a secret?
+#### Is the connection string a secret?
-Connection string contains iKey which is a unique identifier used by the ingestion service to associate telemetry to a specific Application Insights resource. It is not to be considered a security token or key. The ingestion endpoint provides Azure AD-based authenticated telemetry ingestion options if you want to protect your AI resource from misuse.
+The connection string contains an ikey, which is a unique identifier used by the ingestion service to associate telemetry with a specific Application Insights resource. It's not considered a security token or key. If you want to protect your Application Insights resource from misuse, the ingestion endpoint provides authenticated telemetry ingestion options based on Azure Active Directory (Azure AD).
> [!NOTE]
-> Application Insights JavaScript SDK requires the connection string to be passed in during initialization/configuration. This is viewable in plain text in client browsers. There is no easy way to use the Azure AD-based authentication for browser telemetry. It is recommended that customers consider creating a separate Application Insights resource for browser telemetry if they need to secure the service telemetry.
+> The Application Insights JavaScript SDK requires the connection string to be passed in during initialization and configuration. It's viewable in plain text in client browsers. There's no easy way to use the Azure AD-based authentication for browser telemetry. We recommend that you consider creating a separate Application Insights resource for browser telemetry if you need to secure the service telemetry.
## Connection string examples
+Here are some examples of connection strings.
-### Connection string with endpoint suffix
+### Connection string with an endpoint suffix
`InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;`
-In this example, the connection string specifies the endpoint suffix and the SDK will construct service endpoints.
+In this example, the connection string specifies the endpoint suffix and the SDK will construct service endpoints:
-- Authorization scheme defaults to "ikey" -- Instrumentation Key: 00000000-0000-0000-0000-000000000000-- The regional service URIs are based on provided endpoint suffix:
+- Authorization scheme defaults to "ikey"
+- Instrumentation key: 00000000-0000-0000-0000-000000000000
+- The regional service URIs are based on the provided endpoint suffix:
- Ingestion: `https://dc.ai.contoso.com` - Live metrics: `https://live.ai.contoso.com` - Profiler: `https://profiler.ai.contoso.com`
- - Debugger: `https://snapshot.ai.contoso.com`
+ - Debugger: `https://snapshot.ai.contoso.com`
--
-### Connection string with explicit endpoint overrides
+### Connection string with explicit endpoint overrides
`InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://custom.com:111/;LiveEndpoint=https://custom.com:222/;ProfilerEndpoint=https://custom.com:333/;SnapshotEndpoint=https://custom.com:444/;`
-In this example, the connection string specifies explicit overrides for every service. The SDK will use the exact endpoints provided without modification.
+In this example, the connection string specifies explicit overrides for every service. The SDK will use the exact endpoints provided without modification:
-- Authorization scheme defaults to "ikey" -- Instrumentation Key: 00000000-0000-0000-0000-000000000000-- The regional service URIs are based on the explicit override values:
+- Authorization scheme defaults to "ikey"
+- Instrumentation key: 00000000-0000-0000-0000-000000000000
+- The regional service URIs are based on the explicit override values:
- Ingestion: `https://custom.com:111/` - Live metrics: `https://custom.com:222/` - Profiler: `https://custom.com:333/`
- - Debugger: `https://custom.com:444/`
+ - Debugger: `https://custom.com:444/`
-### Connection string with explicit region
+### Connection string with an explicit region
`InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://southcentralus.in.applicationinsights.azure.com/`
-In this example, the connection string specifies the South Central US region.
+In this example, the connection string specifies the South Central US region:
-- Authorization scheme defaults to "ikey" -- Instrumentation Key: 00000000-0000-0000-0000-000000000000-- The regional service URIs are based on the explicit override values:
+- Authorization scheme defaults to "ikey"
+- Instrumentation key: 00000000-0000-0000-0000-000000000000
+- The regional service URIs are based on the explicit override values:
- Ingestion: `https://southcentralus.in.applicationinsights.azure.com/`
-Run the following command in the [Azure Command-Line Interface (CLI)](/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
+Run the following command in the [Azure CLI](/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions:
`az account list-locations -o table`
-## How to set a connection string
+## Set a connection string
-Connection Strings are supported in the following SDK versions:
+Connection strings are supported in the following SDK versions:
- .NET v2.12.0 - Java v2.5.1 and Java 3.0 - JavaScript v2.3.0 - NodeJS v1.5.0 - Python v1.0.0
-A connection string can be set by either in code, environment variable, or configuration file.
--
+You can set a connection string in code or by using an environment variable or a configuration file.
### Environment variable -- Connection String: `APPLICATIONINSIGHTS_CONNECTION_STRING`
+Connection string: `APPLICATIONINSIGHTS_CONNECTION_STRING`
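
As a minimal sketch (assuming the .NET SDK; whether a given SDK also picks the variable up automatically depends on the SDK and version, so the variable is read explicitly here only for illustration), the environment variable can be consumed in code like this:

```csharp
using System;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

// Read the connection string from the environment variable and assign it explicitly.
var connectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");

var configuration = new TelemetryConfiguration
{
    ConnectionString = connectionString
};

var telemetryClient = new TelemetryClient(configuration);
```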
### Code samples # [.NET/.NetCore](#tab/net)
-Set the property [TelemetryConfiguration.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/add45ceed35a817dc7202ec07d3df1672d1f610d/BASE/src/Microsoft.ApplicationInsights/Extensibility/TelemetryConfiguration.cs#L271-L274) or [ApplicationInsightsServiceOptions.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/81288f26921df1e8e713d31e7e9c2187ac9e6590/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs#L66-L69)
+Set the property [TelemetryConfiguration.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/add45ceed35a817dc7202ec07d3df1672d1f610d/BASE/src/Microsoft.ApplicationInsights/Extensibility/TelemetryConfiguration.cs#L271-L274) or [ApplicationInsightsServiceOptions.ConnectionString](https://github.com/microsoft/ApplicationInsights-dotnet/blob/81288f26921df1e8e713d31e7e9c2187ac9e6590/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs#L66-L69).
-.NET Explicitly Set:
+.NET explicitly set:
```csharp var configuration = new TelemetryConfiguration {
var configuration = new TelemetryConfiguration
}; ```
-.NET Config File:
+.NET config file:
```xml <?xml version="1.0" encoding="utf-8"?>
var configuration = new TelemetryConfiguration
</ApplicationInsights> ```
-.NET Core Explicitly Set:
+.NET Core explicitly set:
```csharp public void ConfigureServices(IServiceCollection services) {
public void ConfigureServices(IServiceCollection services)
} ```
-.NET Core config.json:
+.NET Core config.json:
```json {
public void ConfigureServices(IServiceCollection services)
} ``` - # [Java](#tab/java) You can set the connection string in the `applicationinsights.json` configuration file:
You can set the connection string in the `applicationinsights.json` configuratio
} ```
-For more information, [connection string configuration](./java-standalone-config.md#connection-string).
+For more information, see [Connection string configuration](./java-standalone-config.md#connection-string).
# [JavaScript](#tab/js)
-Important: JavaScript doesn't support the use of Environment Variables.
+JavaScript doesn't support the use of environment variables.
Using the snippet:
-The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+The current snippet is version 5 and is shown here. The version is encoded in the snippet as sv:"#". The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```html <script type="text/javascript">
src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout, // useXhr: 1, // Use XHR instead of fetch to report failures (if available), crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
-// onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called)
+// onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- as they won't get called)
cfg: { // Application Insights Configuration connectionString:"InstrumentationKey=00000000-0000-0000-0000-000000000000;" }});
cfg: { // Application Insights Configuration
``` > [!NOTE]
-> For readability and to reduce possible JavaScript errors, all of the possible configuration options are listed on a new line in snippet code above, if you don't want to change the value of a commented line it can be removed.
+> For readability and to reduce possible JavaScript errors, all the possible configuration options are listed on a new line in the preceding snippet code. If you don't want to change the value of a commented line, it can be removed.
-Manual Setup:
+Manual setup:
```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web'
appInsights.start();
# [Python](#tab/python)
-We recommend users set the environment variable.
+We recommend that users set the environment variable.
To explicitly set the connection string:
tracer = Tracer(exporter=AzureExporter(connection_string='InstrumentationKey=000
Get started at runtime with:
-* [Azure VM and Azure virtual machine scale set IIS-hosted apps](./azure-vm-vmss-apps.md)
+* [Azure VM and Azure Virtual Machine Scale Sets IIS-hosted apps](./azure-vm-vmss-apps.md)
* [IIS server](./status-monitor-v2-overview.md)
-* [Azure Web Apps](./azure-web-apps.md)
+* [Web Apps feature of Azure App Service](./azure-web-apps.md)
Get started at development time with:
azure-monitor Status Monitor V2 Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-get-started.md
If you don't have an Azure subscription, create a [free account](https://azure.m
### Install prerequisites -- To enable monitoring you will require a connection string. A connection string is displayed on the Overview blade of your Application Insights resource. For more information, see page [Connection Strings](./sdk-connection-string.md?tabs=net#finding-my-connection-string).
+- To enable monitoring, you need a connection string. A connection string is displayed on the Overview blade of your Application Insights resource. For more information, see [Connection strings](./sdk-connection-string.md?tabs=net#find-your-connection-string).
> [!NOTE] > As of April 2020, PowerShell Gallery has deprecated TLS 1.1 and 1.0.
azure-monitor Usage Heart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-heart.md
These dimensions are measured independently, but they interact with each other a
## Get started ### Prerequisites - Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
+ - Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
- Instrument the following attributes to calculate HEART metrics: | Source | Attribute | Description |
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
# Usage analysis with Application Insights
-Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data driven decisions about your next development cycles.
-
+Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? [Application Insights](./app-insights-overview.md) helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data-driven decisions about your next development cycles.
## Send telemetry from your app
-The best experience is obtained by installing Application Insights both in your app server code, and in your web pages. The client and server components of your app send telemetry back to the Azure portal for analysis.
+The best experience is obtained by installing Application Insights both in your app server code and in your webpages. The client and server components of your app send telemetry back to the Azure portal for analysis.
1. **Server code:** Install the appropriate module for your [ASP.NET](./asp-net.md), [Azure](./app-insights-overview.md), [Java](./java-in-process-agent.md), [Node.js](./nodejs.md), or [other](./platforms.md) app.
- * *Don't want to install server code? Just [create an Azure Application Insights resource](./create-new-resource.md).*
+ * If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md).
-2. **Web page code:** Add the following script to your web page before the closing ``</head>``. Replace instrumentation key with the appropriate value for your Application Insights resource:
+1. **Webpage code:** Add the following script to your webpage before the closing ``</head>``. Replace the instrumentation key with the appropriate value for your Application Insights resource.
- The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+ The current snippet is version 5 and is listed here. The version is encoded in the snippet as sv:"#". The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```html <script type="text/javascript">
The best experience is obtained by installing Application Insights both in your
// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout, // useXhr: 1, // Use XHR instead of fetch to report failures (if available), crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
- // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called)
+ // onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- as they won't get called)
cfg: { // Application Insights Configuration instrumentationKey:"INSTRUMENTATION_KEY" }}); </script> ```
+ [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
+ To learn more advanced configurations for monitoring websites, check out the [JavaScript SDK reference article](./javascript.md).
-3. **Mobile app code:** Use the App Center SDK to collect events from your app, then send copies of these events to Application Insights for analysis by [following this guide](../app/mobile-center-quickstart.md).
+1. **Mobile app code:** Use the App Center SDK to collect events from your app. Then send copies of these events to Application Insights for analysis by [following this guide](../app/mobile-center-quickstart.md).
-4. **Get telemetry:** Run your project in debug mode for a few minutes, and then look for results in the Overview pane in Application Insights.
+1. **Get telemetry:** Run your project in debug mode for a few minutes. Then look for results in the **Overview** pane in Application Insights.
Publish your app to monitor your app's performance and find out what your users are doing with your app. ## Explore usage demographics and statistics
-Find out when people use your app, what pages they're most interested in, where your users are located, what browsers and operating systems they use.
-The Users and Sessions reports filter your data by pages or custom events, and segment them by properties such as location, environment, and page. You can also add your own filters.
+Find out when people use your app and what pages they're most interested in. You can also find out where your users are located and what browsers and operating systems they use.
+
+The **Users** and **Sessions** reports filter your data by pages or custom events. The reports segment the data by properties such as location, environment, and page. You can also add your own filters.
-Insights on the right point out interesting patterns in the set of data.
+Insights on the right point out interesting patterns in the set of data.
-* The **Users** report counts the numbers of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, then they will be counted more than once.
-* The **Sessions** report counts the number of user sessions that access your site. A session is a period of activity by a user, terminated by a period of inactivity of more than half an hour.
+* The **Users** report counts the number of unique users that access your pages within your chosen time periods. For web apps, users are counted by using cookies. If someone accesses your site with different browsers or client machines, or clears their cookies, they'll be counted more than once.
+* The **Sessions** report counts the number of user sessions that access your site. A session is a period of activity by a user. It's terminated by a period of inactivity of more than half an hour.
-[More about the Users, Sessions, and Events tools](usage-segmentation.md)
+For more information about the Users, Sessions, and Events tools, see [Users, sessions, and events analysis in Application Insights](usage-segmentation.md).
-## Retention - how many users come back?
+## Retention: How many users come back?
-Retention helps you understand how often your users return to use their app, based on cohorts of users that performed some business action during a certain time bucket. 
+Retention helps you understand how often your users return to use your app, based on cohorts of users that performed some business action during a certain time bucket. You can:
-- Understand what specific features cause users to come back more than others -- Form hypotheses based on real user data -- Determine whether retention is a problem in your product
+- Understand what specific features cause users to come back more than others.
+- Form hypotheses based on real user data.
+- Determine whether retention is a problem in your product.
-The retention controls on top allow you to define specific events and time range to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a given time period. This level of detail allows you to understand what your users are doing and what might affect returning users on a more detailed granularity.
+You can use the retention controls on top to define specific events and time ranges to calculate retention. The graph in the middle gives a visual representation of the overall retention percentage by the time range specified. The graph on the bottom represents individual retention in a specific time period. This level of detail allows you to understand what your users are doing and what might affect returning users at a finer level of granularity.
-[More about the Retention workbook](usage-retention.md)
+For more information about the Retention workbook, see [User retention analysis for web applications with Application Insights](usage-retention.md).
## Custom business events
-To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events. These events can track anything from detailed user actions such as clicking specific buttons, to more significant business events such as making a purchase or winning a game.
+To get a clear understanding of what users do with your app, it's useful to insert lines of code to log custom events. These events can track anything from detailed user actions, such as selecting specific buttons, to more significant business events, such as making a purchase or winning a game.
-You can also use the [Click Analytics Auto-collection Plugin](javascript-click-analytics-plugin.md) to collect custom events.
+You can also use the [Click Analytics Auto-collection plug-in](javascript-click-analytics-plugin.md) to collect custom events.
-Although in some cases, page views can represent useful events, it isn't true in general. A user can open a product page without buying the product.
+In some cases, page views can represent useful events, but that isn't always the case. A user can open a product page without buying the product.
-With specific business events, you can chart your users' progress through your site. Find out their preferences for different options, and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
+With specific business events, you can chart your users' progress through your site. You can find out their preferences for different options and where they drop out or have difficulties. With this knowledge, you can make informed decisions about the priorities in your development backlog.
Events can be logged from the client side of the app:
Events can be logged from the client side of the app:
appInsights.trackEvent({name: "incrementCount"}); ```
-Or from the server side:
+Or events can be logged from the server side:
```csharp var tc = new Microsoft.ApplicationInsights.TelemetryClient();
Or from the server side:
tc.TrackEvent("CompletedPurchase"); ```
-You can attach property values to these events, so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
+You can attach property values to these events so that you can filter or split the events when you inspect them in the portal. A standard set of properties is also attached to each event, such as anonymous user ID, which allows you to trace the sequence of activities of an individual user.
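
As an illustrative sketch (the event name and property names below are hypothetical, not taken from this article), properties can be passed to `TrackEvent` as a dictionary:

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

var tc = new TelemetryClient();

// Hypothetical property names; each entry becomes a custom dimension
// that you can filter or split on in the portal.
var properties = new Dictionary<string, string>
{
    { "game", "Chess" },
    { "difficulty", "Hard" }
};

tc.TrackEvent("WinGame", properties);
```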
Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and [properties](./api-custom-events-metrics.md#properties).
Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and
In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties. ## Design the telemetry with the app
-When you're designing each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
+When you design each feature of your app, consider how you're going to measure its success with your users. Decide what business events you need to record, and code the tracking calls for those events into your app from the start.
-## A | B Testing
-If you don't know which variant of a feature will be more successful, release both of them, making each accessible to different users. Measure the success of each, and then move to a unified version.
+## A | B testing
-For this technique, you attach distinct property values to all the telemetry that is sent by each version of your app. You can do that by defining properties in the active TelemetryContext. These default properties are added to every telemetry message that the application sends - not just your custom messages, but the standard telemetry as well.
+If you don't know which variant of a feature will be more successful, release both and make each variant accessible to different users. Measure the success of each variant, and then move to a unified version.
-In the Application Insights portal, filter and split your data on the property values, so as to compare the different versions.
+For this technique, you attach distinct property values to all the telemetry that's sent by each version of your app. You can do that by defining properties in the active TelemetryContext. These default properties are added to every telemetry message that the application sends. That means the properties are added to your custom messages and the standard telemetry.
-To do this, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer):
+In the Application Insights portal, filter and split your data on the property values so that you can compare the different versions.
+
+To do this step, [set up a telemetry initializer](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer):
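
Before the platform-specific registration shown next, here's a rough, generic sketch of such an initializer (the class name, property name, and version value are placeholders, not the ones used in the full article):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Placeholder initializer: stamps every telemetry item with an "AppVersion"
// property so the portal can filter and split data by release variant.
public class VersionTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        telemetry.Context.GlobalProperties["AppVersion"] = "2.1-experimental";
    }
}
```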
**ASP.NET apps**
To do this, [set up a telemetry initializer](./api-filtering-sampling.md#addmodi
} ```
-In the web app initializer such as Global.asax.cs:
+In the web app initializer, such as Global.asax.cs:
```csharp
In the web app initializer such as Global.asax.cs:
**ASP.NET Core apps** > [!NOTE]
-> Adding initializer using `ApplicationInsights.config` or using `TelemetryConfiguration.Active` is not valid for ASP.NET Core applications.
+> Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications.
-For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new `TelemetryInitializer` is done by adding it to the Dependency Injection container, as shown below. This is done in `ConfigureServices` method of your `Startup.cs` class.
+For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown here. This step is done in the `ConfigureServices` method of your `Startup.cs` class.
```csharp using Microsoft.ApplicationInsights.Extensibility;
public void ConfigureServices(IServiceCollection services)
``` ## Next steps
- - [Users, Sessions, Events](usage-segmentation.md)
+
+ - [Users, sessions, and events](usage-segmentation.md)
- [Funnels](usage-funnels.md) - [Retention](usage-retention.md) - [User Flows](usage-flows.md)
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
The following table describes the settings you can configure to control data col
| `[log_collection_settings.stdout] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stdout logs will not be collected. This setting is effective only if<br> `log_collection_settings.stdout.enabled`<br> is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. | | `[log_collection_settings.stderr] enabled =` | Boolean | true or false | This controls if stderr container log collection is enabled.<br> When set to `true` and no namespaces are excluded for stdout log collection<br> (`log_collection_settings.stderr.exclude_namespaces` setting), stderr logs will be collected from all containers across all pods/nodes in the cluster.<br> If not specified in ConfigMaps, the default value is<br> `enabled = true`. | | `[log_collection_settings.stderr] exclude_namespaces =` | String | Comma-separated array | Array of Kubernetes namespaces for which stderr logs will not be collected.<br> This setting is effective only if<br> `log_collection_settings.stdout.enabled` is set to `true`.<br> If not specified in ConfigMap, the default value is<br> `exclude_namespaces = ["kube-system"]`. |
-| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally.). |
+| `[log_collection_settings.env_var] enabled =` | Boolean | true or false | This setting controls environment variable collection<br> across all pods/nodes in the cluster<br> and defaults to `enabled = true` when not specified<br> in ConfigMaps.<br> If collection of environment variables is globally enabled, you can disable it for a specific container<br> by setting the environment variable<br> `AZMON_COLLECT_ENV` to **False** either with a Dockerfile setting or in the [configuration file for the Pod](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) under the **env:** section.<br> If collection of environment variables is globally disabled, then you cannot enable collection for a specific container (that is, the only override that can be applied at the container level is to disable collection when it's already enabled globally). It's strongly recommended to secure Log Analytics workspace access when the default `[log_collection_settings.env_var] enabled = true` is used. If sensitive data is stored in environment variables, securing the Log Analytics workspace is critical. |
| `[log_collection_settings.enrich_container_logs] enabled =` | Boolean | true or false | This setting controls container log enrichment to populate the Name and Image property values<br> for every log record written to the ContainerLog table for all container logs in the cluster.<br> It defaults to `enabled = false` when not specified in ConfigMap. | | `[log_collection_settings.collect_all_kube_events] enabled =` | Boolean | true or false | This setting allows the collection of Kube events of all types.<br> By default the Kube events with type *Normal* are not collected. When this setting is set to `true`, the *Normal* events are no longer filtered and all events are collected.<br> It defaults to `enabled = false` when not specified in the ConfigMap |
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||
-|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Total number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
-|ByteCount|Yes|Byte Count|Bytes|Total|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
+|AllocatedSnatPorts|No|Allocated SNAT Ports|Count|Average|Average number of SNAT ports allocated within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
+|ByteCount|Yes|Byte Count|Bytes|Sum|Total number of Bytes transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
|DipAvailability|Yes|Health Probe Status|Count|Average|Average Load Balancer health probe status per time duration|ProtocolType, BackendPort, FrontendIPAddress, FrontendPort, BackendIPAddress|
-|PacketCount|Yes|Packet Count|Count|Total|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|SnatConnectionCount|Yes|SNAT Connection Count|Count|Total|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState|
-|SYNCount|Yes|SYN Count|Count|Total|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
-|UsedSnatPorts|No|Used SNAT Ports|Count|Max|Max number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
+|PacketCount|Yes|Packet Count|Count|Sum|Total number of Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
+|SnatConnectionCount|Yes|SNAT Connection Count|Count|Sum|Total number of new SNAT connections created within time period|FrontendIPAddress, BackendIPAddress, ConnectionState|
+|SYNCount|Yes|SYN Count|Count|Sum|Total number of SYN Packets transmitted within time period|FrontendIPAddress, FrontendPort, Direction|
+|UsedSnatPorts|No|Used SNAT Ports|Count|Average|Average number of SNAT ports used within time period|FrontendIPAddress, BackendIPAddress, ProtocolType, IsAwaitingRemoval|
|VipAvailability|Yes|Data Path Availability|Count|Average|Average Load Balancer data path availability per time duration|FrontendIPAddress, FrontendPort|
azure-monitor Resource Manager Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-manager-diagnostic-settings.md
param eventHubAuthorizationRuleId string
@description('The name of the event hub.') param eventHubName string
-resource vault 'Microsoft.KeyVault/managedHSMs@2021-11-01-preview' existing = {
+resource vault 'Microsoft.KeyVault/vaults@2021-11-01-preview' existing = {
name: vaultName }
resource setting 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
{ "type": "Microsoft.Insights/diagnosticSettings", "apiVersion": "2021-05-01-preview",
- "scope": "[format('Microsoft.KeyVault/managedHSMs/{0}', parameters('vaultName'))]",
+ "scope": "[format('Microsoft.KeyVault/vaults/{0}', parameters('vaultName'))]",
"name": "[parameters('settingName')]", "properties": { "workspaceId": "[parameters('workspaceId')]",
azure-netapp-files Azure Netapp Files Mount Unmount Volumes For Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md
Previously updated : 08/18/2022 Last updated : 09/07/2022 # Mount NFS volumes for Linux or Windows VMs
You can mount an NFS volume for Windows or Linux virtual machines (VMs).
## Mount NFS volumes on Windows clients
-Mounting NFSv4.1 volumes on Windows clients is supported. For more information, see [Network File System overview](/windows-server/storage/nfs/nfs-overview).
+Mounting NFSv4.1 volumes on Windows clients is not supported. For more information, see [Network File System overview](/windows-server/storage/nfs/nfs-overview).
If you want to mount NFSv3 volumes on a Windows client using NFS:
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 08/24/2022 Last updated : 09/7/2022 # Resource limits for Azure NetApp Files
The service dynamically adjusts the `maxfiles` limit for a volume based on its p
| > 3 TiB but <= 4 TiB | 80 million | | > 4 TiB | 100 million |
-If you have allocated at least 4 TiB of quota for a volume, you can initiate a [support request](#request-limit-increase) to increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
+> [!IMPORTANT]
+> To increase the `maxfiles` limit for a volume with a quota of at least 4 TiB, you must initiate [a support request](#request-limit-increase).
+
+For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
You can increase the `maxfiles` limit to 500 million if your volume quota is at least 20 TiB.
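
As an informal illustration of that sizing rule (a sketch only, not an official calculator; the 300-million target below is a made-up example), the required quota grows in 4-TiB steps per additional 100 million files:

```csharp
using System;

// Rough sketch of the maxfiles sizing rule described above:
// 4 TiB of quota corresponds to the 100 million file limit, and every additional
// 100 million files (or fraction thereof) requires another 4 TiB of quota.
long targetMaxfilesMillions = 300;  // hypothetical target: 300 million files
long extraBlocks = (long)Math.Ceiling((targetMaxfilesMillions - 100) / 100.0);
long requiredQuotaTiB = 4 + (extraBlocks * 4);

Console.WriteLine($"{targetMaxfilesMillions} million files requires about {requiredQuotaTiB} TiB of volume quota."); // 12 TiB
```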
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
na Previously updated : 08/29/2022 Last updated : 09/07/2022
Azure NetApp Files backup is supported for the following regions:
* Japan East * North Europe * South Central US
+* Southeast Asia
* UK South * West Europe * West US
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 08/16/2022 Last updated : 09/07/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* Azure Key Vault to store Service Principal content * Azure Managed Disk as an alternate storage back end
-* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) is now in public preview. You can back up Azure NetApp Files datastores and VMs using Cloud Backup. This virtual appliance installs in the Azure VMware Solution cluster and provides policy based automated backup of VMs integrated with Azure NetApp Files snapshot technology for fast backups and restores of VMs, groups of VMs (organized in resource groups) or complete datastores.
- * [Active Directory connection enhancement: Reset Active Directory computer account password](create-active-directory-connections.md#reset-active-directory) (Preview) ## June 2022
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 06/01/2022 Last updated : 09/06/2022
This will enable the **Advanced filters** page, where you can create and manage
## Advanced filters
-On the **Advanced filters** page, you can create, modify, or delete subscription filters.
+After enabling the **Advanced filters** page, you can create, modify, or delete subscription filters.
:::image type="content" source="media/set-preferences/settings-advanced-filters.png" alt-text="Screenshot showing the Advanced filters screen.":::
Information about your custom settings is stored in Azure. You can export the fo
- User settings like favorite subscriptions or directories - Themes and other custom portal settings
-It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
+To export your portal settings, select **Export settings** from the top of the **My information** pane. This creates a *.json* file that contains your user settings data.
-To export your portal settings, select **Export settings** from the top of the settings **Overview** pane. This creates a *.json* file that contains your user settings data.
-
-Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
+Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file. However, you can use this file to review the settings you selected. It can be useful to have a backup of your selections if you choose to delete your settings and private dashboards.
### Restore default settings
-If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the settings **Overview** pane. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
+If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the **My information** pane. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
### Delete user settings and dashboards
Information about your custom settings is stored in Azure. You can delete the fo
- User settings like favorite subscriptions or directories - Themes and other custom portal settings
-It's a good idea to export and review your settings before you delete them. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
+It's a good idea to export and review your settings before you delete them, as described above. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
[!INCLUDE [GDPR-related guidance](../../includes/gdpr-intro-sentence.md)]
azure-vmware Ecosystem Os Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-os-vms.md
Last updated 04/11/2022
# Operating system support for Azure VMware Solution virtual machines
-Azure VMware Solution supports a wide range of operating systems to be used in the guest virtual machines. Being based on VMware vSphere, currently 6.7 version, all operating systems currently supported by vSphere can be used by any Azure VMware Solution customer for their workloads.
+Azure VMware Solution supports a wide range of operating systems to be used in the guest virtual machines. Because it's based on VMware vSphere, currently version 7.0, all operating systems currently supported by vSphere can be used by any Azure VMware Solution customer for their workloads.
-Check the list of operating systems and configurations supported in the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software), create a query for ESXi 6.7 Update 3 and select all operating systems and vendors.
+Check the list of operating systems and configurations supported in the [VMware Compatibility Guide](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software), create a query for ESXi 7.0 Update 3 and select all operating systems and vendors.
In addition to the operating systems supported by VMware for vSphere, we have worked with Red Hat, SUSE and Canonical to extend the support model currently in place for Azure Virtual Machines to the workloads running on Azure VMware Solution, given that it is a first-party Azure service. You can check the following vendor sites for more information about the benefits of running their operating system on Azure.
azure-vmware Enable Vmware Cds With Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md
To provide access to vNET based Azure resources, each tenant can have their own
As shown in the diagram above, organization 01 has two organization virtual datacenters: VDC1 and VDC2. The virtual datacenter of each organization has its own Azure vNETs connected with their respective organization VDC Edge gateway through IPSEC VPN. Providers provide public IP addresses to the organization VDC Edge gateway for IPSEC VPN configuration. An ORG VDC Edge gateway firewall blocks all traffic by default; specific allow rules need to be added on the organization Edge gateway firewall.
-Organization VDCs can be part of a single organization and still provide isolation between them. For example, JSVM1 hosted in organization VDC1 cannot ping Azure VM JSVM2 for tenant2.
+Organization VDCs can be part of a single organization and still provide isolation between them. For example, VM1 hosted in organization VDC1 cannot ping Azure VM JSVM2 for tenant2.
### Prerequisites - Organization VDC is configured with an Edge gateway and has Public IPs assigned to it to establish IPSEC VPN by provider. - Tenants have created a routed Organization VDC network in tenant's virtual datacenter.-- Test JSVM1 and JSVM2 are created in the Organization VDC1 and VDC2 respectively. Both VMs are connected to the routed orgVDC network in their respective VDCs.
+- Test VM1 and VM2 are created in the Organization VDC1 and VDC2 respectively. Both VMs are connected to the routed orgVDC network in their respective VDCs.
- Have a dedicated [Azure vNET](tutorial-configure-networking.md#create-a-vnet-manually) configured for each tenant. For this example, we created Tenant1-vNet and Tenant2-vNet for tenant1 and tenant2 respectively. - Create an [Azure Virtual network gateway](tutorial-configure-networking.md#create-a-virtual-network-gateway) for vNETs created earlier. - Deploy Azure VMs JSVM1 and JSVM2 for tenant1 and tenant2 for test purposes.
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Container images for Read are available.
| Container | Container Registry / Repository / Image Name | Tags | |--||--| | Read 3.2 GA | `mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30` | latest, 3.2, 3.2-model-2022-04-30 |
-| Read 2.0-preview | `mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview` |2.0.019300020-amd64-preview |
Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image. ### Docker pull for the Read OCR container
-# [Version 3.2 GA](#tab/version-3-2)
- ```bash docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 ```
-# [Version 2.0 preview](#tab/version-2)
-
-```bash
-docker pull mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview
-```
- [!INCLUDE [Tip for using docker list](../../../includes/cognitive-services-containers-docker-list-tip.md)]
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
[Examples](computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
-# [Version 3.2](#tab/version-3-2)
- ```bash docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \ mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 \
docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
mcr.microsoft.com/azure-cognitive-services/vision/read:3.2-model-2022-04-30 ```
-# [Version 2.0-preview](#tab/version-2)
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
-mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs the Read OCR container from the container image.
-* Allocates 8 CPU core and 16 gigabytes (GB) of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
-
-You can alternatively run the container using environment variables:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 16g --cpus 8 \
env Eula=accept \env Billing={ENDPOINT_URI} \env ApiKey={API_KEY} \
-mcr.microsoft.com/azure-cognitive-services/vision/read:2.0-preview
-```
-- More [examples](./computer-vision-resource-container-config.md#example-docker-run-commands) of the `docker run` command are available.
The container provides REST-based query prediction endpoint APIs.
Use the host, `http://localhost:5000`, for container APIs. You can view the Swagger path at: `http://localhost:5000/swagger/`. -- ### Asynchronous Read
-# [Version 3.2](#tab/version-3-2)
You can use the `POST /vision/v3.2/read/analyze` and `GET /vision/v3.2/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifier for the HTTP GET request. - From the swagger UI, select `Analyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image: ![tabs vs spaces](media/tabs-vs-spaces.png)
The `operation-location` is the fully qualified URL and is accessed via an HTTP
} ```
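
To make the POST-then-GET flow described above concrete, here's a rough client sketch (assumptions: a container listening on `http://localhost:5000`, a local image file named `tabs-vs-spaces.png`, and a simple string check on the status field; none of these details come from the article itself):

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ReadContainerClient
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Submit the image to the asynchronous Read endpoint.
        var imageBytes = await System.IO.File.ReadAllBytesAsync("tabs-vs-spaces.png");
        using var content = new ByteArrayContent(imageBytes);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        var post = await client.PostAsync("http://localhost:5000/vision/v3.2/read/analyze", content);
        post.EnsureSuccessStatusCode(); // expect HTTP 202

        // The operation-location header holds the URL to poll for the result.
        var operationLocation = post.Headers.GetValues("Operation-Location").First();

        // Poll until the operation is no longer queued or running.
        string json;
        do
        {
            await Task.Delay(1000);
            json = await client.GetStringAsync(operationLocation);
        } while (json.Contains("\"notStarted\"") || json.Contains("\"running\""));

        Console.WriteLine(json);
    }
}
```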
-# [Version 2.0-preview](#tab/version-2)
-
-You can use the `POST /vision/v2.0/read/core/asyncBatchAnalyze` and `GET /vision/v2.0/read/operations/{operationId}` operations in concert to asynchronously read an image, similar to how the Computer Vision service uses those corresponding REST operations. The asynchronous POST method will return an `operationId` that is used as the identifer to the HTTP GET request.
-
-From the swagger UI, select the `asyncBatchAnalyze` to expand it in the browser. Then select **Try it out** > **Choose file**. In this example, we'll use the following image:
-
-![tabs vs spaces](media/tabs-vs-spaces.png)
-
-When the asynchronous POST has run successfully, it returns an **HTTP 202** status code. As part of the response, there is an `operation-location` header that holds the result endpoint for the request.
-
-```http
- content-length: 0
- date: Fri, 13 Sep 2019 16:23:01 GMT
- operation-location: http://localhost:5000/vision/v2.0/read/operations/a527d445-8a74-4482-8cb3-c98a65ec7ef9
- server: Kestrel
-```
-
-The `operation-location` is the fully qualified URL and is accessed via an HTTP GET. Here is the JSON response from executing the `operation-location` URL from the preceding image:
-
-```json
-{
- "status": "Succeeded",
- "recognitionResults": [
- {
- "page": 1,
- "clockwiseOrientation": 2.42,
- "width": 502,
- "height": 252,
- "unit": "pixel",
- "lines": [
- {
- "boundingBox": [ 56, 39, 317, 50, 313, 134, 53, 123 ],
- "text": "Tabs VS",
- "words": [
- {
- "boundingBox": [ 90, 43, 243, 53, 243, 123, 94, 125 ],
- "text": "Tabs",
- "confidence": "Low"
- },
- {
- "boundingBox": [ 259, 55, 313, 62, 313, 122, 259, 123 ],
- "text": "VS"
- }
- ]
- },
- {
- "boundingBox": [ 221, 148, 417, 146, 417, 206, 227, 218 ],
- "text": "Spaces",
- "words": [
- {
- "boundingBox": [ 230, 148, 416, 141, 419, 211, 232, 218 ],
- "text": "Spaces"
- }
- ]
- }
- ]
- }
- ]
-}
-```
-- > [!IMPORTANT] > If you deploy multiple Read OCR containers behind a load balancer, for example, under Docker Compose or Kubernetes, you must have an external cache. Because the processing container and the GET request container might not be the same, an external cache stores the results and shares them across containers. For details about cache settings, see [Configure Computer Vision Docker containers](./computer-vision-resource-container-config.md).
The `operation-location` is the fully qualified URL and is accessed via an HTTP
You can use the following operation to synchronously read an image.
-# [Version 3.2](#tab/version-3-2)
- `POST /vision/v3.2/read/syncAnalyze`
-# [Version 2.0-preview](#tab/version-2)
-
-`POST /vision/v2.0/read/core/Analyze`
--- When the image is read in its entirety, then and only then does the API return a JSON response. The only exception to this is if an error occurs. When an error occurs the following JSON is returned: ```json
cognitive-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/audio-processing-overview.md
Previously updated : 12/27/2021 Last updated : 09/07/2022
The Microsoft Audio Stack is a set of enhancements optimized for speech processi
[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
-Different scenarios/use-cases require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
+Different scenarios and use-cases can require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
Previously updated : 03/01/2022 Last updated : 09/05/2022
By default, Text Analytics for health will use the latest available AI model on
| Supported Versions | latest version | |--|--|
+| `2022-03-01` | `2022-03-01` |
| `2021-05-15` | `2021-05-15` | + ### Text Analytics for health container The [Text Analytics for health container](use-containers.md) uses separate model versioning than the REST API and client libraries. Only one model version is available per container image. | Endpoint | Container Image Tag | Model version | ||--||
-| `/entities/health` | `3.0.016230002-onprem-amd64` (latest) | `2021-05-15` |
+| `/entities/health` | `3.0.59413252-onprem-amd64` (latest) | `2022-03-01` |
+| `/entities/health` | `3.0.59413252-latin-onprem-amd64` (latin) | `2022-08-15-preview` |
+| `/entities/health` | `3.0.59413252-semitic-onprem-amd64` (semitic) | `2022-08-15-preview` |
+| `/entities/health` | `3.0.016230002-onprem-amd64` | `2021-05-15` |
| `/entities/health` | `3.0.015370001-onprem-amd64` | `2021-03-01` | | `/entities/health` | `1.1.013530001-amd64-preview` | `2020-09-03` | | `/entities/health` | `1.1.013150001-amd64-preview` | `2020-07-24` |
The [Text Analytics for health container](use-containers.md) uses separate model
### Input languages
-Currently the Text Analytics for health only [supports](../language-support.md) the English language.
+Currently, the Text Analytics for health hosted API only [supports](../language-support.md) the English language. Additional languages are in preview when deploying the API in a container, as detailed in [Text Analytics for health language support](../language-support.md).
## Submitting data
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/use-containers.md
Previously updated : 04/20/2022 Last updated : 09/05/2022 ms.devlang: azurecli
CPU core and memory correspond to the `--cpus` and `--memory` settings, which ar
## Get the container image with `docker pull`
-Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry.
+Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pull/) command to download this container image from the Microsoft public container registry. You can find the featured tags on the [Docker Hub page](https://hub.docker.com/_/microsoft-azure-cognitive-services-textanalytics-healthcare).
```
-docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name>
``` - [!INCLUDE [Tip for using docker list](../../../../../includes/cognitive-services-containers-docker-list-tip.md)] ## Run the container with `docker run`
To run the container in your own environment after downloading the container ima
```bash docker run --rm -it -p 5000:5000 --cpus 6 --memory 12g \
-mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:latest \
+mcr.microsoft.com/azure-cognitive-services/textanalytics/healthcare:<tag-name> \
Eula=accept \ rai_terms=accept \ Billing={ENDPOINT_URI} \
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
Previously updated : 11/02/2021 Last updated : 9/5/2022 # Language support for Text Analytics for health
-> [!NOTE]
-> * The container uses different model versions than the API endpoints and SDK.
-> * Languages are added as new model versions are released. The current [model versions](how-to/call-api.md#specify-the-text-analytics-for-health-model) for Text Analytics for health are:
-> * API and SDK: `2021-05-15`
-> * Container: `2021-03-01`
- Use this article to learn which natural languages are supported by Text Analytics for health and its Docker container.
-## REST API and client library
+## Hosted API Service
-| Language | Language code | Starting with v3 model version: | Notes |
-|:|:-:|:--:|:--:|
-| English | `en` | API endpoint and client library: 2019-10-01 | |
+The hosted API service supports the English language, model version 2022-03-01.
## Docker container
-| Language | Language code | Starting with v3 model version: | Notes |
-|:|:-:|:--:|:--:|
-| English | `en` | Container: 2020-04-16 | |
+The Docker container supports the English language, model version 2022-03-01.
+Additional languages are also supported when using a Docker container to deploy the API: Spanish, French, German, Italian, Portuguese, and Hebrew. This functionality is currently in preview, model version 2022-08-15-preview.
+Full details for deploying the service in a container can be found [here](../text-analytics-for-health/how-to/use-containers.md).
+
+To download the new container images from the Microsoft public container registry, use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command, as follows:
+
+For English, Spanish, Italian, French, German and Portuguese:
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/latin
+```
+
+For Hebrew:
+
+```
+docker pull mcr.microsoft.com/azure-cognitive-services/textanalytics/semitic
+```
++
+When structuring the API request, the relevant language tags must be added for these languages:
+
+```
+English – "en"
+Spanish – "es"
+French – "fr"
+German – "de"
+Italian – "it"
+Portuguese – "pt"
+Hebrew – "he"
+```
+
+The following JSON is an example of a request body attached to the Language request's POST, for a Spanish document:
+
+```json
+
+{
+ "analysisInput": {
+ "documents": [
+ {
+ "text": "El médico prescrió 200 mg de ibuprofeno.",
+ "language": "es",
+ "id": "1"
+ }
+ ]
+ },
+ "tasks": [
+ {
+ "taskName": "analyze 1",
+ "kind": "Healthcare",
+ "parameters": {
+ "fhirVersion": "4.0.1"
+ }
+ }
+ ]
+}
+```
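If you are calling the container from Python, the same Spanish document can be submitted through the `azure-ai-textanalytics` client library rather than hand-built JSON. This is a minimal sketch: it assumes the container is listening on `http://localhost:5000` and accepts client-library requests, and the key value is a placeholder for whatever credential your deployment expects.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Assumption: the Text Analytics for health container is reachable at localhost:5000.
client = TextAnalyticsClient(
    endpoint="http://localhost:5000",
    credential=AzureKeyCredential("<your-language-resource-key>"),  # placeholder
)

# The per-document language tag ("es" for Spanish) mirrors the JSON example above.
documents = [
    {"id": "1", "language": "es", "text": "El médico prescrió 200 mg de ibuprofeno."}
]

poller = client.begin_analyze_healthcare_entities(documents)
for doc in poller.result():
    if not doc.is_error:
        for entity in doc.entities:
            print(entity.text, entity.category)
```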
+## Details of the supported model versions for each language
++
+| Language code | model version: | Featured Tag | Specific Tag |
+|:--|:-:|:-:|:-:|
+| en | 2022-03-01 | latest | 3.0.59413252-onprem-amd64 |
+| en,es,it,fr,de,pt | 2022-08-15-preview | latin | 3.0.59413252-latin-onprem-amd64 |
+| he | 2022-08-15-preview | semitic | 3.0.59413252-semitic-onprem-amd64 |
+++++ ## See also
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
If you previously set up your Azure container registry as a chart repository usi
> * After you complete migration from a Helm 2-style (index.yaml-based) chart repository to OCI artifact repositories, use the Helm CLI and `az acr repository` commands to manage the charts. See previous sections in this article. > * The Helm OCI artifact repositories are not discoverable using Helm commands such as `helm search` and `helm repo list`. For more information about Helm commands used to store charts as OCI artifacts, see the [Helm documentation](https://helm.sh/docs/topics/registries/).
-### Enable OCI support
+### Enable OCI support (enabled by default in Helm v3.8.0)
Ensure that you are using the Helm 3 client:
Ensure that you are using the Helm 3 client:
helm version ```
-Enable OCI support in the Helm 3 client. Currently, this support is experimental and subject to change.
+If you are using Helm v3.8.0 or higher, OCI support is enabled by default. If you are using a lower version, you can enable OCI support by setting the following environment variable:
```console export HELM_EXPERIMENTAL_OCI=1
container-registry Container Registry Tutorial Sign Build Push https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md
In this tutorial:
# Download the plugin curl -Lo notation-azure-kv.tar.gz \
- https://github.com/Azure/notation-azure-kv/releases/download/v0.3.0-alpha.1/notation-azure-kv_0.3.0-alpha.1_Linux_amd64.tar.gz
+ https://github.com/Azure/notation-azure-kv/releases/download/v0.3.1-alpha.1/notation-azure-kv_0.3.1-alpha.1_Linux_amd64.tar.gz
# Extract to the plugin directory tar xvzf notation-azure-kv.tar.gz -C ~/.config/notation/plugins/azure-kv notation-azure-kv
notation verify $IMAGE
## Next steps
-[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
+[Enforce policy to only deploy signed container images to Azure Kubernetes Service (AKS) utilizing **ratify** and **gatekeeper**.](https://github.com/Azure/notation-azure-kv/blob/main/docs/nv2-sign-verify-aks.md)
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
description: Learn how to configure your API for MongoDB account capabilities
Previously updated : 07/01/2022 Last updated : 09/06/2022
Copy each of these capabilities. In this example, we have EnableMongo and Disabl
```powershell az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongo, DisableRateLimitingResponses ```
+If you are using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
+```powershell
+az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @("EnableMongo","DisableRateLimitingResponses")
+```
## Disable a capability 1. Retrieve your existing account capabilities:
Copy each of these capabilities. In this example, we have EnableMongo and Disabl
```powershell az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongo ```
+If you are using PowerShell and receive an error using the command above, try using a PowerShell array instead to list the capabilities:
+```powershell
+az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities @("EnableMongo")
+```
## Next steps
az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities En
- Explore MongoDB [samples](nodejs-console-app.md) with Azure Cosmos DB API for MongoDB. - Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning. - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md).
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
+ - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-capacity-planner.md).
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
> * [Go](create-sql-api-go.md) >
-This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb)
+This tutorial is a quick start guide to show how to use Cosmos DB Spark Connector to read from or write to Cosmos DB. Cosmos DB Spark Connector supports Spark 3.1.x and 3.2.x.
-Throughout this quick tutorial, we rely on [Azure Databricks Runtime 8.0 with Spark 3.1.1](/azure/databricks/release-notes/runtime/8.0) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector, but you can also use [Azure Databricks Runtime 10.3 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.3).
+Throughout this quick tutorial, we rely on [Azure Databricks Runtime 10.4 with Spark 3.2.1](/azure/databricks/release-notes/runtime/10.4) and a Jupyter Notebook to show how to use the Cosmos DB Spark Connector.
-You can use any other Spark 3.1.1 or 3.2.1 spark offering as well, also you should be able to use any language supported by Spark (PySpark, Scala, Java, etc.), or any Spark interface you are familiar with (Jupyter Notebook, Livy, etc.).
+You can use any other Spark offering as well (for example, Spark 3.1.1). You should also be able to use any language supported by Spark (PySpark, Scala, Java, etc.) or any Spark interface you are familiar with (Jupyter Notebook, Livy, etc.).
## Prerequisites
-* An active Azure account. If you don't have one, you can sign up for a [free account](https://aka.ms/trycosmosdb). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
+* An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/try/cosmosdb/). Alternatively, you can use the [use Azure Cosmos DB Emulator](../local-emulator.md) for development and testing.
-* [Azure Databricks](/azure/databricks/release-notes/runtime/8.0) runtime 8.0 with Spark 3.1.1 or [Azure Databricks](/azure/databricks/release-notes/runtime/10.3) runtime 10.3 with Spark 3.2.1.
+* [Azure Databricks](/azure/databricks/release-notes/runtime/10.4) runtime 10.4 with Spark 3.2.1
* (Optional) [SLF4J binding](https://www.slf4j.org/manual.html) is used to associate a specific logging framework with SLF4J. SLF4J is only needed if you plan to use logging, also download an SLF4J binding, which will link the SLF4J API with the logging implementation of your choice. See the [SLF4J user manual](https://www.slf4j.org/manual.html) for more information.
-Install Cosmos DB Spark Connector in your spark cluster [using the latest version for Spark 3.1.x](https://aka.ms/azure-cosmos-spark-3-1-download) or [using the latest version for Spark 3.2.x](https://aka.ms/azure-cosmos-spark-3-2-download).
+Install Cosmos DB Spark Connector in your spark cluster [using the latest version for Spark 3.2.x](https://aka.ms/azure-cosmos-spark-3-2-download).
-The getting started guide is based on PySpark however you can use the equivalent scala version as well, and you can run the following code snippet in an Azure Databricks PySpark notebook.
+The getting started guide is based on PySpark and Scala; you can run the following code snippets in an Azure Databricks PySpark or Scala notebook.
## Create databases and containers First, set Cosmos DB account credentials, and the Cosmos DB Database name and container name.
+#### [Python](#tab/python)
+ ```python cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/" cosmosMasterKey = "REPLACEME"
cfg = {
} ```
+#### [Scala](#tab/scala)
+
+```scala
+val cosmosEndpoint = "https://REPLACEME.documents.azure.com:443/"
+val cosmosMasterKey = "REPLACEME"
+val cosmosDatabaseName = "sampleDB"
+val cosmosContainerName = "sampleContainer"
+
+val cfg = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName
+)
+```
++ Next, you can use the new Catalog API to create a Cosmos DB Database and Container through Spark.
+#### [Python](#tab/python)
+ ```python # Configure Catalog Api to be used spark.conf.set("spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
spark.sql("CREATE DATABASE IF NOT EXISTS cosmosCatalog.{};".format(cosmosDatabas
spark.sql("CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')".format(cosmosDatabaseName, cosmosContainerName)) ```
+#### [Scala](#tab/scala)
+
+```scala
+// Configure Catalog Api to be used
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog", "com.azure.cosmos.spark.CosmosCatalog")
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountEndpoint", cosmosEndpoint)
+spark.conf.set(s"spark.sql.catalog.cosmosCatalog.spark.cosmos.accountKey", cosmosMasterKey)
+
+// create a cosmos database using catalog api
+spark.sql(s"CREATE DATABASE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName};")
+
+// create a cosmos container using catalog api
+spark.sql(s"CREATE TABLE IF NOT EXISTS cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} using cosmos.oltp TBLPROPERTIES(partitionKeyPath = '/id', manualThroughput = '1100')")
+```
++ When creating containers with the Catalog API, you can set the throughput and [partition key path](../partitioning-overview.md#choose-partitionkey) for the container to be created. For more information, see the full [Catalog API](https://github.com/Azure/azure-sdk-for-jav) documentation.
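For example, if you prefer autoscale over manual throughput, the same `CREATE TABLE` statement can carry a different table property. The PySpark sketch below assumes the `autoScaleMaxThroughput` TBLPROPERTIES key is supported by the connector version you are using and reuses the variables defined earlier.

```python
# Create a container with autoscale throughput instead of manual throughput.
# Sketch only; confirm the autoScaleMaxThroughput property name in the Catalog API documentation.
spark.sql(
    "CREATE TABLE IF NOT EXISTS cosmosCatalog.{}.{} using cosmos.oltp "
    "TBLPROPERTIES(partitionKeyPath = '/id', autoScaleMaxThroughput = '4000')"
    .format(cosmosDatabaseName, cosmosContainerName)
)
```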
For more information, see the full [Catalog API](https://github.com/Azure/azure-
The name of the data source is `cosmos.oltp`, and the following example shows how you can write a memory dataframe consisting of two items to Cosmos DB:
+#### [Python](#tab/python)
+ ```python spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "Schrodinger cat", 2, False)))\ .toDF("id","name","age","isAlive") \
spark.createDataFrame((("cat-alive", "Schrodinger cat", 2, True), ("cat-dead", "
.save() ```
+#### [Scala](#tab/scala)
+
+```scala
+spark.createDataFrame(Seq(("cat-alive", "Schrodinger cat", 2, true), ("cat-dead", "Schrodinger cat", 2, false)))
+ .toDF("id","name","age","isAlive")
+ .write
+ .format("cosmos.oltp")
+ .options(cfg)
+ .mode("APPEND")
+ .save()
+```
++ Note that `id` is a mandatory field for Cosmos DB. For more information related to ingesting data, see the full [write configuration](https://github.com/Azure/azure-sdk-for-jav#write-config) documentation.
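By default the connector upserts documents. If you want to make that behavior explicit, or switch to a different strategy, the write strategy can be passed as an option; the PySpark sketch below assumes the `spark.cosmos.write.strategy` option and its `ItemOverwrite` value as described in the write configuration documentation linked above.

```python
# Write with an explicit write strategy (ItemOverwrite performs an upsert).
# Sketch only; verify option names in the write configuration documentation.
spark.createDataFrame((("cat-alive", "Schrodinger cat", 3, True),))\
    .toDF("id", "name", "age", "isAlive")\
    .write\
    .format("cosmos.oltp")\
    .options(**cfg)\
    .option("spark.cosmos.write.strategy", "ItemOverwrite")\
    .mode("APPEND")\
    .save()
```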
For more information related to ingesting data, see the full [write configuratio
Using the same `cosmos.oltp` data source, we can query data and use `filter` to push down filters:
+#### [Python](#tab/python)
+ ```python from pyspark.sql.functions import col
df.filter(col("isAlive") == True)\
.show() ```
+#### [Scala](#tab/scala)
+
+```scala
+import org.apache.spark.sql.functions.col
+
+val df = spark.read.format("cosmos.oltp").options(cfg).load()
+
+df.filter(col("isAlive") === true)
+ .withColumn("age", col("age") + 1)
+ .show()
+```
++ For more information related to querying data, see the full [query configuration](https://github.com/Azure/azure-sdk-for-jav#query-config) documentation.
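One query option worth highlighting is pushing a complete custom query down to Cosmos DB rather than relying only on filter pushdown. The PySpark sketch below assumes the `spark.cosmos.read.customQuery` option described in the query configuration documentation is available in the connector version you use.

```python
# Push a custom query down to Cosmos DB instead of loading everything and filtering in Spark.
# Sketch only; verify the option name in the query configuration documentation.
dfCustom = spark.read.format("cosmos.oltp")\
    .options(**cfg)\
    .option("spark.cosmos.read.customQuery", "SELECT * FROM c WHERE c.isAlive = true")\
    .load()

dfCustom.show()
```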
+## Partial document update using Patch
+
+Using the same `cosmos.oltp` data source, we can perform partial document updates in Cosmos DB using the Patch API:
+
+#### [Python](#tab/python)
+
+```python
+cfgPatch = {"spark.cosmos.accountEndpoint": cosmosEndpoint,
+ "spark.cosmos.accountKey": cosmosMasterKey,
+ "spark.cosmos.database": cosmosDatabaseName,
+ "spark.cosmos.container": cosmosContainerName,
+ "spark.cosmos.write.strategy": "ItemPatch",
+ "spark.cosmos.write.bulk.enabled": "false",
+ "spark.cosmos.write.patch.defaultOperationType": "Set",
+ "spark.cosmos.write.patch.columnConfigs": "[col(name).op(set)]"
+ }
+
+id = "<document-id>"
+query = "select * from cosmosCatalog.{}.{} where id = '{}';".format(
+ cosmosDatabaseName, cosmosContainerName, id)
+
+dfBeforePatch = spark.sql(query)
+print("document before patch operation")
+dfBeforePatch.show()
+
+data = [{"id": id, "name": "Joel Brakus"}]
+patchDf = spark.createDataFrame(data)
+
+patchDf.write.format("cosmos.oltp").mode("Append").options(**cfgPatch).save()
+
+dfAfterPatch = spark.sql(query)
+print("document after patch operation")
+dfAfterPatch.show()
+```
+
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
++
+#### [Scala](#tab/scala)
+
+```scala
+val cfgPatch = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName,
+ "spark.cosmos.write.strategy" -> "ItemPatch",
+ "spark.cosmos.write.bulk.enabled" -> "false",
+
+ "spark.cosmos.write.patch.columnConfigs" -> "[col(name).op(set)]"
+ )
+
+val id = "<document-id>"
+val query = s"select * from cosmosCatalog.${cosmosDatabaseName}.${cosmosContainerName} where id = '$id';"
+
+val dfBeforePatch = spark.sql(query)
+println("document before patch operation")
+dfBeforePatch.show()
+val patchDf = Seq(
+ (id, "Joel Brakus")
+ ).toDF("id", "name")
+
+patchDf.write.format("cosmos.oltp").mode("Append").options(cfgPatch).save()
+val dfAfterPatch = spark.sql(query)
+println("document after patch operation")
+dfAfterPatch.show()
+```
+
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
+++ ## Schema inference When querying data, the Spark Connector can infer the schema based on sampling existing items by setting `spark.cosmos.read.inferSchema.enabled` to `true`.
+#### [Python](#tab/python)
+ ```python df = spark.read.format("cosmos.oltp").options(**cfg)\ .option("spark.cosmos.read.inferSchema.enabled", "true")\ .load() df.printSchema()
-```
-Alternatively, you can pass the custom schema you want to be used to read the data:
-```python
+# Alternatively, you can pass the custom schema you want to be used to read the data:
+ customSchema = StructType([ StructField("id", StringType()), StructField("name", StringType()),
df = spark.read.schema(customSchema).format("cosmos.oltp").options(**cfg)\
.load() df.printSchema()
-```
-If no custom schema is specified and schema inference is disabled, then the resulting data will be returning the raw Json content of the items:
+# If no custom schema is specified and schema inference is disabled, then the resulting data will be returning the raw Json content of the items:
-```python
df = spark.read.format("cosmos.oltp").options(**cfg)\ .load() df.printSchema() ```
+#### [Scala](#tab/scala)
+
+```scala
+val cfgWithAutoSchemaInference = Map("spark.cosmos.accountEndpoint" -> cosmosEndpoint,
+ "spark.cosmos.accountKey" -> cosmosMasterKey,
+ "spark.cosmos.database" -> cosmosDatabaseName,
+ "spark.cosmos.container" -> cosmosContainerName,
+ "spark.cosmos.read.inferSchema.enabled" -> "true"
+)
+
+val df = spark.read.format("cosmos.oltp").options(cfgWithAutoSchemaInference).load()
+df.printSchema()
+
+df.show()
+```
++ For more information related to schema inference, see the full [schema inference configuration](https://github.com/Azure/azure-sdk-for-jav#schema-inference-config) documentation. ## Configuration reference
cosmos-db How To Delete By Partition Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-delete-by-partition-key.md
+
+ Title: Delete items by partition key value using the Cosmos SDK (preview)
+description: Learn how to delete items by partition key value using the Cosmos SDKs
+++++ Last updated : 08/19/2022+++
+# Delete items by partition key value - SQL API (preview)
+
+This article explains how to use the Cosmos SDKs to delete all items by logical partition key value.
+
+## Feature overview
+
+The delete by partition key feature is an asynchronous, background operation that allows you to delete all documents with the same logical partition key value, using the Cosmos SDK.
+
+Because the number of documents to be deleted may be large, the operation runs in the background. Though the physical deletion operation runs in the background, the effects will be available immediately, as the documents to be deleted will not appear in the results of queries or read operations.
+
+To help limit the resources used by this background task, the delete by partition key operation is constrained to consume at most 10% of the total available RU/s on the container each second.
+
+## Getting started
+
+To use the feature, your Cosmos account must be enrolled in the preview. To enroll, submit a request for the **DeleteAllItemsByPartitionKey** feature via the [**Preview Features** page](../../azure-resource-manager/management/preview-features.md) in your Azure Subscription overview page.
++
+#### [.NET](#tab/dotnet-example)
+
+## Sample code
+Use [version 3.25.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) (or a higher preview version) of the Azure Cosmos DB .NET SDK to delete items by partition key.
+
+```csharp
+// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
+
+// Get reference to the container
+var container = cosmosClient.GetContainer("DatabaseName", "ContainerName");
+
+// Delete by logical partition key
+ResponseMessage deleteResponse = await container.DeleteAllItemsByPartitionKeyStreamAsync(new PartitionKey("Contoso"));
+
+ if (deleteResponse.IsSuccessStatusCode) {
+ Console.WriteLine($"Delete all documents with partition key operation has successfully started");
+}
+```
+#### [Java](#tab/java-example)
+
+Use [version 4.19.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos) (or a higher version) of the Azure Cosmos DB Java SDK to delete items by partition key. The delete by partition key API will be marked as beta.
++
+```java
+// Suppose our container is partitioned by tenantId, and we want to delete all the data for a particular tenant Contoso
+
+// Delete by logical partition key
+CosmosItemResponse<?> deleteResponse = container.deleteAllItemsByPartitionKey(
+ new PartitionKey("Contoso"), new CosmosItemRequestOptions()).block();
+```
+
+
+### Frequently asked questions (FAQ)
+#### Are the results of the delete by partition key operation reflected immediately?
+Yes, once the delete by partition key operation starts, the documents to be deleted will not appear in the results of queries or read operations. This also means that you can write a new document with the same ID and partition key as a document to be deleted without resulting in a conflict.
+
+See [Known issues](#known-issues) for exceptions.
+
+#### What happens if I issue a delete by partition key operation, and then immediately write a new document with the same partition key?
+When the delete by partition key operation is issued, only the documents that exist in the container at that point in time with the partition key value will be deleted. Any new documents that come in will not be in scope for the deletion.
+
#### How is the delete by partition key operation prioritized among other operations against the container?
+By default, the delete by partition key value operation can consume up to a reserved fraction - 0.1, or 10% - of the overall RU/s on the resource. Any Request Units (RUs) in this bucket that are unused will be available for other non-background operations, such as reads, writes, and queries.
+
+For example, suppose you have provisioned 1000 RU/s on a container. There is an ongoing delete by partition key operation that consumes 100 RUs each second for 5 seconds. During each of these 5 seconds, there are 900 RUs available for non-background database operations. Once the delete operation is complete, all 1000 RU/s are now available again.
+
+### Known issues
+For certain scenarios, the effects of a delete by partition key operation are not guaranteed to be immediately reflected. The effects may be partially visible as the operation progresses.
+
+- [Aggregate queries](sql-query-aggregate-functions.md) that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
+- Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.
+- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It is not recommended to use this preview feature if you have a scenario that requires continuous backup.
+
+## How to give feedback or report an issue/bug
+* Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback.
+
+### SDK requirements
+
+Find the latest version of the SDK that supports this feature.
+
+| SDK | Supported versions | Package manager link |
+| | | |
+| **.NET SDK v3** | *>= 3.25.0-preview (must be preview version)* | <https://www.nuget.org/packages/Microsoft.Azure.Cosmos/> |
+| **Java SDK v4** | *>= 4.19.0 (API is marked as beta)* | <https://mvnrepository.com/artifact/com.azure/azure-cosmos> |
+
+Support for other SDKs is planned for the future.
+
+## Next steps
+
+See the following articles to learn about more SDK operations in Azure Cosmos DB.
+- [Query an Azure Cosmos container
+](how-to-query-container.md)
+- [Transactional batch operations in Azure Cosmos DB using the .NET SDK
+](transactional-batch.md)
cosmos-db Troubleshoot Sdk Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-sdk-availability.md
Title: Diagnose and troubleshoot the availability of Azure Cosmos SDKs in multir
description: Learn all about the Azure Cosmos SDK availability behavior when operating in multi regional environments. Previously updated : 03/28/2022 Last updated : 09/07/2022
Every 5 minutes, the Azure Cosmos SDK client reads the account configuration and
If you remove a region and later add it back to the account, if the added region has a higher regional preference order in the SDK configuration than the current connected region, the SDK will switch back to use this region permanently. After the added region is detected, all the future requests are directed to it.
-If you configure the client to preferably connect to a region that the Azure Cosmos account does not have, the preferred region is ignored. If you add that region later, the client detects it and will switch permanently to that region.
+If you configure the client to preferably connect to a region that the Azure Cosmos account doesn't have, the preferred region is ignored. If you add that region later, the client detects it, and will switch permanently to that region.
## <a id="manual-failover-single-region"></a>Fail over the write region in a single write region account
-If you initiate a failover of the current write region, the next write request will fail with a known backend response. When this response is detected, the client will query the account to learn the new write region and proceeds to retry the current operation and permanently route all future write operations to the new region.
+If you initiate a failover of the current write region, the next write request will fail with a known backend response. When this response is detected, the client will query the account to learn the new write region, and proceed to retry the current operation and permanently route all future write operations to the new region.
## Regional outage
If the account is single write region and the regional outage occurs during a wr
## Session consistency guarantees
-When using [session consistency](../consistency-levels.md#guarantees-associated-with-consistency-levels), the client needs to guarantee that it can read its own writes. In single write region accounts where the read region preference is different from the write region, there could be cases where the user issues a write and when doing a read from a local region, the local region has not yet received the data replication (speed of light constraint). In such cases, the SDK detects the specific failure on the read operation and retries the read on the primary region to ensure session consistency.
+When using [session consistency](../consistency-levels.md#guarantees-associated-with-consistency-levels), the client needs to guarantee that it can read its own writes. In single write region accounts where the read region preference is different from the write region, there could be cases where the user issues a write and when doing a read from a local region, the local region hasn't yet received the data replication (speed of light constraint). In such cases, the SDK detects the specific failure on the read operation and retries the read on the primary region to ensure session consistency.
## Transient connectivity issues on TCP protocol
-In scenarios where the Azure Cosmos SDK client is configured to use the TCP protocol, for a given request, there might be situations where the network conditions are temporarily affecting the communication with a particular endpoint. These temporary network conditions can surface as TCP timeouts and Service Unavailable (HTTP 503) errors. The client will retry the request locally on the same endpoint for some seconds before surfacing the error.
+In scenarios where the Azure Cosmos SDK client is configured to use the TCP protocol, for a given request, there might be situations where the network conditions are temporarily affecting the communication with a particular endpoint. These temporary network conditions can surface as TCP timeouts and Service Unavailable (HTTP 503) errors. The client will, if possible, [retry the request locally](conceptual-resilient-sdk-applications.md#timeouts-and-connectivity-related-failures-http-408503) on the same endpoint for some seconds.
-If the user has configured a preferred region list with more than one region and the Azure Cosmos account is multiple write regions or single write region and the operation is a read request, the client will detect the local failure, and retry that single operation in the next region from the preference list.
+If the user has configured a preferred region list with more than one region and the client exhausted all local retries, it can attempt to retry that single operation in the next region from the preference list. Write operations can only be retried in other region if the Azure Cosmos DB account has multiple write regions enabled, while read operations can be retried in any available region.
## Next steps
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
Previously updated : 10/21/2021 Last updated : 09/07/2022
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [App Service](prepay-app-service.md) - [App Service - JBoss EA Integrated Support](prepay-jboss-eap-integrated-support-app-service.md)
+- [Azure Backup](../../backup/backup-azure-reserved-pricing-optimize-cost.md)
- [Azure Cache for Redis](../../azure-cache-for-redis/cache-reserved-pricing.md) - [Azure Data Factory](../../data-factory/data-flow-understand-reservation-charges.md?toc=/azure/cost-management-billing/reservations/toc.json) - [Azure Database for MariaDB](../../mariadb/concept-reserved-pricing.md)
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
In this section, you will learn to create Azure Automation runbook that executes
### Create your Azure Automation account
-If you do not have an Azure Automation account already, create one by following the instructions in this step. For detailed steps, see [Create an Azure Automation account](../automation/quickstarts/create-account-portal.md) article. As part of this step, you create an **Azure Run As** account (a service principal in your Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it is the same subscription that contains your ADF with Azure SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources.
+If you do not have an Azure Automation account already, create one by following the instructions in this step. For detailed steps, see [Create an Azure Automation account](/azure/automation/quickstarts/create-azure-automation-account-portal) article. As part of this step, you create an **Azure Run As** account (a service principal in your Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it is the same subscription that contains your ADF with Azure SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources.
1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, ADF UI/app is only supported in Microsoft Edge and Google Chrome web browsers. 2. Sign in to [Azure portal](https://portal.azure.com/).
See the following articles from SSIS documentation:
- [Deploy, run, and monitor an SSIS package on Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial) - [Connect to SSIS catalog on Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database) - [Schedule package execution on Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)-- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
+- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
The following table shows Azure Database Migration Service support for **offline
| **Azure DB for PostgreSQL - Hyperscale (Citus)** | PostgreSQL | X | | | Amazon RDS PostgreSQL | X | |
-1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose MySQL as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
+1. If your source database is already in Azure PaaS (for example, Azure DB for MySQL or Azure DB for PostgreSQL), choose the corresponding engine when creating your migration activity. For example, if you're migrating from Azure DB for MySQL - Single Server to Azure DB for MySQL - Flexible Server, choose Azure Database for MySQL - Single Server as the source engine during scenario creation. If you're migrating from Azure DB for PostgreSQL - Single Server to Azure DB for PostgreSQL - Flexible Server, choose PostgreSQL as the source engine during scenario creation.
### Online (continuous sync) migration support
The following table shows Azure Database Migration Service support for **online*
| | Amazon RDS SQL | X | | | | Oracle | X | | | **Azure Cosmos DB** | MongoDB | ✔ | GA |
-| **Azure DB for MySQL** | MySQL | X | |
-| | Amazon RDS MySQL | X | |
+| **Azure DB for MySQL - Flexible Server** | Azure DB for MySQL - Single Server | ✔ | Preview |
+| | MySQL | ✔ | Preview |
+| | Amazon RDS MySQL | ✔ | Preview |
| **Azure DB for PostgreSQL - Single server** | PostgreSQL | ✔ | GA | | | Azure DB for PostgreSQL - Single server <sup>1</sup> | ✔ | GA | | | Amazon RDS PostgreSQL | ✔ | GA |
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
+
+ Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal"
+
+description: "Learn to perform an offline migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."
++++ Last updated : 09/07/2022+++++
+# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal
+
+You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we'll perform an offline migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
+
+> [!NOTE]
+> DMS supports migrating from lower version MySQL servers (v5.6 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a different region, resource group, and subscription for the target server than that specified for your source server.
+
+> [!IMPORTANT]
> For online migrations, you can use the Enable Transactional Consistency feature supported by DMS together with [Data-in replication](./../mysql/single-server/concepts-data-in-replication.md) or [replicate changes](https://techcommunity.microsoft.com/t5/microsoft-data-migration-blog/azure-dms-mysql-replicate-changes-now-in-preview/ba-p/3601564). Additionally, you can use the online migration scenario to migrate by following the tutorial [here](./tutorial-mysql-azure-single-to-flex-offline-portal.md).
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+>
+> * Implement best practices for creating a flexible server for faster data loads using DMS.
+> * Create and configure a target flexible server.
+> * Create a DMS instance.
+> * Create a MySQL migration project in DMS.
+> * Migrate a MySQL schema using DMS.
+> * Run the migration.
+> * Monitor the migration.
+> * Perform post-migration steps.
+> * Implement best practices for performing a migration.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Create or use an existing instance of Azure Database for MySQL – Single Server (the source server).
+* To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:
+ * "READ" privilege on the source database.
+ * "SELECT" privilege for the ability to select objects from the database.
+ * If migrating views, user must have the "SHOW VIEW" privilege.
+ * If migrating triggers, user must have the "TRIGGER" privilege.
+ * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege:
+ * For 5.7, have "SELECT" access to the "mysql.proc" table.
+ * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine.
+ * If migrating events, the user must have the "EVENT" privilege for the database from which the event is to be shown.
+
+## Limitations
+
+As you prepare for the migration, be sure to consider the following limitations.
+
+* When migrating non-table objects, DMS does not support renaming databases.
+* When migrating to a target server with bin_log enabled, be sure to enable log_bin_trust_function_creators to allow for creation of routines and triggers.
+* When migrating the schema, DMS does not support creating a database on the target server.
+* Currently, DMS does not support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration the default definer for tables will be set to the login used to run the migration.
+* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration will not occur. Note that selecting a table for schema migration also selects it for data movement.
+
+## Best practices for creating a flexible server for faster data loads using DMS
+
+DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you are free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
+
+* Select the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores as in the following table:
+
+| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
+| - | - |:-:|:-:|
+| Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
+| Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
+| General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
+| General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
+| General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
+| General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
+| General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
+| Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
+| Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
+| Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
+| Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
+
+\* For the migration, select General Purpose 16 VCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
+
+* The MySQL version for the target flexible server must be greater than or equal to that of the source single server.
+* Unless you need to deploy the target flexible server in a specific zone, set the value of the Availability Zone parameter to 'No preference'.
+* For network connectivity, on the Networking tab, if the source single server has private endpoints or private links configured, select Private Access; otherwise, select Public Access.
+* Copy all firewall rules from the source single server to the target flexible server.
+* Copy all the name/value tags from the single server to the flexible server during creation itself.
+
+## Create and configure the target flexible server
+
+With these best practices in mind, create your target flexible server and then configure it.
+
+* Create the target flexible server. For guided steps, see the quickstart [Create an Azure Database for MySQL flexible server](./../mysql/flexible-server/quickstart-create-server-portal.md).
+* Next, to configure the newly created target flexible server, proceed as follows:
+ * The user performing the migration requires the following permissions:
+ * To create tables on the target, the user must have the "CREATE" privilege.
+ * If migrating a table with "DATA DIRECTORY" or "INDEX DIRECTORY" partition options, the user must have the "FILE" privilege.
+ * If migrating to a table with a "UNION" option, the user must have the "SELECT," "UPDATE," and "DELETE" privileges for the tables you map to a MERGE table.
+ * If migrating views, you must have the "CREATE VIEW" privilege.
+ Keep in mind that some privileges may be necessary depending on the contents of the views. Please refer to the MySQL docs specific to your version for "CREATE VIEW STATEMENT" for details.
+ * If migrating events, the user must have the "EVENT" privilege.
+ * If migrating triggers, the user must have the "TRIGGER" privilege.
+ * If migrating routines, the user must have the "CREATE ROUTINE" privilege.
+ * Create a target database, though it need not be populated with tables/views, etc.
+ * Set the appropriate character set, collation, and any other applicable schema settings prior to starting the migration, as they may affect the DEFAULT set in some of the object definitions.
+ * Additionally, if migrating non-table objects, be sure to use the same name for the target schema as is used on the source.
+ * Configure the server parameters on the target flexible server as follows:
+ * Set the TLS version and require_secure_transport server parameter to match the values on the source server.
+ * Configure server parameters on the target server to match any non-default values used on the source server.
+ * To ensure faster data loads when using DMS, configure the following server parameters as described.
+ * max_allowed_packet – set to 1073741824 (that is, 1 GB) to prevent any connection issues due to large rows.
+ * slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.
+ * query_store_capture_mode – set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
+ * innodb_buffer_pool_size – innodb_buffer_pool_size can only be increased by scaling up compute for the Azure Database for MySQL server. Scale up the server to the 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase innodb_buffer_pool_size.
+ * innodb_io_capacity & innodb_io_capacity_max – change to 9000 from the Server parameters in the Azure portal to improve IO utilization and optimize migration speed.
+ * innodb_read_io_threads & innodb_write_io_threads – change to 4 from the Server parameters in the Azure portal to improve the speed of migration.
+ * Configure the firewall rules and replicas on the target server to match those on the source server.
+ * Replicate the following server management features from the source single server to the target flexible server:
+ * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
+ * Locks (read-only and delete)
+ * Alerts
+ * Tasks
+ * Resource Health Alerts
+
+## Set up DMS
+
+With your target flexible server deployed and configured, you next need to set up DMS to migrate your single server to a flexible server.
+
+### Register the resource provider
+
+To register the Microsoft.DataMigration resource provider, perform the following steps.
+
+1. Before creating your first DMS instance, sign in to the Azure portal, and then search for and select **Subscriptions**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/1-subscriptions.png" alt-text="Screenshot of the Azure Marketplace.":::
+
+2. Select the subscription that you want to use to create the DMS instance, and then select **Resource providers**.
+ :::image type="content" source="media/tutorial-Azure-mysql-single-to-flex-offline/2-resource-provider.png" alt-text="Screenshot of selecting a resource provider.":::
+
+3. Search for the term "Migration", and then, for **Microsoft.DataMigration**, select **Register**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/3-register.png" alt-text="Screenshot of a Select Register.":::
+
+### Create a Database Migration Service (DMS) instance
+
+01. In the Azure portal, select + **Create a resource**, search for the term "Azure Database Migration Service", and then select **Azure Database Migration Service** from the drop-down list.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/4-dms-portal-marketplace.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
+
+02. On the **Azure Database Migration Service** screen, select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/5-dms-portal-marketplace-create.png" alt-text="Screenshot of a Create Azure Database Migration Service instance.":::
+
+03. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, and then select **Azure Database for MySQL** as target server type, and then select **Select**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/6-create-dms-service-scenario-offline.png" alt-text="Screenshot of a Select Migration Scenario.":::
+
+04. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
+
+05. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
+
+06. To the right of **Pricing tier**, select **Configure tier**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/7-project-details.png" alt-text="Screenshot of a Select Configure Tier.":::
+
+07. On the Configure page, select the pricing tier and number of vCores for your DMS instance, and then select Apply.
+ For more information on DMS costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/8-configure-pricing-tier.png" alt-text="Screenshot of a Select Pricing tier.":::
+
+ Next, we need to specify the VNet that will provide the DMS instance with access to the source single server and the target flexible server.
+
+08. On the **Create Migration Service** page, select **Next : Networking >>**.
+
+09. On the **Networking** tab, select an existing VNet from the list or provide the name of new VNet to create, and then select **Review + Create**.
+ For more information, see the article [Create a virtual network using the Azure portal](./../virtual-network/quick-create-portal.md).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/8-1-networking.png" alt-text="Screenshot of a Select Networking.":::
+
+ > [!IMPORTANT]
+ > Your vNet must be configured with access to both the source single server and the target flexible server, so be sure to:
+ >
+ > * Create a server-level firewall rule or [configure VNET service endpoints](./../mysql/single-server/how-to-manage-vnet-using-portal.md) for both the source and target Azure Database for MySQL servers to allow the VNet for Azure Database Migration Service access to the source and target databases.
+ > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more details about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
+
+ > [!NOTE]
+ > If you want to add tags to the service, select Next : Tags to advance to the Tags tab first. Adding tags to the service is optional.
+
+10. Navigate to the **Review + create** tab, review the configurations, view the terms, and then select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/9-review-create.png" alt-text="Screenshot of a Select Review+Create.":::
+ Deployment of your instance of DMS now begins. The message **Deployment is in progress** appears for a few minutes, and then the message changes to **Your deployment is complete**.
+
+11. Select **Go to resource**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/9-1-go-to-resource.png" alt-text="Screenshot of a Select Go to resource.":::
+
+### Create a migration project
+
+To create a migration project, perform the following steps.
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
+
+2. In the search results, select the DMS instance that you just created, and then select + **New Migration Project**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/11-select-create.png" alt-text="Screenshot of a CSelect a new migration project.":::
+
+3. On the **New migration project** page, specify a name for the project, in the Source server type selection box, select **Azure Database For MySQL – Single Server**, in the Target server type selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Offline migration**, and then select **Create and run activity**.
+ > [!NOTE]
+ > Selecting Create project only as the migration activity type will only create the migration project; you can then run the migration project at a later time.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/12-create-project-offline.png" alt-text="Screenshot of a Create a new migration project.":::
+
+### Configure the migration project
+
+To configure your DMS migration project, perform the following steps.
+
+1. On the **Select source** screen, specify the connection details for the source MySQL instance.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/13-select-source-offline.png" alt-text="Screenshot of a Add source details screen.":::
+ When performing an offline migration, it's important to stop incoming traffic on the source when configuring the migration project.
+
+2. To proceed with the offline migration, select the **Make Source Server Read Only** check box.
+Selecting this check box prevents Write/Delete operations on the source server during migration, which ensures the data integrity of the target database as the source is migrated. When you make your source server read only as part of the migration process, all the databases on the source server, regardless of whether they are selected for migration, will be read-only.
+ > [!NOTE]
+ > Alternatively, if you were performing an online migration, you would select the **Enable Transactional Consistency** check box. For more information about consistent backup, see [MySQL Consistent Backup](./migrate-azure-mysql-consistent-backup.md).
+
+3. Select **Next : Select target>>**, and then, on the **Select target** screen, specify the connection details for the target flexible server.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/15-select-target.png" alt-text="Screenshot of a Select target.":::
+
+4. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/16-select-db.png" alt-text="Screenshot of a Select databases.":::
+
+5. In the **Select databases** section, under **Source Database**, select the database(s) to migrate.
+ The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped.
+
+6. Select **Next : Select tables>>** to navigate to the Select tables tab.
+ Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data.
+
+7. Select the tables that you want to migrate.
+ If you select a table in the source database that doesn't exist on the target database, the box under **Migrate schema** is selected by default. For tables that do exist in the target database, a note indicates that the selected table already contains data and will be truncated. In addition, if the schema of a table on the target server does not match the schema on the source, the table will be dropped before the migration continues.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/17-select-tables.png" alt-text="Screenshot of a Select Tables.":::
+
+ DMS validates your inputs, and if the validation passes, you will be able to start the migration.
+
+8. After configuring for schema migration, select **Next : Summary>>**.
+ > [!NOTE]
+ > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
+
+9. On the **Summary** tab, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/18-summary-offline.png" alt-text="Screenshot of a Select Summary.":::
+
+10. Select **Start migration**.
+ The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/19-running-project-offline.png" alt-text="Screenshot of a Running status.":::
+
+### Monitor the migration
+
+1. On the migration activity screen, select **Refresh** to update the display and view the progress and the number of tables completed.
+
+2. To see the status of each table during the migration, select the database name and then select **Refresh** to update the display.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/20-monitor-migration-offline.png" alt-text="Screenshot of a Monitoring migration.":::
+
+3. Select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/21-status-complete-offline.png" alt-text="Screenshot of a Status of Migration.":::
+
+## Perform post-migration activities
+
+When the migration is complete, be sure to complete the following post-migration activities.
+
+* Perform sanity testing of the application against the target database to certify the migration.
+* Update the connection string to point to the new target flexible server.
+* Delete the source single server after you have ensured application continuity.
+* If you scaled up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores, as in the table below (a CLI sketch for scaling back and cleaning up follows this list).
+
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Tier | Flexible Server Compute Size |
+ | - | - |:-:|:-:|
+ | Basic | 1 | Burstable | Standard_B1s |
+ | Basic | 2 | Burstable | Standard_B2s |
+ | General Purpose | 4 | General Purpose | Standard_D4ds_v4 |
+ | General Purpose | 8 | General Purpose | Standard_D8ds_v4 |
+* Clean up Data Migration Service resources:
+ 1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+ 2. Select your migration service instance from the search results and select **Delete service**.
+ 3. On the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the service, and then select **Delete**.
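+
+If you prefer the command line for these last two activities, the following Azure CLI sketch scales the target flexible server back down and then deletes the DMS instance. The resource names and the SKU are placeholders; pick the compute tier and size from the table above:
+
+```bash
+# Scale the target flexible server back to a size matching the original single server (example values).
+az mysql flexible-server update \
+  --resource-group myResourceGroup \
+  --name mytargetserver \
+  --tier Burstable \
+  --sku-name Standard_B1s
+
+# Delete the DMS instance once the migration is verified.
+az dms delete --resource-group myResourceGroup --name myDmsInstance
+```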
+
+## Migration best practices
+
+When performing a migration, be sure to keep the following best practices in mind.
+
+* As part of discovery and assessment, collect critical data such as the server SKU, CPU usage, storage, database sizes, and extensions usage to help with the migration.
+* Plan the mode of migration for each database. For simpler migrations and smaller databases, consider offline mode.
+* Perform test migrations before migrating for production:
+ * Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing. If you're migrating to a higher MySQL version, test for application compatibility.
+ * After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready. The production migration requires close monitoring.
+* Redirect all dependent applications to access the new primary database and open the applications for production usage.
+* After the application starts running on the target flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+* For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
+* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when performing migrations using DMS, see the article [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
+
+ Title: "Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal"
+
+description: "Learn to perform an online migration from Azure Database for MySQL - Single Server to Flexible Server by using Azure Database Migration Service."
++++ Last updated : 09/07/2022+++++
+# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal
+
+You can migrate an instance of Azure Database for MySQL – Single Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we'll perform an online migration of a sample database from an Azure Database for MySQL single server to a MySQL flexible server (both running version 5.7) using a DMS migration activity.
+
+> [!NOTE]
+> DMS online migration is now in Preview. DMS supports migration for MySQL versions - 5.7 and 8.0, and also supports migration from lower version MySQL servers (v5.7 and above) to higher versions. In addition, DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you can select a different region, resource group, and subscription for the target server than that specified for your source server.
+
+In this tutorial, you will learn how to:
+
+> [!div class="checklist"]
+>
+> * Implement best practices for creating a flexible server for faster data loads using DMS.
+> * Create and configure a target flexible server.
+> * Create a DMS instance.
+> * Create a MySQL migration project in DMS.
+> * Migrate a MySQL schema using DMS.
+> * Run the migration.
+> * Monitor the migration.
+> * Perform post-migration steps.
+> * Implement best practices for performing a migration.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Create or use an existing instance of Azure Database for MySQL – Single Server (the source server).
+* To complete the replicate changes migration successfully, ensure that the following prerequisites are in place:
+ * Use the MySQL command-line tool of your choice to determine whether log_bin is enabled on the source server. The binlog is not always turned on by default, so verify that it is enabled before starting the migration. To determine whether log_bin is enabled on the source server, run the command `SHOW VARIABLES LIKE 'log_bin';` (a CLI sketch of these checks follows this list).
+ * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permission on the source server for reading and applying the binlog.
+ * If you're targeting a replicate changes migration, configure the binlog_expire_logs_seconds parameter on the source server to ensure that binlog files are not purged before the replica commits the changes. We recommend at least two days to start. After a successful cutover, the value can be reset.
+* To complete a schema migration successfully, on the source server, the user performing the migration requires the following privileges:
+ * "READ" privilege on the source database.
+ * "SELECT" privilege for the ability to select objects from the database.
+ * If migrating views, user must have the "SHOW VIEW" privilege.
+ * If migrating triggers, user must have the "TRIGGER" privilege.
+ * If migrating routines (procedures and/or functions), the user must be named in the definer clause of the routine. Alternatively, based on version, the user must have the following privilege:
+ * For 5.7, have "SELECT" access to the "mysql.proc" table.
+ * For 8.0, have "SHOW_ROUTINE" privilege or have the "CREATE ROUTINE," "ALTER ROUTINE," or "EXECUTE" privilege granted at a scope that includes the routine.
+ * If migrating events, the user must have the "EVENT" privilege for the database from which the event is to be shown.
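+
+The binlog-related prerequisites above can be checked and applied from the command line. This is only a sketch with placeholder server and user names; it assumes the Azure CLI and the mysql client are available:
+
+```bash
+# Confirm that binary logging is enabled on the source single server.
+mysql -h mysourceserver.mysql.database.azure.com -u myadmin@mysourceserver -p \
+  -e "SHOW VARIABLES LIKE 'log_bin';"
+
+# Grant the migration user the replication permissions it needs to read and apply the binlog.
+mysql -h mysourceserver.mysql.database.azure.com -u myadmin@mysourceserver -p \
+  -e "GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'migration_user'@'%';"
+
+# Keep binlog files for at least two days so the replica can catch up (value in seconds).
+az mysql server configuration set \
+  --resource-group myResourceGroup \
+  --server-name mysourceserver \
+  --name binlog_expire_logs_seconds \
+  --value 172800
+```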
+
+## Limitations
+
+As you prepare for the migration, be sure to consider the following limitations.
+
+* When migrating non-table objects, DMS does not support renaming databases.
+* When migrating to a target server with bin_log enabled, be sure to enable log_bin_trust_function_creators to allow for creation of routines and triggers.
+* When migrating the schema, DMS does not support creating a database on the target server.
+* Currently, DMS does not support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration the default definer for tables will be set to the login used to run the migration.
+* Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration will not occur. Note that selecting a table for schema migration also selects it for data movement.
+* Online migration support is limited to the ROW binlog format (see the check sketched after this list).
+* Online migration only replicates DML changes; replicating DDL changes is not supported. Do not make any schema changes to the source while replication is in progress.
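+
+Because online migration requires the ROW binlog format, it's worth confirming the setting on the source before you start. A minimal check, using placeholder names, might look like this:
+
+```bash
+# Verify that the source single server uses ROW-based binary logging.
+az mysql server configuration show \
+  --resource-group myResourceGroup \
+  --server-name mysourceserver \
+  --name binlog_format \
+  --query value -o tsv
+```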
+
+## Best practices for creating a flexible server for faster data loads using DMS
+
+DMS supports cross-region, cross-resource group, and cross-subscription migrations, so you are free to select appropriate region, resource group and subscription for your target flexible server. Before you create your target flexible server, consider the following configuration guidance to help ensure faster data loads using DMS.
+
+* Select the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores as in the following table (a CLI sketch for creating the server follows this list):
+
+| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Tier | Flexible Server Compute Size |
+| - | - |:-:|:-:|
+| Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
+| Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
+| General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
+| General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
+| General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
+| General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
+| General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
+| Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
+| Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
+| Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
+| Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
+
+\* For the migration, select General Purpose 16 VCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Perform post-migration activities section later in this article.
+
+* The MySQL version for the target flexible server must be greater than or equal to that of the source single server.
+* Unless you need to deploy the target flexible server in a specific zone, set the value of the Availability Zone parameter to 'No preference'.
+* For network connectivity, on the Networking tab, if the source single server has private endpoints or private links configured, select Private Access; otherwise, select Public Access.
+* Copy all firewall rules from the source single server to the target flexible server.
+* Copy all the name/value tags from the single server to the flexible server during creation.
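+
+As one way to apply these recommendations, you can create the target flexible server from the Azure CLI. The sketch below uses placeholder names and the General Purpose 16-vCore size suggested for the migration window:
+
+```bash
+# Create the target flexible server sized for a faster migration (placeholder values).
+az mysql flexible-server create \
+  --resource-group myResourceGroup \
+  --name mytargetserver \
+  --location eastus \
+  --tier GeneralPurpose \
+  --sku-name Standard_D16ds_v4 \
+  --version 5.7 \
+  --admin-user myadmin \
+  --admin-password <secure-password>
+```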
+
+## Create and configure the target flexible server
+
+With these best practices in mind, create your target flexible server and then configure it.
+
+* Create the target flexible server. For guided steps, see the quickstart [Create an Azure Database for MySQL flexible server](./../mysql/flexible-server/quickstart-create-server-portal.md).
+* Next, to configure the newly created target flexible server, proceed as follows:
+ * The user performing the migration requires the following permissions:
+ * Ensure that the user has "REPLICATION_APPLIER" or "BINLOG_ADMIN" permission on target server for applying the bin log.
+ * Ensure that the user has "REPLICATION SLAVE" permission on target server.
+ * Ensure that the user has "REPLICATION CLIENT" and "REPLICATION SLAVE" permission on source server for reading and applying the bin log.
+ * To create tables on the target, the user must have the "CREATE" privilege.
+ * If migrating a table with "DATA DIRECTORY" or "INDEX DIRECTORY" partition options, the user must have the "FILE" privilege.
+ * If migrating to a table with a "UNION" option, the user must have the "SELECT," "UPDATE," and "DELETE" privileges for the tables you map to a MERGE table.
+ * If migrating views, you must have the "CREATE VIEW" privilege.
+ Keep in mind that some privileges may be necessary depending on the contents of the views. Please refer to the MySQL docs specific to your version for "CREATE VIEW STATEMENT" for details.
+ * If migrating events, the user must have the "EVENT" privilege.
+ * If migrating triggers, the user must have the "TRIGGER" privilege.
+ * If migrating routines, the user must have the "CREATE ROUTINE" privilege.
+ * Create a target database with the same name as on the source server, though it need not be populated with tables/views, etc.
+ * Set the appropriate character sets, collations, and any other applicable schema settings prior to starting the migration, as this may affect the DEFAULT set in some of the object definitions.
+ * Additionally, if migrating non-table objects, be sure to use the same name for the target schema as is used on the source.
+ * Configure the server parameters on the target flexible server as follows:
+ * Set the TLS version and require_secure_transport server parameter to match the values on the source server.
+ * Configure server parameters on the target server to match any non-default values used on the source server.
+ * To ensure faster data loads when using DMS, configure the following server parameters as described (a CLI sketch follows this list).
+ * max_allowed_packet – set to 1073741824 (i.e., 1 GB) to prevent any connection issues due to large rows.
+ * slow_query_log – set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.
+ * query_store_capture_mode – set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
+ * innodb_buffer_pool_size – Innodb_buffer_pool_size can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
+ * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
+ * innodb_write_io_threads & innodb_read_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
+ * Configure the firewall rules and replicas on the target server to match those on the source server.
+ * Replicate the following server management features from the source single server to the target flexible server:
+ * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
+ * Locks (read-only and delete)
+ * Alerts
+ * Tasks
+ * Resource Health Alerts
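+
+The server parameter changes called out above can also be scripted. The following Azure CLI sketch uses placeholder names; adjust the values to your environment before running it against the target flexible server:
+
+```bash
+# Tune target flexible server parameters for faster data loads during migration (placeholder names).
+az mysql flexible-server parameter set --resource-group myResourceGroup --server-name mytargetserver \
+  --name max_allowed_packet --value 1073741824
+az mysql flexible-server parameter set --resource-group myResourceGroup --server-name mytargetserver \
+  --name slow_query_log --value OFF
+az mysql flexible-server parameter set --resource-group myResourceGroup --server-name mytargetserver \
+  --name innodb_io_capacity --value 9000
+az mysql flexible-server parameter set --resource-group myResourceGroup --server-name mytargetserver \
+  --name innodb_io_capacity_max --value 9000
+```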
+
+## Set up DMS
+
+With your target flexible server deployed and configured, you next need to set up DMS to migrate your single server to a flexible server.
+
+### Register the resource provider
+
+To register the Microsoft.DataMigration resource provider, perform the following steps.
+
+1. Before creating your first DMS instance, sign in to the Azure portal, and then search for and select **Subscriptions**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/1-subscriptions.png" alt-text="Screenshot of a Select subscriptions from Azure Marketplace.":::
+
+2. Select the subscription that you want to use to create the DMS instance, and then select **Resource providers**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/2-resource-provider.png" alt-text="Screenshot of a Select Resource Provider.":::
+
+3. Search for the term "Migration", and then, for **Microsoft.DataMigration**, select **Register**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/3-register.png" alt-text="Screenshot of a Register your resource provider.":::
+
+### Create a Database Migration Service (DMS) instance
+
+01. In the Azure portal, select + **Create a resource**, search for the term "Azure Database Migration Service", and then select **Azure Database Migration Service** from the drop-down list.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/4-dms-portal-marketplace.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
+
+02. On the **Azure Database Migration Service** screen, select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/5-dms-portal-marketplace-create.png" alt-text="Screenshot of a Create Azure Database Migration Service instance.":::
+
+03. On the **Select migration scenario and Database Migration Service** page, under **Migration scenario**, select **Azure Database for MySQL-Single Server** as the source server type, select **Azure Database for MySQL** as the target server type, and then select **Select**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/6-create-dms-service-scenario-online.png" alt-text="Screenshot of a Select Migration Scenario.":::
+
+04. On the **Create Migration Service** page, on the **Basics** tab, under **Project details**, select the appropriate subscription, and then select an existing resource group or create a new one.
+
+05. Under **Instance details**, specify a name for the service, select a region, and then verify that **Azure** is selected as the service mode.
+
+06. To the right of **Pricing tier**, select **Configure tier**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/7-project-details.png" alt-text="Screenshot of a Select Configure Tier.":::
+
+07. On the **Configure** page, select the pricing tier and number of vCores for your DMS instance, and then select **Apply**.
+ For more information on DMS costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-configure-pricing-tier.png" alt-text="Screenshot of a Select Pricing tier.":::
+
+ Next, we need to specify the VNet that will provide the DMS instance with access to the source single server and the target flexible server.
+
+08. On the **Create Migration Service** page, select **Next : Networking >>**.
+
+09. On the **Networking** tab, select an existing VNet from the list or provide the name of a new VNet to create, and then select **Review + Create**.
+ For more information, see the article [Create a virtual network using the Azure portal](./../virtual-network/quick-create-portal.md).
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/8-1-networking.png" alt-text="Screenshot of a Select Networking.":::
+
+ > [!IMPORTANT]
+ > Your VNet must be configured with access to both the source single server and the target flexible server, so be sure to:
+ >
+ > * Create a server-level firewall rule or [configure VNET service endpoints](./../mysql/single-server/how-to-manage-vnet-using-portal.md) for both the source and target Azure Database for MySQL servers to allow the VNet for Azure Database Migration Service access to the source and target databases.
+ > * Ensure that your VNet Network Security Group (NSG) rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and Azure Monitor. For more details about VNet NSG traffic filtering, see [Filter network traffic with network security groups](./../virtual-network/virtual-network-vnet-plan-design-arm.md).
+
+ > [!NOTE]
+ > If you want to add tags to the service, select Next : Tags to advance to the Tags tab first. Adding tags to the service is optional.
+
+10. Navigate to the **Review + create** tab, review the configurations, view the terms, and then select **Create**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/9-review-create.png" alt-text="Screenshot of a Select Review+Create.":::
+ Deployment of your instance of DMS now begins. The message **Deployment is in progress** appears for a few minutes, and then the message changes to **Your deployment is complete**.
+
+11. Select **Go to resource**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/9-1-go-to-resource.png" alt-text="Screenshot of a Select Go to resource.":::
+
+### Create a migration project
+
+To create a migration project, perform the following steps.
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
+
+2. In the search results, select the DMS instance that you just created, and then select + **New Migration Project**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/11-select-create.png" alt-text="Screenshot of a Select a new migration project.":::
+
+3. On the **New migration project** page, specify a name for the project, in the Source server type selection box, select **Azure Database For MySQL – Single Server**, in the Target server type selection box, select **Azure Database For MySQL**, in the **Migration activity type** selection box, select **Online migration**, and then select **Create and run activity**.
+ > [!NOTE]
+ > Selecting Create project only as the migration activity type will only create the migration project; you can then run the migration project at a later time.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/12-create-project-online.png" alt-text="Screenshot of a Create a new migration project.":::
+
+### Configure the migration project
+
+To configure your DMS migration project, perform the following steps.
+
+1. On the **Select source** screen, specify the connection details for the source MySQL instance.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/13-select-source-online.png" alt-text="Screenshot of a Add source details screen.":::
+
+2. Select **Next : Select target>>**, and then, on the **Select target** screen, specify the connection details for the target flexible server.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/15-select-target.png" alt-text="Screenshot of a Select target.":::
+
+3. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/16-select-db.png" alt-text="Screenshot of a Select databases.":::
+
+4. In the **Select databases** section, under **Source Database**, select the database(s) to migrate.
+ The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped. You can only select the source and target databases whose names match that on the source and target server.
+ If you select a database on the source server that doesn't exist on the target database, you will see a warning message 'Not available at Target' and you won't be able to select the database for migration.
+
+5. Select **Next : Select tables>>** to navigate to the Select tables tab.
+ Before the tab populates, DMS fetches the tables from the selected database(s) on the source and target and then determines whether the table exists and contains data.
+
+6. Select the tables that you want to migrate.
+ You can only select the source and target tables whose names match that on the source and target server.
+ If you select a table in the source database that doesn't exist on the target database, you will see a warning message 'Not available at Target' and you won't be able to select the table for migration.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/17-select-tables.png" alt-text="Screenshot of a Select Tables.":::
+
+ DMS validates your inputs, and if the validation passes, you will be able to start the migration.
+
+7. After configuring for schema migration, select **Next : Summary>>**.
+ > [!NOTE]
+ > You only need to navigate to the Configure migration settings tab if you are trying to troubleshoot failing migrations.
+
+8. On the **Summary** tab, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/18-summary-online.png" alt-text="Screenshot of a Select Summary.":::
+
+9. Select **Start migration**.
+ The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/19-running-project-online.png" alt-text="Screenshot of a Running status.":::
+
+### Monitor the migration
+
+1. On the migration activity screen, navigate to **Initial Load**, and then select **Refresh** to update the display and view the progress and the number of tables completed.
+
+2. On the migration activity screen, navigate to the **Replicate Data Changes** tab, and then select **Refresh** to update the display and view the seconds behind source.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/20-monitor-migration-online.png" alt-text="Screenshot of a Monitoring migration.":::
+
+3. After **Seconds behind source** reaches 0, start the cutover by selecting the **Start Cutover** menu tab at the top of the migration activity screen. Complete the steps in the cutover window before you perform the cutover. When all steps are completed, select **Confirm**, and then select **Apply**.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/21-complete-cutover-online.png" alt-text="Screenshot of a Perform cutover.":::
+
+## Perform post-migration activities
+
+When the migration is complete, be sure to complete the following post-migration activities.
+
+* Perform sanity testing of the application against the target database to certify the migration.
+* Update the connection string to point to the new target flexible server.
+* Delete the source single server after you have ensured application continuity.
+* If you scaled up the target flexible server for faster migration, scale it back by selecting the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores, as in the table below.
+
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Tier | Flexible Server Compute Size |
+ | - | - |:-:|:-:|
+ | Basic | 1 | Burstable | Standard_B1s |
+ | Basic | 2 | Burstable | Standard_B2s |
+ | General Purpose | 4 | General Purpose | Standard_D4ds_v4 |
+ | General Purpose | 8 | General Purpose | Standard_D8ds_v4 |
+* Clean up Data Migration Service resources:
+ 1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+ 2. Select your migration service instance from the search results and select **Delete service**.
+ 3. On the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the service, and then select **Delete**.
+
+## Migration best practices
+
+When performing a migration, be sure to keep the following best practices in mind.
+
+* As part of discovery and assessment, collect critical data such as the server SKU, CPU usage, storage, database sizes, and extensions usage to help with the migration.
+* Perform test migrations before migrating for production:
+ * Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing. The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the Replicate Data Changes phase with minimal lag, make your Flexible Server target the primary database server. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher MySQL version, test for application compatibility.
+ * After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss.
+* Redirect all dependent applications to access the new primary database and open the applications for production usage.
+* After the application starts running on the target flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+* For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
+* For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when performing migrations using DMS, see the article [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see the article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
New-AzEventGridSubscription `
-DeliverySchema CloudEventSchemaV1_0 ```
- Currently, you can't use an Event Grid trigger for an Azure Functions app when the event is delivered in the CloudEvents schema. Use an HTTP trigger. For examples of implementing an HTTP trigger that receives events in the CloudEvents schema, see [Using CloudEvents with Azure Functions](#azure-functions).
-
-## Endpoint validation with CloudEvents v1.0
+ ## Endpoint validation with CloudEvents v1.0
If you're already familiar with Event Grid, you might be aware of the endpoint validation handshake for preventing abuse. CloudEvents v1.0 implements its own [abuse protection semantics](webhook-event-delivery.md) by using the HTTP OPTIONS method. To read more about it, see [HTTP 1.1 Web Hooks for event delivery - Version 1.0](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#4-abuse-protection). When you use the CloudEvents schema for output, Event Grid uses the CloudEvents v1.0 abuse protection in place of the Event Grid validation event mechanism.
event-grid Consume Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md
Last updated 05/17/2022
Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). That is, there is no support if you have strict network isolation requirements where your delivered events traffic must not leave the private IP space. ## Use managed identity
-However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure event grid custom topic or a domain with system-managed identity. For details about delivering events using managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
+However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure Event Grid custom topic or a domain with system-assigned or user-assigned managed identity. For details about delivering events using managed identity, see [Event delivery using a managed identity](managed-service-identity.md).
Then, you can use a private link configured in Azure Functions or your webhook deployed on your virtual network to pull events. See the sample: [Connect to private endpoints with Azure Functions](/samples/azure-samples/azure-functions-private-endpoints/connect-to-private-endpoints-with-azure-functions/).
Under this configuration, the secured traffic from Event Grid to Event Hubs, Ser
## Deliver events to Event Hubs using managed identity To deliver events to event hubs in your Event Hubs namespace using managed identity, follow these steps:
-1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
+1. Enable system-assigned or user-assigned managed identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
1. [Add the identity to the **Azure Event Hubs Data Sender** role on the Event Hubs namespace](../event-hubs/authenticate-managed-identity.md#to-assign-azure-roles-using-the-azure-portal). 1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Event Hubs namespace](../event-hubs/event-hubs-service-endpoints.md#trusted-microsoft-services).
-1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses an event hub as an endpoint to use the system-assigned identity.
+1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses an event hub as an endpoint to use the system-assigned or user-assigned managed identity.
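+
+As a rough illustration of the role-assignment step above, you could use the Azure CLI. The topic name, resource group, and namespace ID below are placeholders, and the sketch assumes a system-assigned identity is already enabled on the custom topic:
+
+```bash
+# Get the principal ID of the topic's system-assigned identity (placeholder names).
+principalId=$(az eventgrid topic show --resource-group myResourceGroup --name mytopic \
+  --query identity.principalId -o tsv)
+
+# Grant that identity permission to send events to the Event Hubs namespace.
+az role assignment create \
+  --assignee "$principalId" \
+  --role "Azure Event Hubs Data Sender" \
+  --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.EventHub/namespaces/mynamespace
+```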
## Deliver events to Service Bus using managed identity To deliver events to Service Bus queues or topics in your Service Bus namespace using managed identity, follow these steps:
-1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
+1. Enable system-assigned or user-assigned managed identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
1. [Add the identity to the **Azure Service Bus Data Sender**](../service-bus-messaging/service-bus-managed-service-identity.md#azure-built-in-roles-for-azure-service-bus) role on the Service Bus namespace 1. [Enable the **Allow trusted Microsoft services to bypass this firewall** setting on your Service Bus namespace](../service-bus-messaging/service-bus-service-endpoints.md#trusted-microsoft-services).
-1. [Configure the event subscription](managed-service-identity.md) that uses a Service Bus queue or topic as an endpoint to use the system-assigned identity.
+1. [Configure the event subscription](managed-service-identity.md) that uses a Service Bus queue or topic as an endpoint to use the system-assigned or user-assigned managed identity.
## Deliver events to Storage using managed identity To deliver events to Storage queues using managed identity, follow these steps:
-1. Enable system-assigned identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
+1. Enable system-assigned or user-assigned managed identity: [system topics](enable-identity-system-topics.md), [custom topics, and domains](enable-identity-custom-topics-domains.md).
1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue.
-1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned identity.
+1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity.
## Next steps
event-grid Event Schema Resource Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-resource-groups.md
The following example shows the schema for a **ResourceActionSuccess** event. Th
```json [{ "subject": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.EventHub/namespaces/{namespace}/AuthorizationRules/RootManageSharedAccessKey",
- "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}"
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}",
"type": "Microsoft.Resources.ResourceActionSuccess", "time": "2018-10-08T22:46:22.6022559Z", "id": "{ID}",
guides Azure Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/guides/developer/azure-developer-guide.md
Service Fabric supports WebAPI with Open Web Interface for .NET (OWIN) and ASP.N
> > **Get started:** [Create your first Azure Service Fabric application](../../service-fabric/service-fabric-tutorial-create-dotnet-app.md).
-#### Azure Spring Cloud
+#### Azure Spring Apps
-Azure Spring Cloud is a serverless microservices platform that enables you to build, deploy, scale and monitor your applications in the cloud. Use Spring Cloud to bring modern microservice patterns to Spring Boot apps, eliminating boilerplate code to quickly build robust Java apps.
+Azure Spring Apps is a serverless app platform that enables you to build, deploy, scale and monitor your applications in the cloud. Use Spring Cloud to bring modern microservice patterns to Spring Boot apps, eliminating boilerplate code to quickly build robust Java apps.
* Leverage managed versions of Spring Cloud Service Discovery and Config Server, while we ensure those critical components are running in optimum conditions. * Focus on building your business logic and we will take care of your service runtime with security patches, compliance standards and high availability. * Manage application lifecycle (for example, deploy, start, stop, scale) on top of Azure Kubernetes Service. * Easily bind connections between your apps and Azure services such as Azure Database for MySQL and Azure Cache for Redis.
-* Monitor and troubleshoot microservices and applications using enterprise-grade unified monitoring tools that offer deep insights on application dependencies and operational telemetry.
+* Monitor and troubleshoot applications using enterprise-grade unified monitoring tools that offer deep insights on application dependencies and operational telemetry.
-> **When to use:** As a fully managed service Azure Spring Cloud is a good choice when you're minimizing operational cost running Spring Boot/Spring Cloud based microservices on Azure.
+> **When to use:** As a fully managed service Azure Spring Apps is a good choice when you're minimizing operational cost running Spring Boot and Spring Cloud apps on Azure.
> > **Get started:** [Deploy your first Spring Boot app in Azure Spring Apps](../../spring-apps/quickstart.md). - ### Enhance your applications with Azure services Along with application hosting, Azure provides service offerings that can enhance the functionality. Azure can also improve the development and maintenance of your applications, both in the cloud and on-premises.
Most applications must store data, so however you decide to host your applicatio
> > **Get started**: [Create a database in Azure SQL Database in minutes by using the Azure portal](/azure/azure-sql/database/single-database-create-quickstart). - You can use [Azure Data Factory](../../data-factory/introduction.md) to move existing on-premises data to Azure. If you aren't ready to move data to the cloud, [Hybrid Connections](../../app-service/app-service-hybrid-connections.md) in Azure App Service lets you connect your App Service hosted app to on-premises resources. You can also connect to Azure data and storage services from your on-premises applications. #### Docker support
Docker containers, a form of OS virtualization, let you deploy applications in a
Azure provides several ways to use containers in your applications. - * **Azure Kubernetes Service**: Lets you create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications. To learn more about Azure Kubernetes Service, see [Azure Kubernetes Service introduction](../../aks/intro-kubernetes.md). > **When to use**: When you need to build production-ready, scalable environments that provide additional scheduling and management tools, or when you're deploying a Docker Swarm cluster.
When you allow access to Azure resources, it's always a best practice to provide
> > **Get started**: To learn more, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -- **Service principal objects**: Along with providing access to user principals and groups, you can grant the same access to a service principal.
+* **Service principal objects**: Along with providing access to user principals and groups, you can grant the same access to a service principal.
> **When to use**: When you're programmatically managing Azure resources or granting access for applications. For more information, see [Create Active Directory application and service principal](../../active-directory/develop/howto-create-service-principal-portal.md).
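+
+As a quick illustration (the app name and scope below are placeholders, not values from this guide), you can create such a service principal with the Azure CLI and grant it a role scoped to a single resource group:
+
+```bash
+# Create a service principal and assign it the Contributor role on one resource group.
+az ad sp create-for-rbac \
+  --name myAppServicePrincipal \
+  --role Contributor \
+  --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup
+```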
healthcare-apis De Identified Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/de-identified-export.md
Title: Using the FHIR service to export de-identified data description: This article describes how to set up and use de-identified export-+
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Status | Date resolved | | :- | : | :- | :- |
-|Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on create, search, update, and delete operations. | May 2022 |No workaround | Not resolved |
+|Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |No workaround | Not resolved |
|The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic.ΓÇ»|April 2022 |Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571)|May 2022 | ## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Title: Azure Health Data Services monthly releases description: This article provides details about the Azure Health Data Services monthly features and enhancements. -+ Last updated 08/09/2022-+
Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services including the different service types (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## August 2022
+
+### FHIR service
+
+#### **Features**
+
+| Enhancements | Related information |
+| : | :- |
| Azure Health Data Services availability expands to new regions | Azure Health Data Services is now available in the following regions: Central India, Korea Central, and Sweden Central. |
| `$import` is generally available. | `$import` API is now generally available in Azure Health Data Services API version 2022-06-01. See [Executing the import](./../healthcare-apis/fhir/import-data.md) by invoking the `$import` operation on FHIR service in Azure Health Data Services. |
| `$convert-data` updated by adding STU3-R4 support. |`$convert-data` added support for FHIR STU3-R4 conversion. See [Data conversion for Azure API for FHIR](./../healthcare-apis/azure-api-for-fhir/convert-data.md). |
| Analytics pipeline now supports data filtering. | Data filtering is now supported in FHIR to data lake pipeline. See [Filter FHIR data in pipeline](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Filter%20FHIR%20data%20in%20pipeline.md). |
| Analytics pipeline now supports FHIR extensions. | Analytics pipeline can process FHIR extensions to generate parquet data. See [Process FHIR extensions](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Process%20FHIR%20extensions.md). |
+
+#### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | : |
+| History bundles were sorted with the oldest version first |We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per FHIR specification, the sorting of versions defaults to the oldest version last. <br><br>This bug fix, addresses FHIR server behavior for sorting history bundle. <br><br>We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP GET command utilized for retrieving history. <br><br>For example: `<server URL>/_history?_sort=_lastUpdated` <br><br>For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689).  |
+| Queries not providing consistent result count after appended with the `_sort` operator. | Issue is now fixed, and queries should provide a consistent result count, with and without the sort operator. |
+
+#### **Known issues**
+
+| Known Issue | Description |
+| : | :- |
+| Using [token type fields](https://www.hl7.org/fhir/search.html#token) of more than 128 characters in length can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | Currently, no workaround available. |
+
+For more information about the currently known issues with the FHIR service, see [Known issues: FHIR service](known-issues.md).
+
+### MedTech service
+
+#### **Features and enhancements**
+
+|Enhancements | Related information |
+| : | :- |
+|New Metric Chart |Customers can now see predefined metrics graphs in the MedTech landing page, complete with alerts to ease customers' burden of monitoring their MedTech service. |
+|Availability of Diagnostic Logs |There are now pre-defined queries with relevant logs for common issues so that customers can easily debug and diagnose issues in their MedTech service. |
+
+### DICOM service
+
+#### **Features and enhancements**
+
+|Enhancements | Related information |
+| : | :- |
+|Modality worklists (UPS-RS) is GA. |The modality worklists (UPS-RS) service is now generally available. Learn more about the [worklists service](https://github.com/microsoft/dicom-server/blob/main/docs/resources/conformance-statement.md#worklist-service-ups-rs). |
+ ## July 2022 ### FHIR service
Azure Health Data Services is a set of managed API services based on open standa
|Bug fixes |Related information | | :-- | : |
-| (Open Source) History bundles were sorted with the oldest version first. | We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per [FHIR specification](https://hl7.org/fhir/http.html#history), the sorting of versions defaults to the oldest version last. This bug fix, addresses FHIR server behavior for sorting history bundle.<br /><br />We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP `GET` command utilized for retrieving history. <br /><br />For example: `<Server URL>/_history?_sort=_lastUpdated` <br /><br />For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689).
+| (Open Source) History bundles were sorted with the oldest version first. | We've recently identified an issue with the sorting order of history bundles on FHIR® server. History bundles were sorted with the oldest version first. Per [FHIR specification](https://hl7.org/fhir/http.html#history), the sorting of versions defaults to the oldest version last. This bug fix, addresses FHIR server behavior for sorting history bundle.<br /><br />We understand if you would like to keep the sorting per existing behavior (oldest version first). To support existing behavior, we recommend you append `_sort=_lastUpdated` to the HTTP `GET` command utilized for retrieving history. <br /><br />For example: `<server URL>/_history?_sort=_lastUpdated` <br /><br />For more information, see [#2689](https://github.com/microsoft/fhir-server/pull/2689).
#### **Known issues**
For more information about the currently known issues with the FHIR service, see
|Bug fixes |Related information | | :-- | : | |Export Job not being queued for execution. |Fixes issue with export job not being queued due to duplicate job definition caused due to reference to container URL. For more information, see [#2648](https://github.com/microsoft/fhir-server/pull/2648). |
-|Queries not providing consistent result count after appended with the `_sort operator. |Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). |
+|Queries not providing consistent result count after appended with the `_sort` operator. |Fixes the issue with the help of distinct operator to resolve inconsistency and record duplication in response. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). |
## May 2022
iot-edge Tutorial Nested Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge-for-linux-on-windows.md
+
+ Title: Tutorial - Create a hierarchy of IoT Edge devices - Azure IoT Edge for Linux on Windows
+description: This tutorial shows you how to create a hierarchical structure of IoT Edge devices using gateways.
+++ Last updated : 08/04/2022++
+monikerRange: ">=iotedge-2020-11"
++
+# Tutorial: Create a hierarchy of IoT Edge devices using IoT Edge for Linux on Windows
++
+Deploy Azure IoT Edge nodes across networks organized in hierarchical layers. Each layer in a hierarchy is a gateway device that handles messages and requests from devices in the layer beneath it.
+
+You can structure a hierarchy of devices so that only the top layer has connectivity to the cloud, and the lower layers can only communicate with adjacent north and south layers. This network layering is the foundation of most industrial networks, which follow the [ISA-95 standard](https://en.wikipedia.org/wiki/ANSI/ISA-95).
+
+The goal of this tutorial is to create a hierarchy of IoT Edge devices that simulates a simplified production environment. At the end, you'll deploy the [Simulated Temperature Sensor module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.simulated-temperature-sensor) to a lower layer device without internet access by downloading container images through the hierarchy.
+
+To accomplish this goal, this tutorial walks you through creating a hierarchy of IoT Edge devices using IoT Edge for Linux on Windows, deploying IoT Edge runtime containers to your devices, and configuring your devices locally. In this tutorial, you use an automated configuration tool to:
+
+> [!div class="checklist"]
+>
+> * Create and define the relationships in a hierarchy of IoT Edge devices.
+> * Configure the IoT Edge runtime on the devices in your hierarchy.
+> * Install consistent certificates across your device hierarchy.
+> * Add workloads to the devices in your hierarchy.
+> * Use the [IoT Edge API Proxy module](https://azuremarketplace.microsoft.com/marketplace/apps/azure-iot.azureiotedge-api-proxy?tab=Overview) to securely route HTTP traffic over a single port from your lower layer devices.
+
+>[!TIP]
+>This tutorial includes a mixture of manual and automated steps to provide a showcase of nested IoT Edge features.
+>
+>If you would like an entirely automated look at setting up a hierarchy of IoT Edge devices, you can base your own script on the scripted [Azure IoT Edge for Industrial IoT sample](https://aka.ms/iotedge-nested-sample). This scripted scenario deploys Azure virtual machines as preconfigured devices to simulate a factory environment.
+>
+>If you would like an in-depth look at the manual steps to create and manage a hierarchy of IoT Edge devices, see [the how-to guide on IoT Edge device gateway hierarchies](how-to-connect-downstream-iot-edge-device.md).
+
+In this tutorial, the following network layers are defined:
+
+* **Top layer**: IoT Edge devices at this layer can connect directly to the cloud.
+
+* **Lower layers**: IoT Edge devices at layers below the top layer can't connect directly to the cloud. They need to go through one or more intermediary IoT Edge devices to send and receive data.
+
+This tutorial uses a two-device hierarchy for simplicity, pictured below. One device, the **top layer device**, represents a device at the top layer of the hierarchy, which can connect directly to the cloud. This device will also be referred to as the **parent device**. The other device, the **lower layer device**, represents a device at the lower layer of the hierarchy, which can't connect directly to the cloud. You can add more lower layer devices to represent your production environment, as needed. Devices at lower layers will also be referred to as **child devices**.
+
+![Diagram that shows the structure of the tutorial hierarchy, containing two devices: the top layer device and the lower layer device.](./media/tutorial-nested-iot-edge/tutorial-hierarchy-diagram.png)
+
+## Prerequisites
+
+To create a hierarchy of IoT Edge devices, you'll need:
+
+* Two Windows devices running Azure IoT Edge for Linux on Windows. Both devices should be deployed using an **external virtual switch**.
+
+> [!TIP]
+> It's possible to use an **internal** or **default** virtual switch if port forwarding is configured on the Windows host OS. However, for simplicity, both devices in this tutorial should use an **external** virtual switch and be connected to the same external network.
+>
+> For more information about networking, see [Azure IoT Edge for Linux on Windows networking](./iot-edge-for-linux-on-windows-networking.md) and [Networking configuration for Azure IoT Edge for Linux on Windows](./how-to-configure-iot-edge-for-linux-on-windows-networking.md).
+>
+> If you need to set up the EFLOW devices on a DMZ, see [How to configure Azure IoT Edge for Linux on Windows Industrial IoT & DMZ configuration](how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md).
+
+* An Azure account with a valid subscription. If you don't have an [Azure subscription](../guides/developer/azure-developer-guide.md#understanding-accounts-subscriptions-and-billing), create a [free account](https://azure.microsoft.com/free/) before you begin.
+* A free or standard tier [IoT Hub](../iot-hub/iot-hub-create-through-portal.md) in Azure.
+* Make sure that the following ports are open inbound for all devices except the lowest layer device: 443, 5671, 8883:
+ * 443: Used between parent and child edge hubs for REST API calls and to pull docker container images.
+ * 5671, 8883: Used for AMQP and MQTT.
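+
+To confirm these ports are reachable before you continue, you can run a quick check like the following from a Linux shell on a lower layer device. This is a sketch only, not an official tutorial step: the parent address is a placeholder, and the parent only listens on these ports once its IoT Edge modules are running.
+
+```bash
+# Placeholder - replace with the IP or FQDN of the parent (top layer) device.
+PARENT="<parent-device-ip-or-fqdn>"
+
+# Try to open a TCP connection to each required port (443, 5671, 8883).
+for port in 443 5671 8883; do
+  if timeout 3 bash -c "</dev/tcp/$PARENT/$port" 2>/dev/null; then
+    echo "Port $port on $PARENT is reachable"
+  else
+    echo "Port $port on $PARENT is NOT reachable"
+  fi
+done
+```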
+
+> [!TIP]
+> For more information on EFLOW virtual machine firewall, see [IoT Edge for Linux on Windows security](iot-edge-for-linux-on-windows-security.md).
+
+## Configure your IoT Edge device hierarchy
+
+### Create a hierarchy of IoT Edge devices
+
+IoT Edge devices make up the layers of your hierarchy. This tutorial will create a hierarchy of two IoT Edge devices: the **top layer device** and its child, the **lower layer device**. You can create additional child devices as needed.
+
+To create and configure your hierarchy of IoT Edge devices, you'll use the `iotedge-config` tool. This tool simplifies the configuration of the hierarchy by automating and condensing several steps into two:
+
+1. Setting up the cloud configuration and preparing each device configuration, which includes:
+
+ * Creating devices in your IoT Hub
+ * Setting the parent-child relationships to authorize communication between devices
+ * Generating a chain of certificates for each device to establish secure communication between them
+ * Generating configuration files for each device
+
+1. Installing each device configuration, which includes:
+
+ * Installing certificates on each device
+ * Applying the configuration files for each device
+
+The `iotedge-config` tool will also make the module deployments to your IoT Edge device automatically.
+
+To use the `iotedge-config` tool to create and configure your hierarchy, follow the steps below in the **top layer IoT Edge for Linux on Windows device**:
+
+1. Sign in to [Azure Cloud Shell](/azure/cloud-shell/quickstart.md) and start a new Bash session.
+
+1. Make a directory for your tutorial's resources:
+
+ ```bash
+ mkdir nestedIotEdgeTutorial
+ ```
+
+1. Download the release of the configuration tool and configuration templates:
+
+ ```bash
+ cd ~/nestedIotEdgeTutorial
+ wget -O iotedge_config.tar "https://github.com/Azure-Samples/iotedge_config_cli/releases/download/latest/iotedge_config_cli.tar.gz"
+ tar -xvf iotedge_config.tar
+ ```
+
+ This will create the `iotedge_config_cli_release` folder in your tutorial directory.
+
+ The template file used to create your device hierarchy is the `iotedge_config.yaml` file found in `~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial`. In the same directory, `deploymentLowerLayer.json` is a JSON deployment file containing instructions for which modules to deploy to your **lower layer device**. The `deploymentTopLayer.json` file is the same, but for your **top layer device**, as the modules deployed to each device aren't the same. The `device_config.toml` file is a template for IoT Edge device configurations and will be used to automatically generate the configuration bundles for the devices in your hierarchy.
+
+ If you'd like to take a look at the source code and scripts for the `iotedge-config` tool, check out [the Azure-Samples repository on GitHub](https://github.com/Azure-Samples/iotedge_config_cli).
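+
+ As a quick sanity check, you can list the tutorial templates that the archive extracted. This is only a sketch; the exact file list can vary by release of the tool:
+
+ ```bash
+ # List the tutorial templates extracted from the release archive.
+ ls ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial
+ # Expect files such as iotedge_config.yaml, deploymentTopLayer.json,
+ # deploymentLowerLayer.json, and device_config.toml.
+ ```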
+
+1. Open the tutorial configuration template and edit it with your information:
+
+ ```bash
+ code ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/iotedge_config.yaml
+ ```
+
+ In the **iothub** section, populate the `iothub_hostname` and `iothub_name` fields with your information. This information can be found on the overview page of your IoT Hub on the Azure portal.
+
+ In the optional **certificates** section, you can populate the fields with the absolute paths to your certificate and key. If you leave these fields blank, the script will automatically generate self-signed test certificates for your use. If you're unfamiliar with how certificates are used in a gateway scenario, check out [the how-to guide's certificate section](how-to-connect-downstream-iot-edge-device.md#generate-certificates).
+
+ In the **configuration** section, the `template_config_path` is the path to the `device_config.toml` template used to create your device configurations. The `default_edge_agent` field determines what Edge Agent image lower layer devices will pull and from where.
+
+ In the **edgedevices** section, for a production scenario, you can edit the hierarchy tree to reflect your desired structure. For the purposes of this tutorial, accept the default tree. For each device, there's a `device_id` field, where you can name your devices. There's also the `deployment` field, which specifies the path to the deployment JSON for that device.
+
+ You can also manually register IoT Edge devices in your IoT Hub through the Azure portal, Azure Cloud Shell, or Visual Studio Code. To learn how, see [the beginning of the end-to-end guide on manually provisioning a Linux IoT Edge device](how-to-provision-single-device-linux-symmetric.md#register-your-device).
+
+ You can define the parent-child relationships manually as well. See the [create a gateway hierarchy](how-to-connect-downstream-iot-edge-device.md#create-a-gateway-hierarchy) section of the how-to guide to learn more.
+
+ ![Screenshot of the edgedevices section of the configuration file allows you to define your hierarchy.](./media/tutorial-nested-iot-edge/hierarchy-config-sample.png)
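+
+ For orientation only, here's a minimal sketch of the fields called out above, written to a scratch file so the snippet stays runnable. This isn't the released template: field names that the tutorial doesn't call out (for example, the certificate key names and the `child` nesting) are illustrative guesses, and the exact keys and layout can differ by release.
+
+ ```bash
+ # Sketch only - the real file to edit is templates/tutorial/iotedge_config.yaml.
+ cat <<'EOF' > /tmp/iotedge_config_sketch.yaml
+ iothub:
+   iothub_hostname: my-iot-hub.azure-devices.net   # from the IoT Hub overview page
+   iothub_name: my-iot-hub
+
+ certificates:                  # optional; leave blank to auto-generate test certificates
+   root_ca_cert_path: ""        # illustrative key name
+   root_ca_cert_key_path: ""    # illustrative key name
+
+ configuration:
+   template_config_path: "./templates/tutorial/device_config.toml"
+   default_edge_agent: "$upstream:443/azureiotedge-agent:1.2"
+
+ edgedevices:                   # hierarchy tree; the nesting shown here is illustrative
+   device_id: top-layer
+   deployment: "./templates/tutorial/deploymentTopLayer.json"
+   child:
+     - device_id: lower-layer
+       deployment: "./templates/tutorial/deploymentLowerLayer.json"
+ EOF
+ ```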
+
+1. Save and close the file:
+
+ `CTRL + S`, `CTRL + Q`
+
+1. Create an outputs directory for the configuration bundles in your tutorial resources directory:
+
+ ```bash
+ mkdir ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs
+ ```
+
+1. Navigate to your `iotedge_config_cli_release` directory and run the tool to create your hierarchy of IoT Edge devices:
+
+ ```bash
+ cd ~/nestedIotEdgeTutorial/iotedge_config_cli_release
+ ./iotedge_config --config ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/iotedge_config.yaml --output ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs -f
+ ```
+
+ With the `--output` flag, the tool creates the device certificates, certificate bundles, and a log file in a directory of your choice. With the `-f` flag set, the tool will automatically look for existing IoT Edge devices in your IoT Hub and remove them, to avoid errors and keep your hub clean.
+
+ The configuration tool creates your IoT Edge devices and sets up the parent-child relationships between them. Optionally, it creates certificates for your devices to use. If paths to deployment JSONs are provided, the tool will automatically create these deployments to your devices, but this isn't required. Finally, the tool will generate the configuration bundles for your devices and place them in the output directory. For a thorough look at the steps taken by the configuration tool, see the log file in the output directory.
+
+ ![Screenshot of the output of the script will display a topology of your hierarchy upon execution.](./media/tutorial-nested-iot-edge/successful-setup-tool-run.png)
+
+1. Navigate to your `outputs` directory and download the _lower-layer.zip_ and _top-layer.zip_ files.
+ ```bash
+ download ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs/lower-layer.zip
+ download ~/nestedIotEdgeTutorial/iotedge_config_cli_release/outputs/top-layer.zip
+ ```
+Double-check that the topology output from the script looks correct. Once you're satisfied your hierarchy is correctly structured, you're ready to proceed.
+
+### Configure the IoT Edge runtime
+
+In addition to the provisioning of your devices, the configuration steps establish trusted communication between the devices in your hierarchy using the certificates you created earlier. The steps also begin to establish the network structure of your hierarchy. The top layer device will maintain internet connectivity, allowing it to pull images for its runtime from the cloud, while lower layer devices will route through the top layer device to access these images.
+
+To configure the IoT Edge runtime, you need to apply the configuration bundles created by the setup script to your devices. The configurations slightly differ between the **top layer device** and a **lower layer device**, so be mindful of which device's configuration file you're applying to each device.
+
+Each device needs its corresponding configuration bundle. You can use a USB drive or [secure file copy](https://www.ssh.com/ssh/scp/) to move the configuration bundles to each device. You'll first need to copy the configuration bundle to the Windows host OS of each EFLOW device and then copy it to the EFLOW VM.
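+
+For example, if you use secure file copy and the Windows host has an SSH server enabled, the copy might look like the following sketch. The user names, host names, and destination paths are placeholders; run it from wherever you downloaded the configuration bundles:
+
+```bash
+# Placeholders - adjust user, host, and destination path for your environment.
+scp ./top-layer.zip azureuser@top-layer-windows-host:'C:/Temp/top-layer.zip'
+scp ./lower-layer.zip azureuser@lower-layer-windows-host:'C:/Temp/lower-layer.zip'
+```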
+
+> [!WARNING]
+> Be sure to send the correct configuration bundle to each device.
+
+##### Top-layer device configuration
+
+1. Connect to your _top-layer_ Windows host device and copy the _top-layer.zip_ file to it.
+
+1. Unzip the _top-layer.zip_ configuration bundle and check that all data was correctly configured.
+
+1. Start an elevated _PowerShell_ session using **Run as Administrator**.
+
+1. Move to the _top-layer_ directory
+
+1. Copy all the content of the _top-layer_ directory into the EFLOW VM
+ ```powershell
+ Copy-EflowVmFile -fromFile *.* -tofile ~/ -pushFile
+ ```
+
+1. Connect to your EFLOW virtual machine
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. Get the EFLOW virtual machine IP address - Check for the _inet addr_ field.
+
+ ```bash
+ ifconfig eth0
+ ```
+
+ > [!NOTE]
+ > On the **top layer device**, you will receive a prompt to enter the hostname. Supply the appropriate IP or FQDN. You can use either, but be consistent in your choice across devices.
+
+1. Run the _install.sh_ script. When asked for the _hostname_, use the IP address obtained in the previous step.
+ ```bash
+ sudo sh ./install.sh
+ ```
+ The output of the install script is pictured below.
+
+ ![Screenshot of the output of the script installing the configuration bundles will update the config.toml files on your device and restart all IoT Edge services automatically.](./media/tutorial-nested-iot-edge/configuration-install-output.png)
+
+1. Apply the correct certificate permissions and restart the IoT Edge runtime.
+
+ ```bash
+ sudo chmod -R 755 /etc/aziot/certificates/
+ sudo iotedge system restart
+ ```
+
+1. Check that all IoT Edge services are running correctly.
+
+ ```bash
+ sudo iotedge system status
+ ```
+
+1. Finally, add the appropriate firewall rules to enable connectivity between the lower-layer device and top-layer device.
+
+ ```bash
+ sudo iptables -A INPUT -p tcp --dport 5671 -j ACCEPT
+ sudo iptables -A INPUT -p tcp --dport 8883 -j ACCEPT
+ sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
+ sudo iptables -A INPUT -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
+ sudo iptables-save | sudo tee /etc/systemd/scripts/ip4save
+ ```
+
+1. Run the configuration and connectivity checks on your devices.
+
+ ```bash
+ sudo iotedge check
+ ```
+
+If you want a closer look at what modifications are being made to your device's configuration file, see [the configure IoT Edge on devices section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#configure-parent-device).
+
+##### Lower-layer device configuration
+
+1. Connect to your _lower-layer_ Windows host device and copy the _lower-layer.zip_ file to it.
+
+1. Unzip the _lower-layer.zip_ configuration bundle and check that all data was correctly configured.
+
+1. Start an elevated _PowerShell_ session using **Run as Administrator**.
+
+1. Move to the _lower-layer_ directory
+
+1. Copy all the content of the _lower-layer_ directory into the EFLOW VM
+ ```powershell
+ Copy-EflowVmFile -fromFile *.* -tofile ~/ -pushFile
+ ```
+
+1. Connect to your EFLOW virtual machine
+ ```powershell
+ Connect-EflowVm
+ ```
+
+1. Check that you can ping the top-layer device EFLOW VM using either the FQDN or IP address, based on what you used as the _hostname_ configuration.
+ ```powershell
+ ping <top-layer-device-hostname>
+ ```
+
+ If everything was correctly configured, you should be able to see the ping responses from the top-layer device.
+
+1. Get the EFLOW virtual machine IP address - Check for the _inet addr_ field.
+
+ ```bash
+ ifconfig eth0
+ ```
+
+ >[!NOTE]
+ > On the **lower layer device**, you will receive a prompt to enter the hostname and the parent hostname. Supply the appropriate **top-layer device** IP or FQDN. You can use either, but be consistent in your choice across devices.
+
+1. Run the _install.sh_ script. When asked for the _hostname_, use the IP address obtained in the previous step.
+ ```bash
+ sudo sh ./install.sh
+ ```
+
+1. Apply the correct certificate permissions and restart the IoT Edge runtime.
+ ```bash
+ sudo chmod -R 755 /etc/aziot/certificates/
+ sudo iotedge system restart
+ ```
+
+1. Check that all IoT Edge services are running correctly.
+ ```bash
+ sudo iotedge system status
+ ```
+
+1. Run the configuration and connectivity checks on your devices. For the **lower layer device**, the diagnostics image needs to be manually passed in the command:
+
+ ```bash
+ sudo iotedge check --diagnostics-image-name <parent_device_fqdn_or_ip>:443/azureiotedge-diagnostics:1.2
+ ```
+
+If you completed the preceding steps correctly, you can verify that your devices are configured as expected. Once you're satisfied with the configuration on each device, you're ready to proceed.
+
+## Deploy modules to your devices
+
+The module deployments to your devices were automatically generated when the devices were created. The `iotedge-config-cli` tool fed deployment JSONs for the **top and lower layer devices** to IoT Hub after the devices were created. The module deployments were pending while you configured the IoT Edge runtime on each device. Once you configured the runtime, the deployments to the **top layer device** began. After those deployments completed, the **lower layer device** could use the **IoT Edge API Proxy** module to pull its necessary images.
+
+In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **top layer device's** deployment JSON to understand what modules were deployed to your device:
+
+ ```bash
+ cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentTopLayer.json
+ ```
+
+In addition to the runtime modules **IoT Edge Agent** and **IoT Edge Hub**, the **top layer device** receives the **Docker registry** module and the **IoT Edge API Proxy** module.
+
+The **Docker registry** module points to an existing Azure Container Registry. In this case, `REGISTRY_PROXY_REMOTEURL` points to the Microsoft Container Registry. By default, **Docker registry** listens on port 5000.
+
+The **IoT Edge API Proxy** module routes HTTP requests to other modules, allowing lower layer devices to pull container images or push blobs to storage. In this tutorial, it communicates on port 443 and is configured to route Docker container image pull requests to your **Docker registry** module on port 5000, and any blob storage upload requests to the AzureBlobStorageonIoTEdge module on port 11002. For more information about the **IoT Edge API Proxy** module and how to configure it, see the module's [how-to guide](how-to-configure-api-proxy-module.md).
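+
+If you want to inspect these settings yourself, the following is a hedged sketch that uses `jq` (preinstalled in Azure Cloud Shell) to list each module and its container image. It assumes the deployment file follows the standard IoT Edge deployment manifest layout (`modulesContent` and `properties.desired`), which could differ slightly in the released sample:
+
+```bash
+# List the modules defined for the top layer device and their container images.
+jq -r '.modulesContent["$edgeAgent"]["properties.desired"]
+       | (.systemModules + .modules)
+       | to_entries[]
+       | "\(.key): \(.value.settings.image)"' \
+  ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentTopLayer.json
+```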
+
+If you'd like a look at how to create a deployment like this through the Azure portal or Azure Cloud Shell, see [top layer device section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-top-layer-devices).
+
+In the [Azure Cloud Shell](https://shell.azure.com/), you can take a look at the **lower layer device's** deployment JSON to understand what modules were deployed to your device:
+
+ ```bash
+ cat ~/nestedIotEdgeTutorial/iotedge_config_cli_release/templates/tutorial/deploymentLowerLayer.json
+ ```
+
+You can see under `systemModules` that the **lower layer device's** runtime modules are set to pull from `$upstream:443`, instead of `mcr.microsoft.com`, as the **top layer device** did. The **lower layer device** sends Docker image requests to the **IoT Edge API Proxy** module on port 443, as it can't directly pull the images from the cloud. The other module deployed to the **lower layer device**, the **Simulated Temperature Sensor** module, also makes its image request to `$upstream:443`.
+
+If you'd like a look at how to create a deployment like this through the Azure portal or Azure Cloud Shell, see [lower layer device section of the how-to guide](how-to-connect-downstream-iot-edge-device.md#deploy-modules-to-lower-layer-devices).
+
+You can view the status of your modules using the command:
+
+ ```azurecli
+ az iot hub module-twin show --device-id <edge_device_id> --module-id '$edgeAgent' --hub-name <iot_hub_name> --query "properties.reported.[systemModules, modules]"
+ ```
+
+ This command will output all the edgeAgent reported properties. Here are some helpful ones for monitoring the status of the device: *runtime status*, *runtime start time*, *runtime last exit time*, *runtime restart count*.
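+
+ As a hedged variation on the previous command (the device and hub names are placeholders), you can narrow the output to just each module's reported runtime status:
+
+ ```bash
+ # Project the reported runtimeStatus of every module listed by $edgeAgent.
+ az iot hub module-twin show \
+   --device-id <edge_device_id> \
+   --module-id '$edgeAgent' \
+   --hub-name <iot_hub_name> \
+   --query "properties.reported.modules.*.runtimeStatus"
+ ```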
+
+You can also see the status of your modules on the [Azure portal](https://portal.azure.com/). Navigate to the **IoT Edge** section of your IoT Hub to see your devices and modules.
+
+Once you're satisfied with your module deployments, you're ready to proceed.
+
+## View generated data
+
+The **Simulated Temperature Sensor** module that you pushed generates sample environment data. It sends messages that include ambient temperature and humidity, machine temperature and pressure, and a timestamp.
+
+You can also view these messages through the [Azure Cloud Shell](https://shell.azure.com/):
+
+ ```azurecli-interactive
+ az iot hub monitor-events -n <iothub_name> -d <lower-layer-device-name>
+ ```
+
+## Troubleshooting
+
+Run the `iotedge check` command to verify the configuration and to troubleshoot errors.
+
+You can run `iotedge check` in a nested hierarchy, even if the child machines don't have direct internet access.
+
+When you run `iotedge check` from the lower layer, the program tries to pull the image from the parent through port 443.
+
+```bash
+sudo iotedge check --diagnostics-image-name $upstream:443/azureiotedge-diagnostics:1.2
+```
+
+The `azureiotedge-diagnostics` value is pulled from the container registry that's linked with the registry module. This tutorial has it set by default to https://mcr.microsoft.com:
+
+| Name | Value |
+| - | - |
+| `REGISTRY_PROXY_REMOTEURL` | `https://mcr.microsoft.com` |
+
+If you're using a private container registry, make sure that all the images (IoTEdgeAPIProxy, edgeAgent, edgeHub, Simulated Temperature Sensor, and diagnostics) are present in the container registry.
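+
+As a hedged sketch only (the registry name and image tags below are placeholders; match the versions your deployment manifests actually reference), you could copy the public images into a private Azure Container Registry with `az acr import`:
+
+```bash
+# Placeholder registry name; repeat for every image your deployments reference.
+ACR_NAME="myprivateregistry"
+
+for image in \
+  azureiotedge-agent:1.2 \
+  azureiotedge-hub:1.2 \
+  azureiotedge-api-proxy:1.1 \
+  azureiotedge-diagnostics:1.2 \
+  azureiotedge-simulated-temperature-sensor:1.0
+do
+  az acr import --name "$ACR_NAME" \
+    --source "mcr.microsoft.com/${image}" \
+    --image "${image}"
+done
+```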
+
+## Clean up resources
+
+You can delete the local configurations and the Azure resources that you created in this article to avoid charges.
+
+To delete the resources:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
+
+2. Select the name of the resource group that contains your IoT Edge test resources.
+
+3. Review the list of resources contained in your resource group. If you want to delete all of them, you can select **Delete resource group**. If you want to delete only some of them, you can click into each resource to delete them individually.
+
+## Next steps
+
+In this tutorial, you configured two IoT Edge devices as gateways and set one as the parent device of the other. Then, you demonstrated pulling a container image onto the child device through a gateway using the IoT Edge API Proxy module. See [the how-to guide on the proxy module's use](how-to-configure-api-proxy-module.md) if you want to learn more.
+
+To learn more about using gateways to create hierarchical layers of IoT Edge devices, see [the how-to guide on connecting downstream IoT Edge devices](how-to-connect-downstream-iot-edge-device.md).
+
+To see how Azure IoT Edge can create more solutions for your business, continue on to the other tutorials.
+
+> [!div class="nextstepaction"]
+> [Deploy an Azure Machine Learning model as a module](tutorial-deploy-machine-learning.md)
iot-hub Iot Hub Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-powershell.md
You can use Azure PowerShell cmdlets to create and manage Azure IoT hubs. This t
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
+Alternatively, you can use Azure Cloud Shell, if you'd rather not install additional modules onto your machine. The following section gets you started with Azure Cloud Shell.
+ [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)] ## Connect to your Azure subscription
-If you're using the Cloud Shell, you're already logged in to your subscription. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
+If you're using Cloud Shell, you're already logged in to your subscription, so you can skip this step. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
```powershell # Log into Azure account.
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
# IoT Hub support for managed identities
-Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. This eliminates the needs for developers having to manage credentials by providing an identity. There are two types of managed identities: system-assigned and user-assigned. IoT Hub supports both.
+Managed identities provide Azure services with an automatically managed identity in Azure AD in a secure manner. This eliminates the need for developers having to manage credentials by providing an identity. There are two types of managed identities: system-assigned and user-assigned. IoT Hub supports both.
In IoT Hub, managed identities can be used for egress connectivity from IoT Hub to other Azure services for features such as [message routing](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [bulk device import/export](iot-hub-bulk-identity-mgmt.md). In this article, you learn how to use system-assigned and user-assigned managed identities in your IoT hub for different functionalities. ## Prerequisites -- Read the documentation of [managed identities for Azure resources](./../active-directory/managed-identities-azure-resources/overview.md) to understand the differences between system-assigned and user-assigned managed identity.
+- Understand the managed identity differences between *system-assigned* and *user-assigned* in [What are managed identities for Azure resources?](./../active-directory/managed-identities-azure-resources/overview.md)
-- If you donΓÇÖt have an IoT hub, [create one](iot-hub-create-through-portal.md) before continuing.
+- An [IoT hub](iot-hub-create-through-portal.md)
## System-assigned managed identity
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
You can use the [IoT Hub resource provider REST API](/rest/api/iothub/iothubreso
## Prerequisites
-* Visual Studio.
+* Visual Studio
-* [Azure PowerShell](/powershell/azure/install-Az-ps).
+* [Azure PowerShell module](/powershell/azure/install-az-ps)
[!INCLUDE [iot-hub-prepare-resource-manager](../../includes/iot-hub-prepare-resource-manager.md)]
iot-hub Iot Hub Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template.md
You can use Azure Resource Manager to create and manage Azure IoT hubs programma
To complete this tutorial, you need the following:
-* Visual Studio.
-* An active Azure account. <br/>If you don't have an account, you can create a [free account][lnk-free-trial] in just a couple of minutes.
-* An [Azure Storage account][lnk-storage-account] where you can store your Azure Resource Manager template files.
-* [Azure PowerShell 1.0][lnk-powershell-install] or later.
+* Visual Studio
+* An [Azure Storage account][lnk-storage-account] where you can store your Azure Resource Manager template files
+* [Azure PowerShell module][lnk-powershell-install]
[!INCLUDE [iot-hub-prepare-resource-manager](../../includes/iot-hub-prepare-resource-manager.md)]
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
* Limit of 100 IP addresses in the backend pool for IP based LBs * The backend resources must be in the same virtual network as the load balancer for IP based LBs * A load balancer with IP based Backend Pool canΓÇÖt function as a Private Link service
+ * [Private endpoint resources](/azure/private-link/private-endpoint-overview) can't be placed in an IP based backend pool
* ACI containers aren't currently supported by IP based LBs * Load balancers or services such as Application Gateway canΓÇÖt be placed in the backend pool of the load balancer * Inbound NAT Rules canΓÇÖt be specified by IP address
logic-apps Logic Apps Exception Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-exception-handling.md
Previously updated : 08/23/2022 Last updated : 09/07/2022 # Handle errors and exceptions in Azure Logic Apps
The way that any integration architecture appropriately handles downtime or issu
For the most basic exception and error handling, you can use the *retry policy* when supported on a trigger or action, such as the [HTTP action](logic-apps-workflow-actions-triggers.md#http-trigger). If the trigger or action's original request times out or fails, resulting in a 408, 429, or 5xx response, the retry policy specifies that the trigger or action resend the request per policy settings.
-### Retry policy types
-
-By default, the retry policy is set to the **Default** type.
-
-| Retry policy | Description |
-|--|-|
-| **Default** | This policy sends up to 4 retries at *exponentially increasing* intervals, which scale by 7.5 seconds but are capped between 5 and 45 seconds. For more information, review the [Default](#default) policy type. |
-| **None** | Don't resend the request. For more information, review the [None](#none) policy type. |
-| **Exponential Interval** | This policy waits a random interval, which is selected from an exponentially growing range before sending the next request. For more information, review the [Exponential Interval](#exponential-interval) policy type. |
-| **Fixed Interval** | This policy waits the specified interval before sending the next request. For more information, review the [Fixed Interval](#fixed-interval) policy type. |
-|||
- <a name="retry-policy-limits"></a> ### Retry policy limits For more information about retry policies, settings, limits, and other options, review [Retry policy limits](logic-apps-limits-and-config.md#retry-policy-limits).
+### Retry policy types
+
+Connector operations that support retry policies use the **Default** policy unless you select a different retry policy.
+
+| Retry policy | Description |
+|--|-|
+| **Default** | For most operations, the **Default** retry policy is an [exponential interval policy](#exponential-interval) that sends up to 4 retries at *exponentially increasing* intervals. These intervals scale by 7.5 seconds but are capped between 5 and 45 seconds. Several operations use a different **Default** retry policy, such as a [fixed interval policy](#fixed-interval). For more information, review the [Default retry policy type](#default). |
+| **None** | Don't resend the request. For more information, review [None - No retry policy](#none). |
+| **Exponential Interval** | This policy waits a random interval, which is selected from an exponentially growing range before sending the next request. For more information, review the [exponential interval policy type](#exponential-interval). |
+| **Fixed Interval** | This policy waits the specified interval before sending the next request. For more information, review the [fixed interval policy type](#fixed-interval). |
+ ### Change retry policy type in the designer 1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer.
-1. Based on your [logic app type](logic-apps-overview.md#resource-environment-differences), open the trigger or action's **Settings**.
+1. Based on whether you're working on a Consumption or Standard workflow, open the trigger or action's **Settings**.
* **Consumption**: On the action shape, open the ellipses menu (**...**), and select **Settings**.
For more information about retry policies, settings, limits, and other options,
| `type` | <*retry-policy-type*> | String | The retry policy type to use: `default`, `none`, `fixed`, or `exponential` | | `count` | <*retry-attempts*> | Integer | For `fixed` and `exponential` policy types, the number of retry attempts, which is a value from 1 - 90. For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). | | `interval`| <*retry-interval*> | String | For `fixed` and `exponential` policy types, the retry interval value in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). For the `exponential` policy, you can also specify [optional maximum and minimum intervals](#optional-max-min-intervals). For more information, review [Fixed Interval](#fixed-interval) and [Exponential Interval](#exponential-interval). <br><br>**Consumption**: 5 seconds (`PT5S`) to 1 day (`P1D`). <br>**Standard**: For stateful workflows, 5 seconds (`PT5S`) to 1 day (`P1D`). For stateless workflows, 1 second (`PT1S`) to 1 minute (`PT1M`). |
- |||||
<a name="optional-max-min-intervals"></a>
For more information about retry policies, settings, limits, and other options,
|-|-||-| | `maximumInterval` | <*maximum-interval*> | String | For the `exponential` policy, the largest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 1 day (`P1D`). For more information, review [Exponential Interval](#exponential-interval). | | `minimumInterval` | <*minimum-interval*> | String | For the `exponential` policy, the smallest interval for the randomly selected interval in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The default value is 5 seconds (`PT5S`). For more information, review [Exponential Interval](#exponential-interval). |
- |||||
<a name="default"></a> #### Default retry policy
-If you don't specify a retry policy, the action uses the default policy. The default is actually an [exponential interval policy](#exponential-interval) that sends up to four retries at exponentially increasing intervals, which scales by 7.5 seconds. The interval is capped between 5 and 45 seconds.
+Connector operations that support retry policies use the **Default** policy unless you select a different retry policy. For most operations, the **Default** retry policy is an exponential interval policy that sends up to 4 retries at *exponentially increasing* intervals. These intervals scale by 7.5 seconds but are capped between 5 and 45 seconds. Several operations use a different **Default** retry policy, such as a fixed interval policy.
-Though not explicitly defined in your action or trigger, the following example shows how the default policy behaves in an example HTTP action:
+In your workflow definition, the trigger or action definition doesn't explicitly define the default policy, but the following example shows how the default retry policy behaves for the HTTP action:
```json "HTTP": {
The exponential interval retry policy specifies that the trigger or action waits
||-|-|-| | Maximum delay | Default: 1 day | Default: 1 hour | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | | Minimum delay | Default: 5 sec | Default: 5 sec | To change the default limit in a Consumption logic app workflow, use the [retry policy parameter](logic-apps-exception-handling.md#retry-policies). <p><p>To change the default limit in a Standard logic app workflow, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-|||||
**Random variable ranges**
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes.md
Previously updated : 08/03/2022 Last updated : 08/29/2022 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://docs.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2022-08-29
+
+### Azure Machine Learning SDK for Python v1.45.0
+ + **azureml-automl-runtime**
+ + Fixed a bug where the sample_weight column was not properly validated.
+ + Added rolling_forecast() public method to the forecasting pipeline wrappers for all supported forecasting models. This method replaces the deprecated rolling_evaluation() method.
+ + Fixed an issue where AutoML Regression tasks may fall back to train-valid split for model evaluation, when CV would have been a more appropriate choice.
+ + **azureml-core**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + Updated the vendored azure-storage package from version 2 to version 12.
+ + **azureml-mlflow**
+ + New cloud configuration suffix added, "aml_discovery_endpoint".
+ + **azureml-responsibleai**
+ + Updated the azureml-responsibleai package and curated images to raiwidgets and responsibleai 0.21.0.
+ + **azureml-sdk**
+ + The azureml-sdk package now allows Python 3.9.
++ ## 2022-08-01 ### Azure Machine Learning SDK for Python v1.44.0
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 06/08/2022 Last updated : 09/06/2022 ms.devlang: azurecli
In this article, learn about the network communication requirements when securin
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md)
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
Previously updated : 05/13/2022 Last updated : 09/07/2022 # Network Isolation Change with Our New API Platform on Azure Resource Manager
As mentioned in the previous section, there are two types of operations; with AR
With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/2022-05-01/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md).
-> [!TIP]
-> * Public ARM operations do not surface data in your storage account on public networks.
-> * Your communication with public ARM is encrypted using TLS 1.2.
+> [!IMPORTANT]
+> For most people, using the public ARM communications is OK:
+> * Public ARM communications is the standard for management operations with Azure services. For example, creating an Azure Storage Account or Azure Virtual Network uses ARM.
+> * The Azure Machine Learning operations do not expose data in your storage account (or other storage in the VNet) on public networks. For example, a training job that runs on a compute cluster in the VNet, and uses data from a storage account in the VNet, would securely access the data directly using the VNet.
+> * All communication with public ARM is encrypted using TLS 1.2.
If you need time to evaluate the new v2 API before adopting it in your enterprise solutions, or have a company policy that prohibits sending communication over public networks, you can enable the *v1_legacy_mode* parameter. When enabled, this parameter disables the v2 API for your workspace.
-> [!IMPORTANT]
+> [!WARNING]
> Enabling v1_legacy_mode may prevent you from using features provided by the v2 API. For example, some features of Azure Machine Learning studio may be unavailable. ## Scenarios and Required Actions
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
---+++ Previously updated : 06/28/2022 Last updated : 08/29/2022 # Configure a private endpoint for an Azure Machine Learning workspace +
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK v1](v1/how-to-configure-private-link.md)
+> * [SDK v2 (current version)](how-to-configure-private-link.md)
+ In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md). Azure Private Link enables you to connect to your workspace using a private endpoint. The private endpoint is a set of private IP addresses within your virtual network. You can then limit access to your workspace to only occur over the private IP addresses. A private endpoint helps reduce the risk of data exfiltration. To learn more about private endpoints, see the [Azure Private Link](../private-link/private-link-overview.md) article.
Azure Private Link enables you to connect to your workspace using a private endp
> * [Virtual network isolation and privacy overview](how-to-network-security-overview.md). > * [Secure workspace resources](how-to-secure-workspace-vnet.md). > * [Secure training environments](how-to-secure-training-vnet.md).
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md).
> * [Use Azure Machine Learning studio in a VNet](how-to-enable-studio-virtual-network.md).
-> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md)
+> * [API platform network isolation](how-to-configure-network-isolation-with-v2.md).
## Prerequisites
Azure Private Link enables you to connect to your workspace using a private endp
## Limitations * If you enable public access for a workspace secured with private endpoint and use Azure Machine Learning studio over the public internet, some features such as the designer may fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet. For example, an Azure Storage Account.
-* You may encounter problems trying to access the private endpoint for your workspace if you are using Mozilla Firefox. This problem may be related to DNS over HTTPS in Mozilla. We recommend using Microsoft Edge or Google Chrome as a workaround.
-* Using a private endpoint does not affect Azure control plane (management operations) such as deleting the workspace or managing compute resources. For example, creating, updating, or deleting a compute target. These operations are performed over the public Internet as normal. Data plane operations, such as using Azure Machine Learning studio, APIs (including published pipelines), or the SDK use the private endpoint.
+* You may encounter problems trying to access the private endpoint for your workspace if you're using Mozilla Firefox. This problem may be related to DNS over HTTPS in Mozilla Firefox. We recommend using Microsoft Edge or Google Chrome as a workaround.
+* Using a private endpoint doesn't affect Azure control plane (management operations) such as deleting the workspace or managing compute resources. For example, creating, updating, or deleting a compute target. These operations are performed over the public Internet as normal. Data plane operations, such as using Azure Machine Learning studio, APIs (including published pipelines), or the SDK use the private endpoint.
* When creating a compute instance or compute cluster in a workspace with a private endpoint, the compute instance and compute cluster must be in the same Azure region as the workspace.
-* When creating or attaching an Azure Kubernetes Service cluster to a workspace with a private endpoint, the cluster must be in the same region as the workspace.
+* When attaching an Azure Kubernetes Service cluster to a workspace with a private endpoint, the cluster must be in the same region as the workspace.
* When using a workspace with multiple private endpoints, one of the private endpoints must be in the same VNet as the following dependency * Azure Storage Account that provides the default storage for the workspace
Use one of the following methods to create a workspace with a private endpoint.
> [!TIP] > If you'd like to create a workspace, private endpoint, and virtual network at the same time, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](how-to-create-workspace-template.md).
-# [Python](#tab/python)
-
-The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network.
+# [Azure CLI extension 2.0](#tab/azurecliextensionv2)
-
-```python
-from azureml.core import Workspace
-from azureml.core import PrivateEndPointConfig
-
-pe = PrivateEndPointConfig(name='myprivateendpoint', vnet_name='myvnet', vnet_subnet_name='default')
-ws = Workspace.create(name='myworkspace',
- subscription_id='<my-subscription-id>',
- resource_group='myresourcegroup',
- location='eastus2',
- private_endpoint_config=pe,
- private_endpoint_auto_approval=True,
- show_output=True)
-```
-
-# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
-
-When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following is an example of creating a new workspace using a YAML configuration:
+When using the Azure CLI [extension 2.0 CLI for machine learning](how-to-configure-cli.md), a YAML document is used to configure the workspace. The following example demonstrates creating a new workspace using a YAML configuration:
> [!TIP] > When using private link, your workspace cannot use Azure Container Registry tasks compute for image building. The `image_build_compute` property in this configuration specifies a CPU compute cluster name to use for Docker image environment building. You can also specify whether the private link workspace should be accessible over the internet using the `public_network_access` property.
az network private-endpoint dns-zone-group add \
--zone-name privatelink.notebooks.azure.net ```
-# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-
-If you are using the Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md), use the [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create) command. The following parameters for this command can be used to create a workspace with a private network, but it requires an existing virtual network:
-
-* `--pe-name`: The name of the private endpoint that is created.
-* `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
-* `--pe-resource-group`: The resource group to create the private endpoint in. Must be the same group that contains the virtual network.
-* `--pe-vnet-name`: The existing virtual network to create the private endpoint in.
-* `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`.
-
-These parameters are in addition to other required parameters for the create command. For example, the following command creates a new workspace in the West US region, using an existing resource group and VNet:
-
-```azurecli
-az ml workspace create -r myresourcegroup \
- -l westus \
- -n myworkspace \
- --pe-name myprivateendpoint \
- --pe-auto-approval \
- --pe-resource-group myresourcegroup \
- --pe-vnet-name myvnet \
- --pe-subnet-name mysubnet
-```
- # [Portal](#tab/azure-portal) The __Networking__ tab in Azure Machine Learning studio allows you to configure a private endpoint. However, it requires an existing virtual network. For more information, see [Create workspaces in the portal](how-to-manage-workspace.md).
Use one of the following methods to add a private endpoint to an existing worksp
> > If you have any existing compute targets associated with this workspace, and they are not behind the same virtual network tha the private endpoint is created in, they will not work.
-# [Python](#tab/python)
--
-```python
-from azureml.core import Workspace
-from azureml.core import PrivateEndPointConfig
-
-pe = PrivateEndPointConfig(name='myprivateendpoint', vnet_name='myvnet', vnet_subnet_name='default')
-ws = Workspace.from_config()
-ws.add_private_endpoint(private_endpoint_config=pe, private_endpoint_auto_approval=True, show_output=True)
-```
-
-For more information on the classes and methods used in this example, see [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) and [Workspace.add_private_endpoint](/python/api/azureml-core/azureml.core.workspace(class)#add-private-endpoint-private-endpoint-config--private-endpoint-auto-approval-true--location-none--show-output-true--tags-none-).
-
-# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+# [Azure CLI extension 2.0](#tab/azurecliextensionv2)
-When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), use the [Azure networking CLI commands](/cli/azure/network/private-endpoint#az-network-private-endpoint-create) to create a private link endpoint for the workspace.
+When using the Azure CLI [extension 2.0 CLI for machine learning](how-to-configure-cli.md), use the [Azure networking CLI commands](/cli/azure/network/private-endpoint#az-network-private-endpoint-create) to create a private link endpoint for the workspace.
```azurecli-interactive az network private-endpoint create \
az network private-endpoint dns-zone-group add \
--zone-name 'privatelink.notebooks.azure.net' ```
-# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-
-The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint add](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-add) command.
-
-```azurecli
-az ml workspace private-endpoint add -w myworkspace --pe-name myprivateendpoint --pe-auto-approval --pe-vnet-name myvnet
-```
- # [Portal](#tab/azure-portal) From the Azure Machine Learning workspace in the portal, select __Private endpoint connections__ and then select __+ Private endpoint__. Use the fields to create a new private endpoint.
Finally, select __Create__ to create the private endpoint.
## Remove a private endpoint
-You can remove one or all private endpoints for a workspace. Removing a private endpoint removes the workspace from the VNet that the endpoint was associated with. This may prevent the workspace from accessing resources in that VNet, or resources in the VNet from accessing the workspace. For example, if the VNet does not allow access to or from the public internet.
+You can remove one or all private endpoints for a workspace. Removing a private endpoint removes the workspace from the VNet that the endpoint was associated with. Removing the private endpoint may prevent the workspace from accessing resources in that VNet, or resources in the VNet from accessing the workspace. For example, if the VNet doesn't allow access to or from the public internet.
> [!WARNING] > Removing the private endpoints for a workspace __doesn't make it publicly accessible__. To make the workspace publicly accessible, use the steps in the [Enable public access](#enable-public-access) section. To remove a private endpoint, use the following information:
-# [Python](#tab/python)
-
-To remove a private endpoint, use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-). The following example demonstrates how to remove a private endpoint:
--
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-# get the connection name
-_, _, connection_name = ws.get_details()['privateEndpointConnections'][0]['id'].rpartition('/')
-ws.delete_private_endpoint_connection(private_endpoint_connection_name=connection_name)
-```
-# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+# [Azure CLI extension 2.0](#tab/azurecliextensionv2)
-When using the Azure CLI [extension 2.0 CLI preview for machine learning](how-to-configure-cli.md), use the following command to remove the private endpoint:
+When using the Azure CLI [extension 2.0 CLI for machine learning](how-to-configure-cli.md), use the following command to remove the private endpoint:
```azurecli az network private-endpoint delete \
az network private-endpoint delete \
--resource-group <resource-group-name> \ ```
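The delete command above was also collapsed to fragments. A minimal sketch of the complete call, using a placeholder endpoint name:

```azurecli
# Sketch only: the private endpoint name is a placeholder.
az network private-endpoint delete \
    --name myworkspace-pe \
    --resource-group <resource-group-name>
```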
-# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-
-The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint delete](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-delete) command.
- # [Portal](#tab/azure-portal) 1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
To enable public access, use the following steps:
> > Microsoft recommends using `public_network_access` to enable or disable public access to a workspace.
-# [Python](#tab/python)
-
-To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`.
--
-```python
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-ws.update(allow_public_access_when_behind_vnet=True)
-```
-
-# [Azure CLI extension 2.0 preview](#tab/azurecliextensionv2)
+# [Azure CLI extension 2.0](#tab/azurecliextensionv2)
When using the Azure CLI [extension 2.0 for machine learning](how-to-configure-cli.md), use the `az ml workspace update` command to enable `public_network_access` for the workspace:
az ml workspace update \
You can also enable public network access by using a YAML file. For more information, see the [workspace YAML reference](reference-yaml-workspace.md).
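A hedged sketch of the complete command referenced by the fragment above, with placeholder workspace and resource group names (the `--public-network-access` parameter is assumed to be available in recent versions of the `ml` extension):

```azurecli
# Sketch only: names are placeholders.
az ml workspace update \
    --name myworkspace \
    --resource-group myresourcegroup \
    --public-network-access Enabled
```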
-# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
-
-The Azure CLI [extension 1.0 for machine learning](v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
- # [Portal](#tab/azure-portal) 1. From the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace.
Adding multiple private endpoints uses the same steps as described in the [Add a
### Scenario: Isolated clients
-If you want to isolate the development clients, so they do not have direct access to the compute resources used by Azure Machine Learning, use the following steps:
+If you want to isolate the development clients, so they don't have direct access to the compute resources used by Azure Machine Learning, use the following steps:
> [!NOTE] > These steps assume that you have an existing workspace, Azure Storage Account, Azure Key Vault, and Azure Container Registry. Each of these services has a private endpoint in an existing VNet. 1. Create another VNet for the clients. This VNet might contain Azure Virtual Machines that act as your clients, or it may contain a VPN Gateway used by on-premises clients to connect to the VNet. 1. Add a new private endpoint for the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by your workspace. These private endpoints should exist in the client VNet.
-1. If you have additional storage that is used by your workspace, add a new private endpoint for that storage. The private endpoint should exist in the client VNet and have private DNS zone integration enabled.
+1. If you have another storage account that is used by your workspace, add a new private endpoint for that storage account. The private endpoint should exist in the client VNet and have private DNS zone integration enabled.
1. Add a new private endpoint to your workspace. This private endpoint should exist in the client VNet and have private DNS zone integration enabled. 1. Use the steps in the [Use studio in a virtual network](how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) article to enable studio to access the storage account(s).
machine-learning How To Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-custom-dns.md
Previously updated : 06/14/2022 Last updated : 09/06/2022
When using an Azure Machine Learning workspace with a private endpoint, there ar
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) ## Prerequisites
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md) For information on integrating Private Endpoints into your DNS configuration, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).-
-For information on deploying models with a custom DNS name or TLS security, see [Secure web services using TLS](./v1/how-to-secure-web-service.md).
machine-learning How To Enable Studio Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md
In this article, you learn how to:
> * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md) >
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
AzureML inference router handles autoscaling for all model deployments on the Ku
> [!IMPORTANT] > * **Do not enable Kubernetes Horizontal Pod Autoscaler (HPA) for model deployments**. Doing so would cause the two auto-scaling components to compete with each other. Azureml-fe is designed to auto-scale models deployed by AzureML, where HPA would have to guess or approximate model utilization from a generic metric like CPU usage or a custom metric configuration. >
-> * **Azureml-fe does not scale the nuzmber of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
+> * **Azureml-fe does not scale the number of nodes in an AKS cluster**, because this could lead to unexpected cost increases. Instead, **it scales the number of replicas for the model** within the physical cluster boundaries. If you need to scale the number of nodes within the cluster, you can manually scale the cluster or [configure the AKS cluster autoscaler](../aks/cluster-autoscaler.md).
Autoscaling can be controlled by the `scale_settings` property in the deployment YAML. The following example demonstrates how to enable autoscaling:
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Previously updated : 08/08/2022 Last updated : 08/19/2022
<!-- # Virtual network isolation and privacy overview --> # Secure Azure Machine Learning workspace resources using virtual networks (VNets) +
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK/CLI v1](v1/how-to-network-security-overview.md)
+> * [SDK/CLI v2 (current version)](how-to-network-security-overview.md)
+ Secure Azure Machine Learning workspace resources and compute environments using virtual networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network. > [!TIP]
Secure Azure Machine Learning workspace resources and compute environments using
> > * [Secure the workspace resources](how-to-secure-workspace-vnet.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
The next sections show you how to secure the network scenario described above. T
1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources). 1. Secure the [**training environment**](#secure-the-training-environment).
-1. Secure the **inferencing environment** [v1](#secure-the-inferencing-environment-v1) or [v2](#secure-the-inferencing-environment-v1).
+1. Secure the [**inferencing environment**](#secure-the-inferencing-environment).
1. Optionally: [**enable studio functionality**](#optional-enable-studio-functionality). 1. Configure [**firewall settings**](#configure-firewall-settings). 1. Configure [**DNS name resolution**](#custom-dns).
In this section, you learn how Azure Machine Learning securely communicates betw
- Azure Compute Instance and Azure Compute Clusters must be in the same VNet, region, and subscription as the workspace and its associated resources.
-## Secure the inferencing environment (v2)
-
+## Secure the inferencing environment
You can enable network isolation for managed online endpoints to secure the following network traffic:
You can enable network isolation for managed online endpoints to secure the foll
For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md).
-## Secure the inferencing environment (v1)
--
-In this section, you learn the options available for securing an inferencing environment when using the Azure CLI extension for ML v1 or the Azure ML Python SDK v1. When doing a v1 deployment, we recommend that you use Azure Kubernetes Services (AKS) clusters for high-scale, production deployments.
-
-You have two options for AKS clusters in a virtual network:
--- Deploy or attach a default AKS cluster to your VNet.-- Attach a private AKS cluster to your VNet.-
-**Default AKS clusters** have a control plane with public IP addresses. You can add a default AKS cluster to your VNet during the deployment or attach a cluster after it's created.
-
-**Private AKS clusters** have a control plane, which can only be accessed through private IPs. Private AKS clusters must be attached after the cluster is created.
-
-For detailed instructions on how to add default and private clusters, see [Secure an inferencing environment](./v1/how-to-secure-inferencing-vnet.md).
-
-Regardless default AKS cluster or private AKS cluster used, if your AKS cluster is behind of VNET, your workspace and its associate resources (storage, key vault, and ACR) must have private endpoints or service endpoints in the same VNET as the AKS cluster.
-
-The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network.
--- ## Optional: Enable public access You can secure the workspace behind a VNet using a private endpoint and still allow access over the public internet. The initial configuration is the same as [securing the workspace and associated resources](#secure-the-workspace-and-associated-resources).
This article is part of a series on securing an Azure Machine Learning workflow.
* [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md
+
+ Title: Secure inferencing environments with virtual networks
+
+description: Use an isolated Azure Virtual Network to secure your Azure Machine Learning inferencing environment.
+++++++ Last updated : 09/06/2022+++
+# Secure an Azure Machine Learning inferencing environment with virtual networks
+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK/CLI v1](v1/how-to-secure-inferencing-vnet.md)
+> * [SDK/CLI v2 (current version)](how-to-secure-inferencing-vnet.md)
+
+In this article, you learn how to secure inferencing environments (online endpoints) with a virtual network in Azure Machine Learning. There are two inference options that can be secured using a VNet:
+
+* Azure Machine Learning managed online endpoints
+* Azure Kubernetes Service
+
+> [!TIP]
+> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
+>
+> * [Virtual network overview](how-to-network-security-overview.md)
+> * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
+> * [Secure the training environment](how-to-secure-training-vnet.md)
+> * [Enable studio functionality](how-to-enable-studio-virtual-network.md)
+> * [Use custom DNS](how-to-custom-dns.md)
+> * [Use a firewall](how-to-access-azureml-behind-firewall.md)
+>
+> For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
+
+## Prerequisites
+++ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.+++ An existing virtual network and subnet that are used to secure the Azure Machine Learning workspace.+++ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):+
+ - "Microsoft.Network/virtualNetworks/join/action" on the virtual network resource.
+ - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
+
+ For more information on Azure RBAC with networking, see the [Networking built-in roles](/azure/role-based-access-control/built-in-roles#networking).
+++ If using Azure Kubernetes Service (AKS), you must have an existing AKS cluster secured as described in the [Secure Azure Kubernetes Service inference environment](how-to-secure-kubernetes-inferencing-environment.md) article.+
+## Secure managed online endpoints
+
+For information on securing managed online endpoints, see the [Use network isolation with managed online endpoints (preview)](how-to-secure-online-endpoint.md) article.
+
+## Secure Azure Kubernetes Service
+
+To configure and attach an Azure Kubernetes Service cluster for secure inference, use the following steps:
+
+1. Create or configure a [secure Kubernetes inferencing environment](how-to-secure-kubernetes-inferencing-environment.md).
+1. [Attach the Kubernetes cluster to the workspace](how-to-attach-kubernetes-anywhere.md).
+
+Afterwards, you can use the cluster for inference deployments to online endpoints. For more information, see [How to deploy an online endpoint](how-to-deploy-managed-online-endpoints.md).
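As an illustration of step 2 above, a hedged CLI (v2 `ml` extension) sketch of attaching the cluster; the compute name, namespace, and all resource names and IDs are placeholder assumptions, not values from the original article:

```azurecli
# Sketch only: attaches an existing, secured AKS cluster to the workspace as a
# Kubernetes compute target. All names and the resource ID are placeholders.
az ml compute attach \
    --resource-group myresourcegroup \
    --workspace-name myworkspace \
    --type Kubernetes \
    --name k8s-compute \
    --resource-id "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.ContainerService/managedClusters/myakscluster" \
    --identity-type SystemAssigned \
    --namespace azureml
```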
+
+## Limit outbound connectivity from the virtual network
+
+If you don't want to use the default outbound rules and you do want to limit the outbound access of your virtual network, you must allow access to Azure Container Registry. For example, make sure that your Network Security Groups (NSG) contain a rule that allows access to the __AzureContainerRegistry.RegionName__ service tag, where `{RegionName}` is the name of an Azure region.
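As a hedged illustration of such a rule, the following sketch uses `az network nsg rule create` with placeholder names, priority, and region:

```azurecli
# Sketch only: allows outbound HTTPS from the VNet to Azure Container Registry in one
# region via its service tag. NSG name, rule name, priority, and region are placeholders.
az network nsg rule create \
    --resource-group myresourcegroup \
    --nsg-name mynsg \
    --name AllowAzureContainerRegistry \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes VirtualNetwork \
    --destination-address-prefixes AzureContainerRegistry.WestUS2 \
    --destination-port-ranges 443
```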
+
+## Next steps
+
+This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
+
+* [Virtual network overview](how-to-network-security-overview.md)
+* [Secure the workspace resources](how-to-secure-workspace-vnet.md)
+* [Secure the training environment](how-to-secure-training-vnet.md)
+* [Enable studio functionality](how-to-enable-studio-virtual-network.md)
+* [Use custom DNS](how-to-custom-dns.md)
+* [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Secure Kubernetes Inferencing Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-inferencing-environment.md
If you have an Azure Kubernetes Service (AKS) cluster behind a VNet, you would need to
* What is a secure AKS inferencing environment * How to configure a secure AKS inferencing environment
+## Limitations
+
+* If your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, Azure Container Registry) must have private endpoints or service endpoints in the same VNet as the AKS cluster's VNet. For more information on securing the workspace and associated resources, see [create a secure workspace](tutorial-create-secure-workspace.md).
+* If your workspace has a __private endpoint__, the Azure Kubernetes Service cluster must be in the same Azure region as the workspace.
+* Using a [public fully qualified domain name (FQDN) with a private AKS cluster](/azure/aks/private-clusters) is __not supported__ with Azure Machine learning.
+ ## What is a secure AKS inferencing environment An Azure Machine Learning AKS inferencing environment consists of a workspace, your AKS cluster, and workspace-associated resources - Azure Storage, Azure Key Vault, and Azure Container Registry (ACR). The following table compares how services access different parts of the Azure Machine Learning network with or without a VNet.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
ms.devlang: azurecli
# Secure an Azure Machine Learning training environment with virtual networks
-In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning.
+
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [SDK v1](./v1/how-to-secure-training-vnet.md)
+> * [SDK v2 (current version)](how-to-secure-training-vnet.md)
+
+In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning, using the Azure Machine Learning __studio__ and Python SDK __v2__.
> [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: > > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
For information on using a firewall solution, see [Use a firewall with Azure Mac
## <a name="compute-cluster"></a>Compute clusters
-Use the tabs below to select how you plan to create a compute cluster:
-
-# [Studio](#tab/azure-studio)
- Use the following steps to create a compute cluster in the Azure Machine Learning studio: 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/), and then select your subscription and workspace.
Use the following steps to create a compute cluster in the Azure Machine Learnin
1. Select __Create__ to create the compute cluster.
-# [Python](#tab/python)
--
-The following code creates a new Machine Learning Compute cluster in the `default` subnet of a virtual network named `mynetwork`:
-
-```python
-from azureml.core.compute import ComputeTarget, AmlCompute
-from azureml.core.compute_target import ComputeTargetException
-
-# The Azure virtual network name, subnet, and resource group
-vnet_name = 'mynetwork'
-subnet_name = 'default'
-vnet_resourcegroup_name = 'mygroup'
-
-# Choose a name for your CPU cluster
-cpu_cluster_name = "cpucluster"
-
-# Verify that cluster does not exist already
-try:
- cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
- print("Found existing cpucluster")
-except ComputeTargetException:
- print("Creating new cpucluster")
-
- # Specify the configuration for the new cluster
- compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
- min_nodes=0,
- max_nodes=4,
- location="westus2",
- vnet_resourcegroup_name=vnet_resourcegroup_name,
- vnet_name=vnet_name,
- subnet_name=subnet_name)
-
- # Create the cluster with the specified name and configuration
- cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
-
- # Wait for the cluster to be completed, show the output log
- cpu_cluster.wait_for_completion(show_output=True)
-```
-- When the creation process finishes, you train your model by using the cluster in an experiment.
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article, you learn how to secure an Azure Machine Learning workspace and
> > * [Virtual network overview](how-to-network-security-overview.md) > * [Secure the training environment](how-to-secure-training-vnet.md)
-> * For securing inference, see the following documents:
-> * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
-> * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
> * [Enable studio functionality](how-to-enable-studio-virtual-network.md) > * [Use custom DNS](how-to-custom-dns.md) > * [Use a firewall](how-to-access-azureml-behind-firewall.md)
This article is part of a series on securing an Azure Machine Learning workflow.
* [Virtual network overview](how-to-network-security-overview.md) * [Secure the training environment](how-to-secure-training-vnet.md)
-* [Secure online endpoints (inference)](how-to-secure-online-endpoint.md)
-* For securing inference, see the following documents:
- * If using CLI v1 or SDK v1 - [Secure inference environment](./v1/how-to-secure-inferencing-vnet.md)
- * If using CLI v2 or SDK v2 - [Network isolation for managed online endpoints](how-to-secure-online-endpoint.md)
+* [Secure the inference environment](how-to-secure-inferencing-vnet.md)
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md) * [Use a firewall](how-to-access-azureml-behind-firewall.md)
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
- Previously updated : 10/21/2021+ Last updated : 09/07/2022 # Create and use managed online endpoints in the studio
In this article, you learn how to:
## Create a managed online endpoint
-Use the studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You cannot create an empty managed online endpoint.
+Use the studio to create a managed online endpoint directly in your browser. When you create a managed online endpoint in the studio, you must define an initial deployment. You can't create an empty managed online endpoint.
1. Go to the [Azure Machine Learning studio](https://ml.azure.com). 1. In the left navigation bar, select the **Endpoints** page.
Use the studio to create a managed online endpoint directly in your browser. Whe
:::image type="content" source="media/how-to-create-managed-online-endpoint-studio/online-endpoint-wizard.png" lightbox="media/how-to-create-managed-online-endpoint-studio/online-endpoint-wizard.png" alt-text="A screenshot of a managed online endpoint create wizard.":::
-### Follow the setup wizard to configure your managed online endpoint.
+### Register the model
+
+A model registration is a logical entity in the workspace that may contain a single model file, or a directory containing multiple files. The steps in this article assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model.
+
+To register the example model using Azure Machine Learning studio, use the following steps:
+
+1. Go to the [Azure Machine Learning studio](https://ml.azure.com).
+1. In the left navigation bar, select the **Models** page.
+1. Select **Register**, and then **From local files**.
+1. Select __Unspecified type__ for the __Model type__, then select __Browse__, and __Browse folder__.
+
+ :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/register-model-folder.png" alt-text="A screenshot of the browse folder option.":::
-1. You can use our sample [model](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) and [scoring script](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py) from [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1)
-1. On the **Environment** step of the wizard, you can select the **AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference** curated environment.
+1. Select the `\azureml-examples\cli\endpoints\online\model-1\model` folder from the local copy of the repo you downloaded earlier. When prompted, select __Upload__. Once the upload completes, select __Next__.
+1. Enter a friendly __Name__ for the model. The steps in this article assume it's named `model-1`.
+1. Select __Next__, and then __Register__ to complete registration.
+
+For more information on working with registered models, see [Register and work with models](how-to-manage-models.md).
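The steps above use the studio UI. As a hedged alternative, a CLI v2 sketch that registers the same folder might look like the following; the version number, workspace, and resource group are placeholder assumptions:

```azurecli
# Sketch only: registers the example model folder from a local clone of the
# azureml-examples repo as a model named model-1. Run from the repo root.
az ml model create \
    --name model-1 \
    --version 1 \
    --type custom_model \
    --path cli/endpoints/online/model-1/model \
    --resource-group myresourcegroup \
    --workspace-name myworkspace
```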
+
+### Follow the setup wizard to configure your managed online endpoint.
You can also create a managed online endpoint from the **Models** page in the studio. This is an easy way to add a model to an existing managed online deployment.
You can also create a managed online endpoint from the **Models** page in the st
1. Select a model by checking the circle next to the model name. 1. Select **Deploy** > **Deploy to real-time endpoint**.
+ :::image type="content" source="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" lightbox="media/how-to-create-managed-online-endpoint-studio/deploy-from-models-page.png" alt-text="A screenshot of creating a managed online endpoint from the Models UI.":::
+
+1. Enter an __Endpoint name__ and select __Managed__ as the compute type.
+1. Select __Next__, accepting defaults, until you're prompted for the environment. Here, select the following:
+
+ * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-1\onlinescoring\score.py` file from the repo you downloaded earlier.
+ * __Choose an environment__ section: Select the **Scikit-learn 0.24.1** curated environment.
+
+1. Select __Next__, accepting defaults, until you're prompted to create the deployment. Select the __Create__ button.
## View managed online endpoints
To use the monitoring tab, you must select "**Enable Application Insight diagnos
:::image type="content" source="media/how-to-create-managed-online-endpoint-studio/monitor-endpoint.png" lightbox="media/how-to-create-managed-online-endpoint-studio/monitor-endpoint.png" alt-text="A screenshot of monitoring endpoint-level metrics in the studio.":::
-For more information on how viewing additional monitors and alerts, see [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+For more information on viewing other monitors and alerts, see [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md).
## Add a deployment to a managed online endpoint
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-### Azure Curated Environment for PyTorch (preview)
+### Azure Container for PyTorch (ACPT) (preview)
-**Description**: The Azure Curated Environment for PyTorch is optimized for large, distributed deep learning workloads. it comes pre-packaged with the best of Microsoft technologies for accelerated training, e.g., OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
+**Name**: AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu
+**Description**: The Azure Curated Environment for PyTorch is our latest PyTorch curated environment. It is optimized for large, distributed deep learning workloads and comes pre-packaged with the best of Microsoft technologies for accelerated training, e.g., OnnxRuntime Training (ORT), DeepSpeed, MSCCL, etc.
The following configurations are supported:

| Environment Name | OS | GPU Version | Python Version | PyTorch Version | ORT-training Version | DeepSpeed Version | torch-ort Version |
| --- | --- | --- | --- | --- | --- | --- | --- |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
-| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.5-gpu | Ubuntu 20.04 | cu115 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
+| AzureML-ACPT-pytorch-1.11-py38-cuda11.3-gpu | Ubuntu 20.04 | cu113 | 3.8 | 1.11.0 | 1.11.1 | 0.7.1 | 1.11.0 |
### PyTorch
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-private-link.md
+
+ Title: Configure a private endpoint v1
+
+description: 'Use a private endpoint to securely access your Azure Machine Learning workspace (v1) from a virtual network.'
++++++++ Last updated : 08/29/2022++
+# Configure a private endpoint for an Azure Machine Learning workspace with SDK and CLI v1
++
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK v1](how-to-configure-private-link.md)
+> * [SDK v2 (current version)](../how-to-configure-private-link.md)
+
+In this document, you learn how to configure a private endpoint for your Azure Machine Learning workspace. For information on creating a virtual network for Azure Machine Learning, see [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
+
+Azure Private Link enables you to connect to your workspace using a private endpoint. The private endpoint is a set of private IP addresses within your virtual network. You can then limit access to your workspace to only occur over the private IP addresses. A private endpoint helps reduce the risk of data exfiltration. To learn more about private endpoints, see the [Azure Private Link](../../private-link/private-link-overview.md) article.
+
+> [!WARNING]
+> Securing a workspace with private endpoints does not ensure end-to-end security by itself. You must secure all of the individual components of your solution. For example, if you use a private endpoint for the workspace, but your Azure Storage Account is not behind the VNet, traffic between the workspace and storage does not use the VNet for security.
+>
+> For more information on securing resources used by Azure Machine Learning, see the following articles:
+>
+> * [Virtual network isolation and privacy overview](how-to-network-security-overview.md).
+> * [Secure workspace resources](../how-to-secure-workspace-vnet.md).
+> * [Secure training environments (v1)](how-to-secure-training-vnet.md).
+> * [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
+> * [Use Azure Machine Learning studio in a VNet](../how-to-enable-studio-virtual-network.md).
+> * [API platform network isolation](../how-to-configure-network-isolation-with-v2.md).
+
+## Prerequisites
+
+* You must have an existing virtual network to create the private endpoint in.
+* [Disable network policies for private endpoints](/azure/private-link/disable-private-endpoint-network-policy) before adding the private endpoint.
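As a hedged sketch of this prerequisite, the subnet update might look like the following (VNet and subnet names are placeholders; newer CLI versions express the same setting as `--private-endpoint-network-policies Disabled`):

```azurecli
# Sketch only: disables private endpoint network policies on the subnet so a
# private endpoint can be created in it. Names are placeholders.
az network vnet subnet update \
    --resource-group myresourcegroup \
    --vnet-name myvnet \
    --name default \
    --disable-private-endpoint-network-policies true
```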
+
+## Limitations
+
+* If you enable public access for a workspace secured with private endpoint and use Azure Machine Learning studio over the public internet, some features such as the designer may fail to access your data. This problem happens when the data is stored on a service that is secured behind the VNet. For example, an Azure Storage Account.
+* You may encounter problems trying to access the private endpoint for your workspace if you're using Mozilla Firefox. This problem may be related to DNS over HTTPS in Mozilla Firefox. We recommend using Microsoft Edge or Google Chrome as a workaround.
+* Using a private endpoint doesn't affect Azure control plane (management operations) such as deleting the workspace or managing compute resources. For example, creating, updating, or deleting a compute target. These operations are performed over the public Internet as normal. Data plane operations, such as using Azure Machine Learning studio, APIs (including published pipelines), or the SDK use the private endpoint.
+* When creating a compute instance or compute cluster in a workspace with a private endpoint, the compute instance and compute cluster must be in the same Azure region as the workspace.
+* When creating or attaching an Azure Kubernetes Service cluster to a workspace with a private endpoint, the cluster must be in the same region as the workspace.
+* When using a workspace with multiple private endpoints, one of the private endpoints must be in the same VNet as the following dependency services:
+
+ * Azure Storage Account that provides the default storage for the workspace
+ * Azure Key Vault for the workspace
+ * Azure Container Registry for the workspace.
+
+ For example, one VNet ('services' VNet) would contain a private endpoint for the dependency services and the workspace. This configuration allows the workspace to communicate with the services. Another VNet ('clients') might only contain a private endpoint for the workspace, and be used only for communication between client development machines and the workspace.
+
+## Create a workspace that uses a private endpoint
+
+Use one of the following methods to create a workspace with a private endpoint. Each of these methods __requires an existing virtual network__:
+
+> [!TIP]
+> If you'd like to create a workspace, private endpoint, and virtual network at the same time, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
+
+# [Python](#tab/python)
+
+The Azure Machine Learning Python SDK provides the [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) class, which can be used with [Workspace.create()](/python/api/azureml-core/azureml.core.workspace.workspace#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basictags-none--friendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--adb-workspace-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--private-endpoint-config-none--private-endpoint-auto-approval-true--exist-ok-false--show-output-true-) to create a workspace with a private endpoint. This class requires an existing virtual network.
++
+```python
+from azureml.core import Workspace
+from azureml.core import PrivateEndPointConfig
+
+pe = PrivateEndPointConfig(name='myprivateendpoint', vnet_name='myvnet', vnet_subnet_name='default')
+ws = Workspace.create(name='myworkspace',
+ subscription_id='<my-subscription-id>',
+ resource_group='myresourcegroup',
+ location='eastus2',
+ private_endpoint_config=pe,
+ private_endpoint_auto_approval=True,
+ show_output=True)
+```
+
+# [Azure CLI](#tab/azurecliextensionv1)
+
+If you're using the Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md), use the [az ml workspace create](/cli/azure/ml/workspace#az-ml-workspace-create) command. The following parameters for this command can be used to create a workspace with a private endpoint, but it requires an existing virtual network:
+
+* `--pe-name`: The name of the private endpoint that is created.
+* `--pe-auto-approval`: Whether private endpoint connections to the workspace should be automatically approved.
+* `--pe-resource-group`: The resource group to create the private endpoint in. Must be the same group that contains the virtual network.
+* `--pe-vnet-name`: The existing virtual network to create the private endpoint in.
+* `--pe-subnet-name`: The name of the subnet to create the private endpoint in. The default value is `default`.
+
+These parameters are in addition to other required parameters for the create command. For example, the following command creates a new workspace in the West US region, using an existing resource group and VNet:
++
+```azurecli
+az ml workspace create -r myresourcegroup \
+ -l westus \
+ -n myworkspace \
+ --pe-name myprivateendpoint \
+ --pe-auto-approval \
+ --pe-resource-group myresourcegroup \
+ --pe-vnet-name myvnet \
+ --pe-subnet-name mysubnet
+```
+++
+## Add a private endpoint to a workspace
+
+Use one of the following methods to add a private endpoint to an existing workspace:
+
+> [!WARNING]
+>
+> If you have any existing compute targets associated with this workspace, and they are not behind the same virtual network that the private endpoint is created in, they will not work.
+
+# [Python](#tab/python)
++
+```python
+from azureml.core import Workspace
+from azureml.core import PrivateEndPointConfig
+
+pe = PrivateEndPointConfig(name='myprivateendpoint', vnet_name='myvnet', vnet_subnet_name='default')
+ws = Workspace.from_config()
+ws.add_private_endpoint(private_endpoint_config=pe, private_endpoint_auto_approval=True, show_output=True)
+```
+
+For more information on the classes and methods used in this example, see [PrivateEndpointConfig](/python/api/azureml-core/azureml.core.privateendpointconfig) and [Workspace.add_private_endpoint](/python/api/azureml-core/azureml.core.workspace(class)#add-private-endpoint-private-endpoint-config--private-endpoint-auto-approval-true--location-none--show-output-true--tags-none-).
+
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
+
+The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint add](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-add) command.
++
+```azurecli
+az ml workspace private-endpoint add -w myworkspace --pe-name myprivateendpoint --pe-auto-approval --pe-vnet-name myvnet
+```
+++
+## Remove a private endpoint
+
+You can remove one or all private endpoints for a workspace. Removing a private endpoint removes the workspace from the VNet that the endpoint was associated with. Removing the private endpoint may prevent the workspace from accessing resources in that VNet, or resources in the VNet from accessing the workspace. For example, if the VNet doesn't allow access to or from the public internet.
+
+> [!WARNING]
+> Removing the private endpoints for a workspace __doesn't make it publicly accessible__. To make the workspace publicly accessible, use the steps in the [Enable public access](#enable-public-access) section.
+
+To remove a private endpoint, use the following information:
+
+# [Python](#tab/python)
+
+To remove a private endpoint, use [Workspace.delete_private_endpoint_connection](/python/api/azureml-core/azureml.core.workspace(class)#delete-private-endpoint-connection-private-endpoint-connection-name-). The following example demonstrates how to remove a private endpoint:
++
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+# get the connection name
+_, _, connection_name = ws.get_details()['privateEndpointConnections'][0]['id'].rpartition('/')
+ws.delete_private_endpoint_connection(private_endpoint_connection_name=connection_name)
+```
+
+# [Azure CLI extension 1.0](#tab/azurecliextensionv1)
++
+The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace private-endpoint delete](/cli/azure/ml(v1)/workspace/private-endpoint#az-ml-workspace-private-endpoint-delete) command.
+++
+## Enable public access
+
+In some situations, you may want to allow someone to connect to your secured workspace over a public endpoint, instead of through the VNet. Or you may want to remove the workspace from the VNet and re-enable public access.
+
+> [!IMPORTANT]
+> Enabling public access doesn't remove any private endpoints that exist. All communications between components behind the VNet that the private endpoint(s) connect to are still secured. It enables public access only to the workspace, in addition to the private access through any private endpoints.
+
+> [!WARNING]
+> When connecting over the public endpoint while the workspace uses a private endpoint to communicate with other resources:
+> * __Some features of studio will fail to access your data__. This problem happens when the _data is stored on a service that is secured behind the VNet_. For example, an Azure Storage Account.
+> * Using Jupyter, JupyterLab, and RStudio on a compute instance, including running notebooks, __is not supported__.
+
+To enable public access, use the following steps:
+
+> [!TIP]
+> There are two possible properties that you can configure:
+> * `allow_public_access_when_behind_vnet` - used by the Python SDK and CLI v1
+> * `public_network_access` - used by the Python SDK and CLI v2
+> Each property overrides the other. For example, setting `public_network_access` will override any previous setting of `allow_public_access_when_behind_vnet`.
+>
+> Microsoft recommends using `public_network_access` to enable or disable public access to a workspace.
+
+# [Python](#tab/python)
+
+To enable public access, use [Workspace.update](/python/api/azureml-core/azureml.core.workspace(class)#update-friendly-name-none--description-none--tags-none--image-build-compute-none--service-managed-resources-settings-none--primary-user-assigned-identity-none--allow-public-access-when-behind-vnet-none-) and set `allow_public_access_when_behind_vnet=True`.
++
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+ws.update(allow_public_access_when_behind_vnet=True)
+```
+
+# [Azure CLI](#tab/azurecliextensionv1)
++
+The Azure CLI [extension 1.0 for machine learning](reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml/workspace#az-ml-workspace-update) command. To enable public access to the workspace, add the parameter `--allow-public-access true`.
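A hedged sketch of the complete v1 command, with placeholder workspace and resource group names:

```azurecli
# Sketch only: requires the azure-cli-ml (v1) extension. Names are placeholders.
az ml workspace update \
    -w myworkspace \
    -g myresourcegroup \
    --allow-public-access true
```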
+++
+## Securely connect to your workspace
++
+## Multiple private endpoints
+
+Azure Machine Learning supports multiple private endpoints for a workspace. Multiple private endpoints are often used when you want to keep different environments separate. The following are some scenarios that are enabled by using multiple private endpoints:
+
+* Client development environments in a separate VNet.
+* An Azure Kubernetes Service (AKS) cluster in a separate VNet.
+* Other Azure services in a separate VNet. For example, Azure Synapse and Azure Data Factory can use a Microsoft managed virtual network. In either case, a private endpoint for the workspace can be added to the managed VNet used by those services. For more information on using a managed virtual network with these services, see the following articles:
+
+ * [Synapse managed private endpoints](/azure/synapse-analytics/security/synapse-workspace-managed-private-endpoints).
+ * [Azure Data Factory managed virtual network](/azure/data-factory/managed-virtual-network-private-endpoint).
+
+ > [!IMPORTANT]
+ > [Synapse's data exfiltration protection](/azure/synapse-analytics/security/workspace-data-exfiltration-protection) is not supported with Azure Machine Learning.
+
+> [!IMPORTANT]
+> Each VNet that contains a private endpoint for the workspace must also be able to access the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by the workspace. For example, you might create a private endpoint for the services in each VNet.
+
+Adding multiple private endpoints uses the same steps as described in the [Add a private endpoint to a workspace](#add-a-private-endpoint-to-a-workspace) section.
+
+### Scenario: Isolated clients
+
+If you want to isolate the development clients, so they don't have direct access to the compute resources used by Azure Machine Learning, use the following steps:
+
+> [!NOTE]
+> These steps assume that you have an existing workspace, Azure Storage Account, Azure Key Vault, and Azure Container Registry. Each of these services has a private endpoint in an existing VNet.
+
+1. Create another VNet for the clients. This VNet might contain Azure Virtual Machines that act as your clients, or it may contain a VPN Gateway used by on-premises clients to connect to the VNet.
+1. Add a new private endpoint for the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by your workspace. These private endpoints should exist in the client VNet.
+1. If you have another storage account that is used by your workspace, add a new private endpoint for that storage account. The private endpoint should exist in the client VNet and have private DNS zone integration enabled.
+1. Add a new private endpoint to your workspace. This private endpoint should exist in the client VNet and have private DNS zone integration enabled.
+1. Use the steps in the [Use studio in a virtual network](../how-to-enable-studio-virtual-network.md#datastore-azure-storage-account) article to enable studio to access the storage account(s).
+
+The following diagram illustrates this configuration. The __Workload__ VNet contains computes created by the workspace for training & deployment. The __Client__ VNet contains clients or client ExpressRoute/VPN connections. Both VNets contain private endpoints for the workspace, Azure Storage Account, Azure Key Vault, and Azure Container Registry.
++
+### Scenario: Isolated Azure Kubernetes Service
+
+If you want to create an isolated Azure Kubernetes Service used by the workspace, use the following steps:
+
+> [!NOTE]
+> These steps assume that you have an existing workspace, Azure Storage Account, Azure Key Vault, and Azure Container Registry. Each of these services has a private endpoint in an existing VNet.
+
+1. Create an Azure Kubernetes Service instance. During creation, AKS creates a VNet that contains the AKS cluster.
+1. Add a new private endpoint for the Azure Storage Account, Azure Key Vault, and Azure Container Registry used by your workspace. These private endpoints should exist in the client VNet.
+1. If you have other storage accounts that are used by your workspace, add a new private endpoint for each of them. The private endpoints should exist in the client VNet and have private DNS zone integration enabled.
+1. Add a new private endpoint to your workspace. This private endpoint should exist in the client VNet and have private DNS zone integration enabled.
+1. Attach the AKS cluster to the Azure Machine Learning workspace. For more information, see [Create and attach an Azure Kubernetes Service cluster](how-to-create-attach-kubernetes.md#attach-an-existing-aks-cluster).
++
+## Next steps
+
+* For more information on securing your Azure Machine Learning workspace, see the [Virtual network isolation and privacy overview](how-to-network-security-overview.md) article.
+
+* If you plan on using a custom DNS solution in your virtual network, see [how to use a workspace with a custom DNS server](../how-to-custom-dns.md).
+
+* [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-network-security-overview.md
+
+ Title: Secure workspace resources using virtual networks (v1)
+
+description: Secure Azure Machine Learning workspace resources and compute environments using an isolated Azure Virtual Network. SDK/CLI v1.
++++++ Last updated : 08/08/2022++++
+<!-- # Virtual network isolation and privacy overview -->
+# Secure Azure Machine Learning workspace resources using virtual networks (SDK/CLI v1)
++
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK v1](how-to-network-security-overview.md)
+> * [SDK v2 (current version)](../how-to-network-security-overview.md)
+
+Secure Azure Machine Learning workspace resources and compute environments using virtual networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network.
+
+> [!TIP]
+> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
+>
+> * [Secure the workspace resources](../how-to-secure-workspace-vnet.md)
+> * [Secure the training environment (v1)](how-to-secure-training-vnet.md)
+> * [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
+> * [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
+> * [Use custom DNS](../how-to-custom-dns.md)
+> * [Use a firewall](../how-to-access-azureml-behind-firewall.md)
+> * [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
+>
+> For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace](../tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](../tutorial-create-secure-workspace-template.md).
+
+## Prerequisites
+
+This article assumes that you have familiarity with the following topics:
++ [Azure Virtual Networks](/azure/virtual-network/virtual-networks-overview)++ [IP networking](/azure/virtual-network/ip-services/public-ip-addresses)++ [Azure Machine Learning workspace with private endpoint](how-to-configure-private-link.md)++ [Network Security Groups (NSG)](/azure/virtual-network/network-security-groups-overview)++ [Network firewalls](/azure/firewall/overview)
+## Example scenario
+
+In this section, you learn how a common network scenario is set up to secure Azure Machine Learning communication with private IP addresses.
+
+The following table compares how services access different parts of an Azure Machine Learning network with and without a VNet:
+
+| Scenario | Workspace | Associated resources | Training compute environment | Inferencing compute environment |
+|-|-|-|-|-|
+|**No virtual network**| Public IP | Public IP | Public IP | Public IP |
+|**Public workspace, all other resources in a virtual network** | Public IP | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Public IP | Private IP |
+|**Secure resources in a virtual network**| Private IP (private endpoint) | Public IP (service endpoint) <br> **- or -** <br> Private IP (private endpoint) | Private IP | Private IP |
+
+* **Workspace** - Create a private endpoint for your workspace. The private endpoint connects the workspace to the vnet through several private IP addresses.
+ * **Public access** - You can optionally enable public access for a secured workspace.
+* **Associated resource** - Use service endpoints or private endpoints to connect to workspace resources like Azure Storage and Azure Key Vault. For Azure Container Registry, use a private endpoint.
+ * **Service endpoints** provide the identity of your virtual network to the Azure service. Once you enable service endpoints in your virtual network, you can add a virtual network rule to secure the Azure service resources to your virtual network. Service endpoints use public IP addresses.
+ * **Private endpoints** are network interfaces that securely connect you to a service powered by Azure Private Link. A private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet.
+* **Training compute access** - Access training compute targets like Azure Machine Learning Compute Instance and Azure Machine Learning Compute Clusters with public or private IP addresses.
+* **Inference compute access** - Access Azure Kubernetes Services (AKS) compute clusters with private IP addresses.
++
+The next sections show you how to secure the network scenario described above. To secure your network, you must:
+
+1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources).
+1. Secure the [**training environment** (v1)](#secure-the-training-environment).
+1. Secure the [**inferencing environment** (v1)](#secure-the-inferencing-environment-v1).
+1. Optionally: [**enable studio functionality**](#optional-enable-studio-functionality).
+1. Configure [**firewall settings**](#configure-firewall-settings).
+1. Configure [**DNS name resolution**](#custom-dns).
+
+## Public workspace and secured resources
+
+If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps:
+
+1. Create an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) that will contain the resources used by the workspace.
+1. Use __one__ of the following options to create a publicly accessible workspace:
+
+ * Create an Azure Machine Learning workspace that __does not__ use the virtual network. For more information, see [Manage Azure Machine Learning workspaces](../how-to-manage-workspace.md).
+ * Create a [Private Link-enabled workspace](../how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. Then [enable public access to the workspace](#optional-enable-public-access).
+
+1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these services:
+
+ | Service | Endpoint information | Allow trusted information |
+ | -- | -- | -- |
+ | __Azure Key Vault__| [Service endpoint](/azure/key-vault/general/overview-vnet-service-endpoints)</br>[Private endpoint](/azure/key-vault/general/private-link-service) | [Allow trusted Microsoft services to bypass this firewall](../how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
+ | __Azure Storage Account__ | [Service and private endpoint](../how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](../how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access to trusted Azure services](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) |
+ | __Azure Container Registry__ | [Private endpoint](/azure/container-registry/container-registry-private-link) | [Allow trusted services](/azure/container-registry/allow-access-trusted-services) |
+
+1. In properties for the Azure Storage Account(s) for your workspace, add your client IP address to the allowed list in firewall settings. For more information, see [Configure firewalls and virtual networks](/azure/storage/common/storage-network-security#configuring-access-from-on-premises-networks).
+
+## Secure the workspace and associated resources
+
+Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network.
+
+1. Create an [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](../how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace.
+1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these services:
+
+ | Service | Endpoint information | Allow trusted information |
+ | -- | -- | -- |
+ | __Azure Key Vault__| [Service endpoint](/azure/key-vault/general/overview-vnet-service-endpoints)</br>[Private endpoint](/azure/key-vault/general/private-link-service) | [Allow trusted Microsoft services to bypass this firewall](../how-to-secure-workspace-vnet.md#secure-azure-key-vault) |
+ | __Azure Storage Account__ | [Service and private endpoint](../how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts)</br>[Private endpoint](../how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts) | [Grant access from Azure resource instances](/azure/storage/common/storage-network-security#grant-access-from-azure-resource-instances)</br>**or**</br>[Grant access to trusted Azure services](/azure/storage/common/storage-network-security#grant-access-to-trusted-azure-services) |
+ | __Azure Container Registry__ | [Private endpoint](/azure/container-registry/container-registry-private-link) | [Allow trusted services](/azure/container-registry/allow-access-trusted-services) |
+++
+For detailed instructions on how to complete these steps, see [Secure an Azure Machine Learning workspace](../how-to-secure-workspace-vnet.md).
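+
+If you prefer to create the Private Link-enabled workspace with the Python SDK v1 rather than the portal, a minimal sketch follows. The workspace, virtual network, subnet, subscription, and resource group values are placeholders, and the example assumes the `azureml-core` package is installed:
+
+```python
+from azureml.core import Workspace, PrivateEndPointConfig
+
+# Private endpoint configuration for the workspace; all names are placeholders.
+pe_config = PrivateEndPointConfig(name="myworkspace-pe",
+                                  vnet_name="myvnet",
+                                  vnet_subnet_name="default")
+
+# Create the workspace and connect it to the VNet through the private endpoint.
+ws = Workspace.create(name="myworkspace",
+                      subscription_id="<subscription-id>",
+                      resource_group="myresourcegroup",
+                      location="eastus",
+                      private_endpoint_config=pe_config,
+                      private_endpoint_auto_approval=True)
+```
+
+After the private endpoint is approved, traffic between your VNet and the workspace flows over the private IP addresses described earlier.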
+
+### Limitations
+
+Securing your workspace and associated resources within a virtual network has the following limitations:
+- All resources must be behind the same VNet. However, they can be in separate subnets within that VNet.
+
+## Secure the training environment
+
+In this section, you learn how to secure the training environment in Azure Machine Learning. You also learn how Azure Machine Learning completes a training job to understand how the network configurations work together.
+
+To secure the training environment, use the following steps:
+
+1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
+1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+
+ > [!TIP]
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
++
+For detailed instructions on how to complete these steps, see [Secure a training environment](how-to-secure-training-vnet.md).
+
+### Example training job submission
+
+In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This shows you how all your configurations work together to secure communication.
+
+1. The client uploads training scripts and training data to storage accounts that are secured with a service or private endpoint.
+
+1. The client submits a training job to the Azure Machine Learning workspace through the private endpoint.
+
+1. Azure Batch service receives the job from the workspace. It then submits the training job to the compute environment through the public load balancer for the compute resource.
+
+1. The compute resource receives the job and begins training. The compute resource uses information stored in key vault to access storage accounts to download training files and upload output.
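+
+The following SDK v1 sketch illustrates steps 1 and 2 from the client's perspective. The folder paths, script name, and cluster name are placeholders; from inside the virtual network, the workspace and storage calls resolve to the private endpoints configured earlier:
+
+```python
+from azureml.core import Workspace, Experiment, ScriptRunConfig
+
+# Connect to the workspace; inside the VNet this resolves to the workspace private endpoint.
+ws = Workspace.from_config()
+
+# Step 1: upload training scripts and data to the secured default storage account.
+datastore = ws.get_default_datastore()
+datastore.upload(src_dir="./data", target_path="training-data", overwrite=True)
+
+# Step 2: submit the training job; the request goes to the workspace through its private endpoint.
+src = ScriptRunConfig(source_directory="./src",
+                      script="train.py",
+                      compute_target=ws.compute_targets["cpucluster"])
+
+run = Experiment(ws, "secure-training-example").submit(src)
+run.wait_for_completion(show_output=True)
+```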
+
+### Limitations
+
+- Azure Machine Learning compute instances and compute clusters must be in the same VNet, region, and subscription as the workspace and its associated resources.
+
+## Secure the inferencing environment (v1)
++
+In this section, you learn the options available for securing an inferencing environment when using the Azure CLI extension for ML v1 or the Azure Machine Learning Python SDK v1. For a v1 deployment, we recommend that you use Azure Kubernetes Service (AKS) clusters for high-scale, production deployments.
+
+You have two options for AKS clusters in a virtual network:
+
+- Deploy or attach a default AKS cluster to your VNet.
+- Attach a private AKS cluster to your VNet.
+
+**Default AKS clusters** have a control plane with public IP addresses. You can add a default AKS cluster to your VNet during the deployment or attach a cluster after it's created.
+
+**Private AKS clusters** have a control plane that can only be accessed through private IP addresses. Private AKS clusters must be attached after the cluster is created.
+
+For detailed instructions on how to add default and private clusters, see [Secure an inferencing environment](how-to-secure-inferencing-vnet.md).
+
+Regardless of whether you use a default AKS cluster or a private AKS cluster, if your AKS cluster is behind a VNet, your workspace and its associated resources (storage, key vault, and ACR) must have private endpoints or service endpoints in the same VNet as the AKS cluster.
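+
+If you attach an existing AKS cluster with the SDK v1, a minimal sketch looks like the following. The resource group and cluster names are placeholders, and the cluster is assumed to already be deployed in the virtual network:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import AksCompute, ComputeTarget
+
+ws = Workspace.from_config()
+
+# Attach an existing AKS cluster that is already deployed in the virtual network.
+attach_config = AksCompute.attach_configuration(resource_group="myresourcegroup",
+                                                cluster_name="my-aks-cluster")
+
+aks_target = ComputeTarget.attach(workspace=ws,
+                                  name="aks-inference",
+                                  attach_configuration=attach_config)
+aks_target.wait_for_completion(show_output=True)
+```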
+
+The following network diagram shows a secured Azure Machine Learning workspace with a private AKS cluster attached to the virtual network.
++
+## Optional: Enable public access
+
+You can secure the workspace behind a VNet using a private endpoint and still allow access over the public internet. The initial configuration is the same as [securing the workspace and associated resources](#secure-the-workspace-and-associated-resources).
+
+After securing the workspace with a private endpoint, use the following steps to enable clients to develop remotely using either the SDK or Azure Machine Learning studio:
+
+1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace.
+1. [Configure the Azure Storage firewall](/azure/storage/common/storage-network-security?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet.
+
+## Optional: Enable studio functionality
+
+If your storage is in a VNet, you must use extra configuration steps to enable full functionality in studio. By default, the following features are disabled:
+
+* Preview data in the studio.
+* Visualize data in the designer.
+* Deploy a model in the designer.
+* Submit an AutoML experiment.
+* Start a labeling project.
+
+To enable full studio functionality, see [Use Azure Machine Learning studio in a virtual network](../how-to-enable-studio-virtual-network.md).
+
+### Limitations
+
+[ML-assisted data labeling](../how-to-create-image-labeling-projects.md#use-ml-assisted-data-labeling) doesn't support a default storage account behind a virtual network. Instead, use a storage account other than the default for ML-assisted data labeling.
+
+> [!TIP]
+> As long as it is not the default storage account, the account used by data labeling can be secured behind the virtual network.
+
+## Configure firewall settings
+
+Configure your firewall to control traffic between your Azure Machine Learning workspace resources and the public internet. While we recommend Azure Firewall, you can use other firewall products.
+
+For more information on firewall settings, see [Use workspace behind a Firewall](../how-to-access-azureml-behind-firewall.md).
+
+## Custom DNS
+
+If you need to use a custom DNS solution for your virtual network, you must add host records for your workspace.
+
+For more information on the required domain names and IP addresses, see [how to use a workspace with a custom DNS server](../how-to-custom-dns.md).
+
+## Microsoft Sentinel
+
+Microsoft Sentinel is a security solution that can integrate with Azure Machine Learning, for example, by using Jupyter notebooks provided through Azure Machine Learning. For more information, see [Use Jupyter notebooks to hunt for security threats](/azure/sentinel/notebooks).
+
+### Public access
+
+Microsoft Sentinel can automatically create a workspace for you if you are OK with a public endpoint. In this configuration, the security operations center (SOC) analysts and system administrators connect to notebooks in your workspace through Sentinel.
+
+For information on this process, see [Create an Azure ML workspace from Microsoft Sentinel](/azure/sentinel/notebooks-hunt?tabs=public-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel).
++
+### Private endpoint
+
+If you want to secure your workspace and associated resources in a VNet, you must create the Azure Machine Learning workspace first. You must also create a virtual machine 'jump box' in the same VNet as your workspace, and enable Azure Bastion connectivity to it. Similar to the public configuration, SOC analysts and administrators can connect using Microsoft Sentinel, but some operations must be performed using Azure Bastion to connect to the VM.
+
+For more information on this configuration, see [Create an Azure ML workspace from Microsoft Sentinel](/azure/sentinel/notebooks-hunt?tabs=private-endpoint#create-an-azure-ml-workspace-from-microsoft-sentinel).
++
+## Next steps
+
+This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
+
+* [Secure the workspace resources](../how-to-secure-workspace-vnet.md)
+* [Secure the training environment (v1)](how-to-secure-training-vnet.md)
+* [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
+* [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
+* [Use custom DNS](../how-to-custom-dns.md)
+* [Use a firewall](../how-to-access-azureml-behind-firewall.md)
+* [API platform network isolation](../how-to-configure-network-isolation-with-v2.md)
machine-learning How To Secure Inferencing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-inferencing-vnet.md
Title: Secure inferencing environments with virtual networks
+ Title: Secure v1 inferencing environments with virtual networks
-description: Use an isolated Azure Virtual Network to secure your Azure Machine Learning inferencing environment.
+description: Use an isolated Azure Virtual Network to secure your Azure Machine Learning inferencing environment (v1).
Last updated 07/28/2022
-# Secure an Azure Machine Learning inferencing environment with virtual networks
+# Secure an Azure Machine Learning inferencing environment with virtual networks (v1)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../../../includes/machine-learning-cli-v1.md)]
-In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning.
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK or CLI version you are using:"]
+> * [SDK/CLI v1](how-to-secure-inferencing-vnet.md)
+> * [SDK/CLI v2 (current version)](../how-to-secure-inferencing-vnet.md)
+
+In this article, you learn how to secure inferencing environments with a virtual network in Azure Machine Learning. This article applies to the SDK/CLI v1 workflow of deploying a model as a web service.
> [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
+
+ Title: Secure training environments with virtual networks (v1)
+
+description: Use an isolated Azure Virtual Network to secure your Azure Machine Learning training environment. SDK v1
+++++++ Last updated : 08/29/2022+++
+# Secure an Azure Machine Learning training environment with virtual networks (SDK v1)
++
+> [!div class="op_single_selector" title1="Select the Azure Machine Learning SDK version you are using:"]
+> * [SDK v1](how-to-secure-training-vnet.md)
+> * [SDK v2 (current version)](../how-to-secure-training-vnet.md)
+
+In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning using the Python SDK v1.
+
+> [!TIP]
+> For information on using the Azure Machine Learning __studio__ and the Python SDK __v2__, see [Secure training environment (v2)](../how-to-secure-training-vnet.md).
+>
+> For a tutorial on creating a secure workspace, see [Tutorial: Create a secure workspace in Azure portal](../tutorial-create-secure-workspace.md) or [Tutorial: Create a secure workspace using a template](../tutorial-create-secure-workspace-template.md).
+
+In this article, you learn how to secure the following training compute resources in a virtual network:
+> [!div class="checklist"]
+> - Azure Machine Learning compute cluster
+> - Azure Machine Learning compute instance
+> - Azure Databricks
+> - Virtual Machine
+> - HDInsight cluster
+
+## Prerequisites
+
++ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
+
++ An existing virtual network and subnet to use with your compute resources.
+
++ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
+
+ - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments.
+ - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
+
+ For more information on Azure RBAC with networking, see the [Networking built-in roles](/azure/role-based-access-control/built-in-roles#networking) article.
+
+### Azure Machine Learning compute cluster/instance
+
+* Compute clusters and instances create the following resources. If they're unable to create these resources (for example, if there's a resource lock on the resource group), then creation, scale-out, or scale-in may fail.
+
+ * IP address.
+ * Network Security Group (NSG).
+ * Load balancer.
+
+* The virtual network must be in the same subscription as the Azure Machine Learning workspace.
+* The subnet used for the compute instance or cluster must have enough unassigned IP addresses.
+
+ * A compute cluster can dynamically scale. If there aren't enough unassigned IP addresses, the cluster will be partially allocated.
+ * A compute instance only requires one IP address.
+
+* To create a compute cluster or instance [without a public IP address](#no-public-ip-for-compute-clusters-preview) (a preview feature), your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
+* If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
+* The subnet used to deploy compute cluster/instance shouldn't be delegated to any other service. For example, it shouldn't be delegated to ACI.
+
+### Azure Databricks
+
+* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
+* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+
+## Limitations
+
+### Azure Machine Learning compute cluster/instance
+
+* If you put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+
+ * One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+
+ * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
+ * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
+
+ The following screenshot shows an example of these rules:
+
+ :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" lightbox="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of the network security group.":::
++
+ > [!TIP]
+ > If your compute cluster or instance does not use a public IP address (a preview feature), these inbound NSG rules are not required.
+
+ * For compute cluster or instance, it's now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
+
+ * One load balancer
+
+ For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
+
+ For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
+
+ > [!IMPORTANT]
+ > These resources are limited by the subscription's [resource quotas](/azure/azure-resource-manager/management/azure-subscription-service-limits). If the virtual network resource group is locked, deletion of the compute cluster/instance will fail. The load balancer can't be deleted until the compute cluster/instance is deleted. Also make sure there's no Azure Policy assignment that prohibits creation of network security groups.
+
+* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps).
+
+* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. You can't use a compute cluster with the no public IP address configuration. For more information, see [Enable Azure Container Registry](../how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+
+* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
+
+ * If you plan to use Azure Machine Learning __studio__ to visualize data or use designer, the storage account must be __in the same subnet as the compute instance or cluster__.
+ * If you plan to use the __SDK__, the storage account can be in a different subnet.
+
+ > [!NOTE]
+ > Adding a resource instance for your workspace or selecting the checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from the compute.
+
+* When your workspace uses a private endpoint, the compute instance can only be accessed from inside the virtual network. If you use a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms`. Map this entry to the private IP address of the workspace private endpoint. For more information, see the [custom DNS](../how-to-custom-dns.md) article.
+* Virtual network service endpoint policies don't work for compute cluster/instance system storage accounts.
+* If storage and compute instance are in different regions, you may see intermittent timeouts.
+* If the Azure Container Registry for your workspace uses a private endpoint to connect to the virtual network, you can't use a managed identity for the compute instance. To use a managed identity with the compute instance, don't put the container registry in the VNet.
+* If you want to use Jupyter Notebooks on a compute instance:
+
+ * Don't disable websocket communication. Make sure your network allows websocket communication to `*.instances.azureml.net` and `*.instances.azureml.ms`.
+ * Make sure that your notebook is running on a compute resource behind the same virtual network and subnet as your data. When creating the compute instance, use **Advanced settings** > **Configure virtual network** to select the network and subnet.
+
+* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
+
+ * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview).
+ * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+
+ Guidance such as NSG rules, user-defined routes, and input/output requirements applies as normal when using a different region than the workspace.
+
+ > [!WARNING]
+ > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
+
+### Azure Databricks
+
+* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
+* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
+
+For more information on using Azure Databricks in a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
+
+### Azure HDInsight or virtual machine
+
+* Azure Machine Learning supports only virtual machines that are running Ubuntu.
+
+## Required public internet access
++
+For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](../how-to-access-azureml-behind-firewall.md).
+
+## Compute cluster
++
+The following code creates a new Machine Learning Compute cluster in the `default` subnet of a virtual network named `mynetwork`:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import ComputeTarget, AmlCompute
+from azureml.core.compute_target import ComputeTargetException
+
+# Connect to the existing workspace (assumes a config.json in the working directory)
+ws = Workspace.from_config()
+
+# The Azure virtual network name, subnet, and resource group
+vnet_name = 'mynetwork'
+subnet_name = 'default'
+vnet_resourcegroup_name = 'mygroup'
+
+# Choose a name for your CPU cluster
+cpu_cluster_name = "cpucluster"
+
+# Verify that the cluster doesn't exist already
+try:
+    cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
+    print("Found existing cpucluster")
+except ComputeTargetException:
+    print("Creating new cpucluster")
+
+    # Specify the configuration for the new cluster, including the virtual network settings
+    compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
+                                                           min_nodes=0,
+                                                           max_nodes=4,
+                                                           location="westus2",
+                                                           vnet_resourcegroup_name=vnet_resourcegroup_name,
+                                                           vnet_name=vnet_name,
+                                                           subnet_name=subnet_name)
+
+    # Create the cluster with the specified name and configuration
+    cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
+
+    # Wait for the cluster creation to complete, showing the output log
+    cpu_cluster.wait_for_completion(show_output=True)
+```
+
+When the creation process finishes, you train your model by using the cluster in an experiment. For more information, see [Select and use a compute target for training](../how-to-set-up-training-targets.md).
++
+### No public IP for compute clusters (preview)
+
+When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using the Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet, thus eliminating a significant threat vector. **No public IP** clusters help comply with the no public IP policies many enterprises have.
+
+> [!WARNING]
+> By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
+
+A compute cluster with **No public IP** enabled has **no inbound communication requirements** from the public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound traffic from source **VirtualNetwork** (any source port) to destination **VirtualNetwork** on destination ports **29876 and 29877**, and from source **AzureLoadBalancer** (any source port) to destination **VirtualNetwork** on destination port **44224**.
+
+**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
+A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure Private Link service and private endpoints and aren't Azure Machine Learning specific. Follow the instructions in [Disable network policies for Private Link service](/azure/private-link/disable-private-link-service-network-policy) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+
+For **outbound connections** to work, you need to set up an egress firewall such as Azure Firewall with user-defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](../how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute cluster is deployed. The route table entry can set the next hop to the private IP address of the firewall, with an address prefix of 0.0.0.0/0.
+
+You can use a service endpoint or private endpoint for your Azure container registry and Azure storage in the subnet in which cluster is deployed.
+
+To create a no public IP address compute cluster (a preview feature) in studio, select the **No public IP** checkbox in the virtual network section.
+You can also create a no public IP compute cluster through an ARM template. In the ARM template, set the `enableNodePublicIP` parameter to `false`.
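+
+If your version of the `azureml-core` SDK exposes the `enable_node_public_ip` parameter, you can also request this configuration from Python. The following sketch reuses the virtual network values from the earlier compute cluster example and is only a starting point:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import ComputeTarget, AmlCompute
+
+ws = Workspace.from_config()
+
+# Same VNet settings as the earlier example; enable_node_public_ip=False requests
+# the no public IP (preview) configuration for the cluster nodes.
+compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_D2_V2",
+                                                       min_nodes=0,
+                                                       max_nodes=4,
+                                                       vnet_resourcegroup_name="mygroup",
+                                                       vnet_name="mynetwork",
+                                                       subnet_name="default",
+                                                       enable_node_public_ip=False)
+
+cpu_cluster = ComputeTarget.create(ws, "cpucluster-no-pip", compute_config)
+cpu_cluster.wait_for_completion(show_output=True)
+```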
++
+**Troubleshooting**
+
+* If you get this error message during creation of cluster `The specified subnet has PrivateLinkServiceNetworkPolicies or PrivateEndpointNetworkEndpoints enabled`, follow the instructions from [Disable network policies for Private Link service](/azure/private-link/disable-private-link-service-network-policy) and [Disable network policies for Private Endpoint](/azure/private-link/disable-private-endpoint-network-policy).
+
+* If job execution fails with connection issues to ACR or Azure Storage, verify that you've added ACR and Azure Storage service endpoints/private endpoints to the subnet and that ACR/Azure Storage allows access from the subnet.
+
+* To verify that you've created a no public IP cluster, look at the cluster details in studio; the **No Public IP** property is set to **true** under resource properties.
+
+## Compute instance
+
+For steps on how to create a compute instance deployed in a virtual network, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
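+
+For reference, a minimal SDK v1 sketch for creating a compute instance in a virtual network follows. The instance, virtual network, and subnet names are placeholders:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import ComputeTarget, ComputeInstance
+
+ws = Workspace.from_config()
+
+# Deploy the compute instance into an existing subnet of the virtual network.
+instance_config = ComputeInstance.provisioning_configuration(vm_size="STANDARD_DS3_V2",
+                                                             ssh_public_access=False,
+                                                             vnet_resourcegroup_name="mygroup",
+                                                             vnet_name="mynetwork",
+                                                             subnet_name="default")
+
+instance = ComputeTarget.create(ws, "my-secure-ci", instance_config)
+instance.wait_for_completion(show_output=True)
+```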
+
+### No public IP for compute instances (preview)
+
+When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using the Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of the compute instance node from the internet, thus eliminating a significant threat vector. Compute instances also do packet filtering to reject any traffic from outside the virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
+
+> [!WARNING]
+> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) with a public IP.
+
+For **outbound connections** to work, you need to set up an egress firewall such as Azure Firewall with user-defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](../how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set the next hop to the private IP address of the firewall, with an address prefix of 0.0.0.0/0.
+
+A compute instance with **No public IP** enabled has **no inbound communication requirements** from the public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound traffic from source **VirtualNetwork** (any source port) to destination **VirtualNetwork** on destination ports **29876, 29877, and 44224**.
+
+A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure Private Link service and private endpoints and aren't Azure Machine Learning specific. Follow the instructions in [Disable network policies for Private Link service](/azure/private-link/disable-private-link-service-network-policy) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+
+To create a no public IP address compute instance (a preview feature) in studio, select the **No public IP** checkbox in the virtual network section.
+You can also create a no public IP compute instance through an ARM template. In the ARM template, set the `enableNodePublicIP` parameter to `false`.
+
+Next steps:
+* [Use custom DNS](../how-to-custom-dns.md)
+* [Use a firewall](../how-to-access-azureml-behind-firewall.md)
++
+## Inbound traffic
++
+For more information on input and output traffic requirements for Azure Machine Learning, see [Use a workspace behind a firewall](../how-to-access-azureml-behind-firewall.md).
+
+## Next steps
+
+This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
+
+* [Virtual network overview (v1)](how-to-network-security-overview.md)
+* [Secure the workspace resources](../how-to-secure-workspace-vnet.md)
+* [Secure inference environment (v1)](how-to-secure-inferencing-vnet.md)
+* [Enable studio functionality](../how-to-enable-studio-virtual-network.md)
+* [Use custom DNS](../how-to-custom-dns.md)
+* [Use a firewall](../how-to-access-azureml-behind-firewall.md)
marketplace What Is New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/what-is-new.md
Learn about important updates in the commercial marketplace program of Partner C
| Category | Description | Date | | | | |
+| Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Change history for Microsoft Publisher Agreement version 8.0 – October 2022 update](/legal/marketplace/mpa-change-history-october-2022). | 2022-09-14 |
| Policy | We've updated the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). For change history, see [Change history for Microsoft Publisher Agreement version 8.0 – May 2022 update](/legal/marketplace/mpa-change-history-may-2022). | 2022-04-15 | | Offers | Added new articles to lead you step-by-step through the process of [testing a SaaS offer](test-saas-overview.md). | 2022-03-30 | | Payouts | We updated the payment schedule for [Payout schedules and processes](/partner-center/payout-policy-details). | 2022-01-19 |
migrate How To Discover Sql Existing Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-sql-existing-project.md
Last updated 03/23/2021
This article describes how to discover web apps and SQL Server instances and databases in an [Azure Migrate](./migrate-services-overview.md) project that was created before the preview of Azure SQL assessment feature and/or before the preview of Azure App Service assessment feature.
-Discovering ASP.NET web apps and SQL Server instances and databases running on on-premises machines helps identify and tailor a migration path to Azure. The Azure Migrate appliance performs this discovery using the Windows OS domain or non-domain credentials or SQL Server authentication credentials that have access to the SQL Server instances and databases running on the targeted servers.
+Discovering web apps and SQL Server instances and databases running on on-premises machines helps identify and tailor a migration path to Azure. The Azure Migrate appliance performs this discovery using the Windows OS domain or non-domain credentials or SQL Server authentication credentials that have access to the SQL Server instances and databases running on the targeted servers.
This discovery process is agentless; that is, nothing is installed on the target servers. ## Before you start
This discovery process is agentless that is, nothing is installed on the target
- Created an [Azure Migrate project](./create-manage-projects.md) before the announcement of SQL and web apps assessment feature for your region - Added the [Azure Migrate: Discovery and assessment](./how-to-assess.md) tool to a project - Review [app-discovery support and requirements](./migrate-support-matrix-vmware.md#vmware-requirements).-- In case you are discovering assets on VMware environment then, Make sure servers where you're running app-discovery have PowerShell version 2.0 or later installed, and VMware Tools (later than 10.2.0) is installed.
+- If you're discovering assets in a VMware environment, make sure that the servers where you're running app-discovery have PowerShell version 2.0 or later installed, and that VMware Tools (later than 10.2.0) is installed.
- Check the [requirements](./migrate-appliance.md) for deploying the Azure Migrate appliance. - Verify that you have the [required roles](./create-manage-projects.md#verify-permissions) in the subscription to create resources. - Ensure that your appliance has access to the internet
This discovery process is agentless that is, nothing is installed on the target
> Even though the processes in this document are covered for VMware, the processes are similar for Microsoft Hyper-V and Physical environment. > Discovery and assessment for SQL Server instances and databases is available across the Microsoft Hyper-V and Physical environment also.
-## Enable discovery of ASP.NET web apps and SQL Server instances and databases
+## Enable discovery of web apps and SQL Server instances and databases
1. In your Azure Migrate project, either - Select **Not enabled** on the Hub tile, or :::image type="content" source="./media/how-to-discover-sql-existing-project/hub-not-enabled.png" alt-text="Azure Migrate hub tile with SQL and web apps discovery not enabled"::: - Select **Not enabled** on any entry in the Server discovery page under SQL instances or Web apps column :::image type="content" source="./media/how-to-discover-sql-existing-project/discovery-not-enabled.png" alt-text="Azure Migrate discovered servers blade with SQL and web apps discovery not enabled":::
-2. To Discover ASP.NET web apps and SQL Server instances and databases follow the steps entailed:
+2. To discover web apps and SQL Server instances and databases, follow these steps:
- Select **Upgrade**, to create the required resource. :::image type="content" source="./media/how-to-discover-sql-existing-project/discovery-upgrade-appliance.png" alt-text="Button to upgrade the Azure Migrate appliance"::: - Validate that the services running on the appliance are updated to the latest versions. To do so, launch the Appliance configuration manager from your appliance server and select view appliance services from the Setup prerequisites panel. - Appliance and its components are automatically updated :::image type="content" source="./media/how-to-discover-sql-existing-project/appliance-services-version.png" alt-text="Check the appliance version"::: - In the manage credentials and discovery sources panel of the Appliance configuration manager, add Domain or SQL Server Authentication credentials that have Sysadmin access on the SQL Server instance and databases to be discovered.
- - ASP.NET web apps discovery works with both domain and non-domain Windows OS credentials as long as the account used has local admin privileges on servers.
+ - Web apps discovery works with both domain and non-domain Windows OS credentials as long as the account used has local admin privileges on servers.
You can leverage the automatic credential-mapping feature of the appliance, as highlighted [here](./tutorial-discover-vmware.md#start-continuous-discovery). Some points to note:
- - Ensure that software inventory is enabled already, or provide Domain or Non-domain credentials to enable the same. Software inventory must be performed to discover SQL Server instances and ASP.NET web apps.
- - Appliance will attempt to validate the Domain credentials with AD, as they are added. Ensure that appliance server has network line of sight to the AD server associated with the credentials. Non-domain credentials and credentials associated with SQL Server Authentication are not validated.
+ - Ensure that software inventory is already enabled, or provide domain or non-domain credentials to enable it. Software inventory must be performed to discover SQL Server instances and web apps.
+ - The appliance will attempt to validate the domain credentials with AD as they're added. Ensure that the appliance server has network line of sight to the AD server associated with the credentials. Non-domain credentials and credentials associated with SQL Server Authentication aren't validated.
-3. Once the desired credentials are added, please select Start Discovery, to begin the scan.
+3. Once the desired credentials are added, select **Start Discovery** to begin the scan.
> [!Note]
->Please allow web apps and SQL discovery to run for sometime before creating assessments for Azure App Service or Azure SQL. If the discovery of web apps and SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
+> Allow web apps and SQL discovery to run for some time before creating assessments for Azure App Service or Azure SQL. If the discovery of web apps and SQL Server instances and databases is not allowed to complete, the respective instances are marked as **Unknown** in the assessment report.
## Next steps
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
Learn more about [assessments](concepts-assessment-calculation.md).
VMware | Details | **vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses are not supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
-**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory and agentless dependency analysis, the account must have privileges for guest operations on VMware VMs.
+**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory, agentless dependency analysis, and web app and SQL discovery, the account must have privileges for guest operations on VMware VMs.
## Server requirements
Support | Details
> > However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance.[Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose.
-## ASP.NET web apps discovery requirements
+## Web apps discovery requirements
-[Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate will perform web apps discovery on the server.
-User can add both domain and non-domain credentials on appliance. Please make sure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesnΓÇÖt have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in source environment.
-After the appliance is connected, it gathers configuration data for IIS web server and ASP.NET web apps. Web apps configuration data is updated once every 24 hours.
+[Software inventory](how-to-discover-applications.md) identifies the web server role on discovered servers. If a server is found to have a web server installed, Azure Migrate discovers web apps on the server.
+The user can add both domain and non-domain credentials on the appliance. Ensure that the account used has local admin privileges on source servers. Azure Migrate automatically maps credentials to the respective servers, so one doesn't have to map them manually. Most importantly, these credentials are never sent to Microsoft and remain on the appliance running in the source environment.
+After the appliance is connected, it gathers configuration data for ASP.NET web apps (IIS web server) and Java web apps (Tomcat servers). Web apps configuration data is updated once every 24 hours.
-Support | Details
- |
-**Supported servers** | Currently supported only for windows servers running IIS in your VMware environment.
-**Windows servers** | Windows Server 2008 R2 and later are supported.
-**Linux servers** | Currently not supported.
-**IIS access** | Web apps discovery requires a local admin user account.
-**IIS versions** | IIS 7.5 and later are supported.
+Support | ASP.NET web apps | Java web apps
+ | |
+**Stack** | VMware only. | VMware only.
+**Windows servers** | Windows Server 2008 R2 and later are supported. | Not supported.
+**Linux servers** | Not supported. | Ubuntu Linux 16.04/18.04/20.04, Debian 7/8, CentOS 6/7, Red Hat Enterprise Linux 5/6/7.
+**Web server versions** | IIS 7.5 and later. | Tomcat 8 or later.
+**Required privileges** | local admin | root or sudo user
> [!NOTE] > Data is always encrypted at rest and during transit.
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
Requirement | Details
| **vCenter Server/ESXi host** | You need a server running vCenter Server version 6.7, 6.5, 6.0, or 5.5.<br /><br /> Servers must be hosted on an ESXi host running version 5.5 or later.<br /><br /> On the vCenter Server, allow inbound connections on TCP port 443 so that the appliance can collect configuration and performance metadata.<br /><br /> The appliance connects to vCenter Server on port 443 by default. If the server running vCenter Server listens on a different port, you can modify the port when you provide the vCenter Server details in the appliance configuration manager.<br /><br /> On the ESXi hosts, make sure that inbound access is allowed on TCP port 443 for discovery of installed applications and for agentless dependency analysis on servers. **Azure Migrate appliance** | vCenter Server must have these resources to allocate to a server that hosts the Azure Migrate appliance:<br /><br /> - 32 GB of RAM, 8 vCPUs, and approximately 80 GB of disk storage.<br /><br /> - An external virtual switch and internet access on the appliance server, directly or via a proxy.
-**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#aspnet-web-apps-discovery-requirements).
+**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless).<br /><br /> For discovery of installed applications and for agentless dependency analysis, VMware Tools (version 10.2.1 or later) must be installed and running on servers. Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover SQL Server instances and databases, check [supported SQL Server and Windows OS versions and editions](migrate-support-matrix-vmware.md#sql-server-instance-and-database-discovery-requirements) and Windows authentication mechanisms.<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
## Prepare an Azure user account
The appliance must connect to vCenter Server to discover the configuration and p
### Provide server credentials
-In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of ASP.NET web apps in your VMware environment.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
+In **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis, discovery of SQL Server instances and databases and discovery of web apps in your VMware environment.**, you can provide multiple server credentials. If you don't want to use any of these appliance features, you can skip this step and proceed with vCenter Server discovery. You can change this option at any time.
:::image type="content" source="./media/tutorial-discover-vmware/appliance-server-credentials-mapping.png" alt-text="Screenshot that shows providing credentials for software inventory, dependency analysis, and s q l server discovery.":::
To start vCenter Server discovery, select **Start discovery**. After the discove
* Discovery of installed applications might take longer than 15 minutes. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * [Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate will perform web apps discovery on the server. Web apps configuration data is updated once every 24 hours. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
-* ASP.NET web apps and SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
+* Web apps and SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
* By default, Azure Migrate uses the most secure way of connecting to SQL instances that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings, by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) to understand what to choose. :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
## Update (August 2022) -- SQL discovery and assessment for Microsoft Hyper-V and Physical/ Bare-metal environments as well as IaaS services of other public cloud.
+- SQL discovery and assessment for Microsoft Hyper-V and Physical/Bare-metal environments as well as IaaS services of other public clouds.
+- Java web apps discovery on Apache Tomcat running on Linux servers hosted in VMware environment.
+- Enhanced discovery data collection, including detection of database connection strings, application directories, and authentication mechanisms for ASP.NET web apps.
## Update (June 2022)
open-datasets Dataset Oxford Covid Government Response Tracker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-oxford-covid-government-response-tracker.md
As of June 8, 2020 they contained 27,919 rows (CSV 4.9 MB, JSON 20.9 MB, JSONL 2
## Data source
-The source of this data is Thomas Hale, Sam Webster, Anna Petherick, Toby Phillips, and Beatriz Kira. (2020). Oxford COVID-19 Government Response Tracker. [Blavatnik School of Government](https://www.bsg.ox.ac.uk/). Raw data is ingested daily from the [latest OxCGRT csv file](https://github.com/OxCGRT/covid-policy-tracker/blob/master/data/OxCGRT_latest.csv). For more information on this dataset, including how it is collected, see the [Government tracker response site](https://www.bsg.ox.ac.uk/research/research-projects/covid-19-government-response-tracker).
+The source of this data is Thomas Hale, Sam Webster, Anna Petherick, Toby Phillips, and Beatriz Kira. (2020). Oxford COVID-19 Government Response Tracker. [Blavatnik School of Government](https://www.bsg.ox.ac.uk/). Raw data is ingested daily from the [latest OxCGRT csv file](https://github.com/OxCGRT/covid-policy-tracker/blob/master/data/OxCGRT_nat_latest.csv). For more information on this dataset, including how it is collected, see the [Government tracker response site](https://www.bsg.ox.ac.uk/research/research-projects/covid-19-government-response-tracker).
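If you want to pull the same file for local inspection, a minimal sketch follows; the raw-content URL is inferred from the repository link above and isn't part of the dataset documentation.

```powershell
# Download the latest OxCGRT CSV and preview the first few rows.
# The raw.githubusercontent.com URL is inferred from the repository path and may change.
$uri = "https://raw.githubusercontent.com/OxCGRT/covid-policy-tracker/master/data/OxCGRT_nat_latest.csv"
Invoke-WebRequest -Uri $uri -OutFile "OxCGRT_nat_latest.csv"
Import-Csv "OxCGRT_nat_latest.csv" | Select-Object -First 5
```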
## Data quality The OxCGRT does not guarantee the accuracy or timeliness of the data. For more information, see the [data quality statement](https://github.com/OxCGRT/covid-policy-tracker#data-quality).
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| Brazil South | :heavy_check_mark: (v3 only) | :x: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Central India | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | China East 3 | :heavy_check_mark: | :x: | :x:| | China North 3 | :heavy_check_mark: | :x: | :x:|
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
For more information, see the Eset documentation.
| **DCR support** | [Workspace transformation DCR](../azure-monitor/logs/tutorial-workspace-transformations-portal.md) | | **Kusto function alias:** | ExabeamEvent | | **Kusto function URL:** | https://aka.ms/Sentinel-Exabeam-parser |
-| **Vendor documentation/<br>installation instructions** | [Configure Advanced Analytics system activity notifications](https://docs.exabeam.com/en/advanced-analytics/i54/advanced-analytics-administration-guide/113254-configure-advanced-analytics.html#UUID-7ce5ff9d-56aa-93f0-65de-c5255b682a08) |
+| **Vendor documentation/<br>installation instructions** | [Configure Advanced Analytics system activity notifications](https://docs.exabeam.com/en/advanced-analytics/i56/advanced-analytics-administration-guide/125371-configure-advanced-analytics.html#UUID-6d28da8d-6d3e-5aa7-7c12-e67dc804f894) |
| **Supported by** | Microsoft |
site-recovery Azure To Azure About Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-about-networking.md
-This article provides networking guidance when you're replicating and recovering Azure VMs from one region to another, using [Azure Site Recovery](site-recovery-overview.md).
+This article provides networking guidance for platform connectivity when you're replicating Azure VMs from one region to another, using [Azure Site Recovery](site-recovery-overview.md).
## Before you start
spring-apps How To Enable Redundancy And Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-redundancy-and-disaster-recovery.md
The following limitations apply when you create an Azure Spring Apps Service ins
- Zone redundancy is not available in basic tier. - You can enable zone redundancy only when you create a new Azure Spring Apps Service instance. - If you enable your own resource in Azure Spring Apps, such as your own persistent storage, make sure to enable zone redundancy for the resource. For more information, see [How to enable your own persistent storage in Azure Spring Apps](how-to-custom-persistent-storage.md).-- Zone redundancy ensures that underlying VM nodes are distributed evenly across all availability zones but does not guarantee even distribution of app instances. If an app instance fails because its located zone goes down, Azure Spring Apps creates a new app instance for this app on nodes in other availability zones.
+- Zone redundancy ensures that underlying VM nodes are distributed evenly across all availability zones but does not guarantee even distribution of app instances. If an app instance fails because its located zone goes down, Azure Spring Apps creates a new app instance for this app on a node in another availability zone.
- Geo-disaster recovery is not the purpose of zone redundancy. To protect your service from regional outages, see the [Customer-managed geo-disaster recovery](#customer-managed-geo-disaster-recovery) section later in this article. ## Create an Azure Spring Apps instance with zone redundancy enabled
static-web-apps Deploy Nuxtjs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nuxtjs.md
Title: "Tutorial: Deploy server-rendered Nuxt.js websites on Azure Static Web Apps"
+ Title: "Tutorial: Deploy static-rendered Nuxt.js websites on Azure Static Web Apps"
description: "Generate and deploy Nuxt.js dynamic sites with Azure Static Web Apps."
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-storage-monitoring-scenarios.md
Title: Best practices for monitoring Azure Blob Storage description: Learn best practice guidelines and how to apply them when using metrics and logs to monitor your Azure Blob Storage.
+recommendations: false
storage Monitor Blob Storage Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage-reference.md
Title: Azure Blob Storage monitoring data reference | Microsoft Docs description: Log and metrics reference for monitoring data from Azure Blob Storage.
+recommendations: false
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Title: Monitoring Azure Blob Storage-
+recommendations: false
description: Learn how to monitor the performance and availability of Azure Blob Storage. Monitor Azure Blob Storage data, learn about configuration, and analyze metric and log data.
storage Storage Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-insights-overview.md
Title: Monitor Azure Storage services with Azure Monitor Storage insights | Microsoft Docs description: This article describes the Storage insights feature of Azure Monitor that provides storage admins with a quick understanding of performance and utilization issues with their Azure Storage accounts.
+recommendations: false
Each workbook is saved in the storage account that you saved it in. Try to find
- Configure [metric alerts](../../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues. - Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md).--- For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
Title: No code stream processing using Azure Stream Analytics
-description: Learn about processing your real time data streams in Azure Event Hubs using the Azure Stream Analytics no code editor.
+ Title: No-code stream processing through Azure Stream Analytics
+description: Learn about processing your real-time data streams in Azure Event Hubs by using the Azure Stream Analytics no-code editor.
Last updated 08/26/2022
-# No code stream processing using Azure Stream Analytics (Preview)
+# No-code stream processing through Azure Stream Analytics (preview)
-You can process your real time data streams in Azure Event Hubs using Azure Stream Analytics. The no code editor allows you to easily develop a Stream Analytics job without writing a single line of code. Within minutes, you can develop and run a job that tackles many scenarios, including:
+You can process your real-time data streams in Azure Event Hubs by using Azure Stream Analytics. The no-code editor allows you to develop a Stream Analytics job without writing a single line of code. In minutes, you can develop and run a job that tackles many scenarios, including:
-- Filtering and ingesting to Azure Synapse SQL-- Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2-- Materializing data in Azure Cosmos DB
+- Filtering and ingesting to Azure Synapse SQL.
+- Capturing your Event Hubs data in Parquet format in Azure Data Lake Storage Gen2.
+- Materializing data in Azure Cosmos DB.
The experience provides a canvas that allows you to connect to input sources to quickly see your streaming data. Then you can transform it before writing to your destination of choice in Azure. You can: -- Modify input schema-- Perform data preparation operations like joins and filters-- Tackle advanced scenarios such as time-window aggregations (tumbling, hopping, and session windows) for group-by operations
+- Modify input schemas.
+- Perform data preparation operations like joins and filters.
+- Approach advanced scenarios like time-window aggregations (tumbling, hopping, and session windows) for group-by operations.
After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](stream-analytics-job-metrics.md) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running. ## Prerequisites
-Before you develop your Stream Analytics jobs using the no code editor, you must meet these requirements.
+Before you develop your Stream Analytics jobs by using the no-code editor, you must meet these requirements:
-- The Azure Event Hubs namespace and any target destination resource where you want to write must be publicly accessible and can't be in an Azure Virtual Network.
+- The Azure Event Hubs namespace and any target destination resource where you want to write must be publicly accessible and can't be in an Azure virtual network.
- You must have the required permissions to access the streaming input and output resources. - You must maintain permissions to create and modify Azure Stream Analytics resources.
Before you develop your Stream Analytics jobs using the no code editor, you must
A Stream Analytics job is built on three main components: _streaming inputs_, _transformations_, and _outputs_. You can have as many components as you want, including multiple inputs, parallel branches with multiple transformations, and multiple outputs. For more information, see [Azure Stream Analytics documentation](index.yml).
-To use the no code editor to easily create a Stream Analytics job, open an Event Hubs instance. Select Process Data and then select any template.
+To use the no-code editor to create a Stream Analytics job, open an Event Hubs instance. Select **Process Data**, and then select any template.
The following screenshot shows a finished Stream Analytics job. It highlights all the sections available to you while you author.
-1. **Ribbon** - On the ribbon, sections follow the order of a classic/ analytics process: Event Hubs as input (also known as data source), transformations (streaming ETL operations), outputs, a button to save your progress and a button to start the job.
-2. **Diagram view** - A graphical representation of your Stream Analytics job, from input to operations to outputs.
-3. **Side pane** - Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output.
-4. **Tabs for data preview, authoring errors, runtime logs, and metrics** - For each tile shown, the data preview will show you results for that step (live for inputs and on-demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform. It also provides the job metrics for you to monitor running job's health.
+1. **Ribbon**: On the ribbon, sections follow the order of a classic analytics process: an event hub as input (also known as a data source), transformations (streaming ETL operations), outputs, a button to save your progress, and a button to start the job.
+2. **Diagram view**: This is a graphical representation of your Stream Analytics job, from input to operations to outputs.
+3. **Side pane**: Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output.
+4. **Tabs for data preview, authoring errors, runtime logs, and metrics**: For each tile, the data preview will show you results for that step (live for inputs; on demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform. It also provides the job metrics for you to monitor the running job's health.
## Event Hubs as the streaming input
-Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
+Azure Event Hubs is a big-data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored through any real-time analytics provider or batching/storage adapter.
-To configure an event hub as an input for your job, select the **Event Hub** symbol. A tile appears in the diagram view, including a side pane for its configuration and connection.
-When connecting to your event hub in no code editor, it is recommended that you create a new Consumer Group (which is the default option). This helps in avoiding the event hub reach the concurrent readers' limit. To understand more about Consumer groups and whether you should select an existing Consumer Group or create a new one, see [Consumer groups](../event-hubs/event-hubs-features.md). If your event hub is in Basic tier, you can only use the existing $Default Consumer group. If your event hub is in Standard or Premium tiers, you can create a new consumer group.
+To configure an event hub as an input for your job, select the **Event Hub** icon. A tile appears in the diagram view, including a side pane for its configuration and connection.
- ![Consumer group selection while setting up Event Hub](./media/no-code-stream-processing/consumer-group-nocode.png)
+When you're connecting to your event hub in the no-code editor, we recommend that you create a new consumer group (which is the default option). This approach helps prevent the event hub from reaching the concurrent readers' limit. To understand more about consumer groups and whether you should select an existing consumer group or create a new one, see [Consumer groups](../event-hubs/event-hubs-features.md).
-When connecting to the Event Hubs, if you choose 'Managed Identity' as Authentication mode, then the Azure Event Hubs Data owner role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for Event Hubs, see [Event Hubs Managed Identity](event-hubs-managed-identity.md).
-Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+If your event hub is in the Basic tier, you can use only the existing **$Default** consumer group. If your event hub is in a Standard or Premium tier, you can create a new consumer group.
- ![Authentication method is selected as Managed Identity](./media/no-code-stream-processing/msi-eh-nocode.png)
+![Screenshot that shows consumer group selection while setting up an event hub.](./media/no-code-stream-processing/consumer-group-nocode.png)
-After you set up your event hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed. When Stream Analytics job detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+When you're connecting to the event hub, if you select **Managed Identity** as the authentication mode, the Azure Event Hubs Data Owner role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for an event hub, see [Use managed identities to access an event hub from an Azure Stream Analytics job](event-hubs-managed-identity.md).
-You can always edit the field names, or remove or change the data type, by selecting the three dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
+Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+![Screenshot that shows managed identity selected as the authentication method.](./media/no-code-stream-processing/msi-eh-nocode.png)
+
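If you need to grant that role yourself rather than relying on the editor to do it, a hedged sketch with the Az PowerShell module follows; the object ID and namespace scope are hypothetical placeholders.

```powershell
# A sketch: manually assign the Azure Event Hubs Data Owner role to the job's
# managed identity at the Event Hubs namespace scope. All values are placeholders.
New-AzRoleAssignment `
    -ObjectId "<managed-identity-object-id>" `
    -RoleDefinitionName "Azure Event Hubs Data Owner" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.EventHub/namespaces/my-namespace"
```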
+After you set up your event hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed.
+
+When Stream Analytics jobs detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+
+You can always edit the field names, or remove or change the data type, by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image.
+ The available data types are: -- **DateTime** - Date and time field in ISO format-- **Float** - Decimal number-- **Int** - Integer number-- **Record** - Nested object with multiple records-- **String** - Text
+- **DateTime**: Date and time field in ISO format.
+- **Float**: Decimal number.
+- **Int**: Integer number.
+- **Record**: Nested object with multiple records.
+- **String**: Text. (A hypothetical sample event that uses these types is sketched after this list.)
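To make the types concrete, here's a hypothetical sample event; the field names and values are illustrative only.

```powershell
# Hypothetical sample event showing each no-code editor data type.
$sampleEvent = [ordered]@{
    EntryTime    = "2022-09-06T10:52:00Z"                          # DateTime (ISO format)
    Toll         = 7.5                                             # Float
    VehicleCount = 3                                               # Int
    Vehicle      = @{ Make = "Contoso"; LicensePlate = "ABC123" }  # Record (nested object)
    State        = "WA"                                            # String
}
$sampleEvent | ConvertTo-Json -Depth 3
```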
## Reference data inputs
-Reference data is either static or changes slowly over time. It is typically used to enrich incoming streaming and do lookups in your job. For example, you might join data in the data stream input to data in the reference data, much as you would perform a SQL join to look up static values.For more information about reference data inputs, see [Using reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md).
-
-No-code editor now supports two reference data sources:
-- Azure Data Lake Storage (ADLS) Gen2-- Azure SQL database
+Reference data is static or changes slowly over time. It's typically used to enrich incoming streams and do lookups in your job. For example, you might join data stream input to reference data, much as you would perform a SQL join to look up static values. For more information about reference data inputs, see [Use reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md).
+The no-code editor now supports two reference data sources:
+- Azure Data Lake Storage Gen2
+- Azure SQL Database
-### ADLS Gen2 as reference data
-Reference data is modeled as a sequence of blobs in ascending order of the date/time specified in the blob name. Blobs can only be added to the end of the sequence by using a date/time greater than the one specified by the last blob in the sequence. Blobs are defined in the input configuration. For more information, see [Use reference data from Blob Storage for a Stream Analytics job](stream-analytics-use-reference-data.md).
+### Azure Data Lake Storage Gen2 as reference data
-First, you have to select **Reference ADLS Gen2** under **Inputs** section on the ribbon. To see details about each field, see Azure Blob Storage section in [Azure Blob Storage Reference data input](stream-analytics-use-reference-data.md#azure-blob-storage).
+Reference data is modeled as a sequence of blobs in ascending order of the date/time combination specified in the blob name. You can add blobs to the end of the sequence only by using a date/time greater than the one that the last blob specified in the sequence. Blobs are defined in the input configuration.
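As a hedged illustration of that ordering, assuming a `{date}/{time}` path pattern, the blob names below are placeholders:

```powershell
# Blobs must be named so they sort in ascending date/time order; any new blob
# must use a date/time later than the newest blob already in the sequence.
$referenceBlobs = @(
    "reference/2022-09-05/10-00/products.json",
    "reference/2022-09-06/10-00/products.json",
    "reference/2022-09-07/10-00/products.json"   # newest blob in the sequence
)
$referenceBlobs | Sort-Object   # ascending date/time order, as Stream Analytics expects
```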
- ![Configure ADLS Gen2 as reference data input in no code editor](./media/no-code-stream-processing/blob-referencedata-nocode.png)
+First, under the **Inputs** section on the ribbon, select **Reference ADLS Gen2**. To see details about each field, see the section about Azure Blob Storage in [Use reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md#azure-blob-storage).
-Then, upload a JSON of array file and the fields in the file will be detected. Use this reference data to perform transformation with Streaming input data from Event Hubs.
+![Screenshot that shows fields for configuring Azure Data Lake Storage Gen2 as input in the no-code editor.](./media/no-code-stream-processing/blob-referencedata-nocode.png)
- ![Upload JSON for reference data](./media/no-code-stream-processing/blob-referencedata-upload-nocode.png)
+Then, upload a JSON array file. The fields in the file will be detected. Use this reference data to perform transformations with the streaming input data from Event Hubs.
-### SQL Database as reference data
+![Screenshot that shows selections for uploading JSON for reference data.](./media/no-code-stream-processing/blob-referencedata-upload-nocode.png)
-Azure Stream Analytics supports Azure SQL Database as a source of input for reference data as well. For more information, see [Azure SQL Database Reference data input](stream-analytics-use-reference-data.md#azure-sql-database). You can use SQL Database as reference data for your Stream Analytics job in the no-code editor.
+### Azure SQL Database as reference data
-To configure SQL database as reference data input, simply select the **Reference SQL Database** under **Inputs** section on the ribbon.
+You can use Azure SQL Database as reference data for your Stream Analytics job in the no-code editor. For more information, see the section about SQL Database in [Use reference data for lookups in Stream Analytics](stream-analytics-use-reference-data.md#azure-sql-database).
-Then fill in the needed information to connect your reference database and select the table with your needed columns. You can also fetch the reference data from your table by editing the SQL query manually.
+To configure SQL Database as reference data input, select **Reference SQL Database** under the **Inputs** section on the ribbon. Then fill in the information to connect your reference database and select the table with your needed columns. You can also fetch the reference data from your table by editing the SQL query manually.
## Transformations
-Streaming data transformations are inherently different from batch data transformations. Almost all streaming data has a time component, which affects any data preparation tasks involved.
+Streaming data transformations are inherently different from batch data transformations. Almost all streaming data has a time component, which affects any data-preparation tasks involved.
-To add a streaming data transformation to your job, select the transformation symbol under **Operations** section on the ribbon for that transformation. The respective tile will be dropped in the diagram view. After you select it, you'll see the side pane for that transformation to configure it.
+To add a streaming data transformation to your job, select the transformation symbol under the **Operations** section on the ribbon for that transformation. The respective tile will be dropped in the diagram view. After you select it, you'll see the side pane for that transformation to configure it.
### Filter Use the **Filter** transformation to filter events based on the value of a field in the input. Depending on the data type (number or text), the transformation will keep the values that match the selected condition. > [!NOTE]
-> Inside every tile, you'll see information about what else is needed for the transformation to be ready. For example, when you're adding a new tile, you'll see a `Set-up required` message. If you're missing a node connector, you'll see either an *Error* or a *Warning* message.
+> Inside every tile, you'll see information about what else the transformation needs to be ready. For example, when you're adding a new tile, you'll see a **Setup required** message. If you're missing a node connector, you'll see either an **Error** message or a **Warning** message.
### Manage fields The **Manage fields** transformation allows you to add, remove, or rename fields coming in from an input or another transformation. The settings on the side pane give you the option of adding a new one by selecting **Add field** or adding all fields at once. > [!TIP]
-> After you configure a tile, the diagram view gives you a glimpse of the settings within the tile itself. For example, in the **Manage fields** area of the preceding image, you can see the first three fields being managed and the new names assigned to them. Each tile has information relevant to it.
+> After you configure a tile, the diagram view gives you a glimpse of the settings within the tile. For example, in the **Manage fields** area of the preceding image, you can see the first three fields being managed and the new names assigned to them. Each tile has information that's relevant to it.
### Aggregate
You can use the **Aggregate** transformation to calculate an aggregation (**Sum*
To add an aggregation, select the transformation symbol. Then connect an input, select the aggregation, add any filter or slice dimensions, and select the period of time over which the aggregation will be calculated. In this example, we're calculating the sum of the toll value by the state where the vehicle is from over the last 10 seconds. To add another aggregation to the same transformation, select **Add aggregate function**. Keep in mind that the filter or slice will apply to all aggregations in the transformation.
To add another aggregation to the same transformation, select **Add aggregate fu
Use the **Join** transformation to combine events from two inputs based on the field pairs that you select. If you don't select a field pair, the join will be based on time by default. The default is what makes this transformation different from a batch one.
-As with regular joins, you have different options for your join logic:
+As with regular joins, you have options for your join logic:
-- **Inner join** - Include only records from both tables where the pair matches. In this example, that's where the license plate matches both inputs.-- **Left outer join** - Include all records from the left (first) table and only the records from the second one that match the pair of fields. If there's no match, the fields from the second input will be blank.
+- **Inner join**: Include only records from both tables where the pair matches. In this example, that's where the license plate matches both inputs.
+- **Left outer join**: Include all records from the left (first) table and only the records from the second one that match the pair of fields. If there's no match, the fields from the second input will be blank.
To select the type of join, select the symbol for the preferred type on the side pane.
Finally, select over what period you want the join to be calculated. In this exa
By default, all fields from both tables are included. Prefixes left (first node) and right (second node) in the output help you differentiate the source. ### Group by
-Use the **Group by** transformation to calculate aggregations across all events within a certain time window. You can group by the values in one or more fields. It's like the **Aggregate** transformation but provides more options for aggregations. It also includes more complex time-window options. Also like **Aggregate**, you can add more than one aggregation per transformation.
+Use the **Group by** transformation to calculate aggregations across all events within a certain time window. You can group by the values in one or more fields. It's like the **Aggregate** transformation but provides more options for aggregations. It also includes more complex options for time windows. Also like **Aggregate**, you can add more than one aggregation per transformation.
The aggregations available in the transformation are:
To configure the transformation:
1. Select your preferred aggregation. 2. Select the field that you want to aggregate on.
-3. Select an optional group-by field if you want to get the aggregate calculation over another dimension or category. For example, **State**.
+3. Select an optional group-by field if you want to get the aggregate calculation over another dimension or category. For example: **State**.
4. Select your function for time windows. To add another aggregation to the same transformation, select **Add aggregate function**. Keep in mind that the **Group by** field and the windowing function will apply to all aggregations in the transformation.
-A time stamp for the end of the time window is provided as part of the transformation output for reference. For more information about time windows supported by Stream Analytics jobs, see [Windowing functions (Azure Stream Analytics)](/stream-analytics-query/windowing-azure-stream-analytics).
+A time stamp for the end of the time window appears as part of the transformation output for reference. For more information about time windows that Stream Analytics jobs support, see [Windowing functions (Azure Stream Analytics)](/stream-analytics-query/windowing-azure-stream-analytics).
### Union
-Use the **Union** transformation to connect two or more inputs to add events with shared fields (with the same name and data type) into one table. Fields that don't match will be dropped and not included in the output.
+Use the **Union** transformation to connect two or more inputs to add events that have shared fields (with the same name and data type) into one table. Fields that don't match will be dropped and not included in the output.
-### Expand
+### Expand array
-Expand array is to create a new row for each value within an array.
+Use the **Expand array** transformation to create a new row for each value within an array.
## Streaming outputs
-The no-code drag-and-drop experience currently supports several output sinks to store your processed real time data.
+The no-code drag-and-drop experience currently supports several output sinks to store your processed real-time data.
### Azure Data Lake Storage Gen2
-Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. It's designed from the start to service multiple petabytes of information while sustaining hundreds of gigabits of throughput. It allows you to easily manage massive amounts of data. Azure Blob storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud.
+Data Lake Storage Gen2 makes Azure Storage the foundation for building enterprise data lakes on Azure. It's designed to service multiple petabytes of information while sustaining hundreds of gigabits of throughput. It allows you to easily manage massive amounts of data. Azure Blob Storage offers a cost-effective and scalable solution for storing large amounts of unstructured data in the cloud.
+
+Under the **Outputs** section on the ribbon, select **ADLS Gen2** as the output for your Stream Analytics job. Then select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob Storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
-Select **ADLS Gen2** under **Outputs** section on the ribbon as output for your Stream Analytics job and select the container where you want to send the output of the job. For more information about Azure Data Lake Gen2 output for a Stream Analytics job, see [Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics](blob-storage-azure-data-lake-gen2-output.md).
+When you're connecting to Azure Data Lake Storage Gen2, if you select **Managed Identity** as the authentication mode, then the Storage Blob Data Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Data Lake Storage Gen2, see [Use managed identities to authenticate your Azure Stream Analytics job to Azure Blob Storage](blob-output-managed-identity.md).
-When connecting to ADLS Gen2, if you choose ‘Managed Identity’ as Authentication mode, then the Storage Blob Data Contributor role will be granted to the Managed Identity for the Stream Analytics job. To learn more about Managed Identity for ADLS Gen2, see [Storage Blob Managed Identity](blob-output-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
- ![Managed identity for ADLS Gen2](./media/no-code-stream-processing/msi-adls-nocode.png)
+![Screenshot that shows selecting managed identity as the authentication method for Azure Data Lake Storage Gen2.](./media/no-code-stream-processing/msi-adls-nocode.png)
### Azure Synapse Analytics
-Azure Stream Analytics jobs can output to a dedicated SQL pool table in Azure Synapse Analytics and can process throughput rates up to 200MB/sec. It supports the most demanding real-time analytics and hot-path data processing needs for workloads such as reporting and dashboarding.
+Azure Stream Analytics jobs can send output to a dedicated SQL pool table in Azure Synapse Analytics and can process throughput rates up to 200 MB per second. Stream Analytics supports the most demanding real-time analytics and hot-path data processing needs for workloads like reporting and dashboarding.
> [!IMPORTANT] > The dedicated SQL pool table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
-Select **Synapse** under **Outputs** section on the ribbon as output for your Stream Analytics job and select the SQL pool table where you want to send the output of the job. For more information about Synapse output for a Stream Analytics job, see [Azure Synapse Analytics output from Azure Stream Analytics](azure-synapse-analytics-output.md).
+Under the **Outputs** section on the ribbon, select **Synapse** as the output for your Stream Analytics job. Then select the SQL pool table where you want to send the output of the job. For more information about Azure Synapse output for a Stream Analytics job, see [Azure Synapse Analytics output from Azure Stream Analytics](azure-synapse-analytics-output.md).
### Azure Cosmos DB
-Azure Cosmos DB is a globally distributed database service that offers limitless elastic scale around the globe, rich query, and automatic indexing over schema-agnostic data models.
+Azure Cosmos DB is a globally distributed database service that offers limitless elastic scale around the globe. It also offers rich queries and automatic indexing over schema-agnostic data models.
-Select **CosmosDB** under **Outputs** section on the ribbon as output for your Stream Analytics job. For more information about Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
+Under the **Outputs** section on the ribbon, select **CosmosDB** as the output for your Stream Analytics job. For more information about Azure Cosmos DB output for a Stream Analytics job, see [Azure Cosmos DB output from Azure Stream Analytics](azure-cosmos-db-output.md).
-When connecting to Azure Cosmos DB, if you choose ‘Managed Identity’ as Authentication mode, then the Contributor role will be granted to the Managed Identity for the Stream Analytics job.To learn more about Managed Identity for Cosmos DB, see [Cosmos DB Managed Identity](cosmos-db-managed-identity.md). Managed identities eliminate the limitations of user-based authentication methods, like the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+When you're connecting to Azure Cosmos DB, if you select **Managed Identity** as the authentication mode, then the Contributor role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for Azure Cosmos DB, see [Use managed identities to access Azure Cosmos DB from an Azure Stream Analytics job (preview)](cosmos-db-managed-identity.md).
- ![Managed identity for Cosmos DB](./media/no-code-stream-processing/msi-cosmosdb-nocode.png)
+Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
+![Screenshot that shows selecting managed identity as the authentication method for Azure Cosmos DB.](./media/no-code-stream-processing/msi-cosmosdb-nocode.png)
-### Azure SQL database
+### Azure SQL Database
-[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is a fully managed platform as a service (PaaS) database engine that can help you to create a highly available and high-performance data storage layer for the applications and solutions in Azure. Azure Stream Analytics jobs can be configured to write the processed data to an existing table in SQL Database with no-code editor experience.
+[Azure SQL Database](https://azure.microsoft.com/services/sql-database/) is a fully managed platform as a service (PaaS) database engine that can help you to create a highly available and high-performance data storage layer for the applications and solutions in Azure. By using the no-code editor, you can configure Azure Stream Analytics jobs to write the processed data to an existing table in SQL Database.
> [!IMPORTANT]
-> The Azure SQL database table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
+> The Azure SQL Database table must exist before you can add it as output to your Stream Analytics job. The table's schema must match the fields and their types in your job's output.
-To configure SQL database as output, simply select the **SQL Database** under **Outputs** section on the editor ribbon. Then fill in the needed information to connect your SQL database and select the table you want to write data to.
+To configure Azure SQL Database as output, select **SQL Database** under the **Outputs** section on the ribbon. Then fill in the needed information to connect your SQL database and select the table that you want to write data to.
-For more information about Azure SQL database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md).
+For more information about Azure SQL Database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md).
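Because the target table must exist before you add it as an output, here's a hedged sketch of creating one; it assumes the SqlServer PowerShell module, and the server, database, credentials, and schema are placeholders that would need to match your own job's output fields.

```powershell
# A sketch only: create a target table whose schema matches a hypothetical job output.
# The server, database, credentials, and column names are placeholders.
Invoke-Sqlcmd -ServerInstance "my-sql-server.database.windows.net" -Database "my-database" `
    -Username "sqladmin" -Password "<password>" -Query @"
CREATE TABLE dbo.TollOutput (
    WindowEndTime DATETIME2,
    State         NVARCHAR(50),
    TotalToll     FLOAT
);
"@
```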
## Data preview, authoring errors, runtime logs, and metrics
-The no code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
+The no-code drag-and-drop experience provides tools to help you author, troubleshoot, and evaluate the performance of your analytics pipeline for streaming data.
### Live data preview for inputs
As shown in the following screenshot, if you want to see or drill down into some
You can also see the details of a specific record, a _cell_ in the table, by selecting it and then selecting **Show/Hide details** (2). The screenshot shows the detailed view of a nested object in a record. ### Static preview for transformations and outputs After you add and set up any steps in the diagram view, you can test their behavior by selecting **Get static preview**. After you do, the Stream Analytics job evaluates all transformations and outputs to make sure they're configured correctly. Stream Analytics then displays the results in the static data preview, as shown in the following image.
-You can refresh the preview by selecting **Refresh static preview** (1). When you refresh the preview, the Stream Analytics job takes new data from the input and evaluates all transformations. Then it outputs again with any updates that you might have performed. The **Show/Hide details** option is also available (2).
+You can refresh the preview by selecting **Refresh static preview** (1). When you refresh the preview, the Stream Analytics job takes new data from the input and evaluates all transformations. Then it sends output again with any updates that you might have performed. The **Show/Hide details** option is also available (2).
### Authoring errors
-If you have any authoring errors or warnings, the Authoring errors tab will list them, as shown in the following screenshot. The list includes details about the error or warning, the type of card (input, transformation, or output), the error level, and a description of the error or warning.
+If you have any authoring errors or warnings, the **Authoring errors** tab will list them, as shown in the following screenshot. The list includes details about the error or warning, the type of card (input, transformation, or output), the error level, and a description of the error or warning.
### Runtime logs
-Runtime logs are Warning/Error/Information level logs when job is running. These logs are helpful when you want to edit your Stream Analytics job topology/configuration for troubleshooting. It is highly recommended to turn on diagnostic logs and sending them to Log Analytics workspace in **Setting** to have more insights of your running jobs for debugging.
+Runtime logs appear at the warning, error, or information level when a job is running. These logs are helpful when you want to edit your Stream Analytics job topology or configuration for troubleshooting. We highly recommend that you turn on diagnostic logs and send them to a Log Analytics workspace in **Settings** to get more insight into your running jobs for debugging.
-In the following screenshot example, the user has configured SQL database output with a table schema that is not matching with the fields of the job output.
+In the following screenshot example, the user has configured SQL Database output with a table schema that doesn't match the fields of the job output.
### Metrics
-If the job is running, you can monitor the health of your job by navigating to Metrics tab. The four metrics shown by default are Watermark delay, Input events, Backlogged input events, Output events. You can use these to understand if the events are flowing in & output of the job without any input backlog. You can select more metrics from the list.To understand all the metrics in details, see [Stream Analytics metrics](stream-analytics-job-metrics.md).
+If the job is running, you can monitor the health of your job on the **Metrics** tab. The four metrics shown by default are **Watermark delay**, **Input events**, **Backlogged input events**, and **Output events**. You can use these metrics to understand if the events are flowing in and out of the job without any input backlog.
+
+You can select more metrics from the list. To understand all the metrics in detail, see [Azure Stream Analytics job metrics](stream-analytics-job-metrics.md).
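If you'd rather pull the same numbers programmatically, a hedged sketch with the Az PowerShell module follows; the resource ID is a placeholder, and the metric names are standard Stream Analytics platform metrics rather than anything specific to the no-code editor.

```powershell
# A sketch: pull two standard job metrics for the last hour. The resource ID is a placeholder.
$jobId = "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.StreamAnalytics/streamingjobs/my-asa-job"
Get-AzMetric -ResourceId $jobId -MetricName "InputEvents", "OutputEvents" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -TimeGrain "00:05:00"
```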
## Start a Stream Analytics job
-You can save the job anytime while creating it. Once you have configured the event hub, transformations, and Streaming outputs for the job, you can Start the job.
-**Note**: While the no code editor is in Preview, the Azure Stream Analytics service is Generally Available.
+You can save the job anytime while creating it. After you configure the event hub, transformations, and streaming outputs for the job, you can start the job.
+
+> [!NOTE]
+> Although the no-code editor is in preview, the Azure Stream Analytics service is generally available.
-- **Output start time** - When you start a job, you select a time for the job to start creating output.
- - **Now** - Makes the starting point of the output event stream the same as when the job is started.
- - **Custom** - You can choose the starting point of the output.
- - **When last stopped** - This option is available when the job was previously started but was stopped manually or failed. When you choose this option, the last output time will be used to restart the job, so no data is lost.
-- **Streaming units** - Streaming Units represent the amount of compute and memory assigned to the job while running. If you're unsure how many SUs to choose, we recommend that you start with three and adjust as needed.-- **Output data error handling** - Output data error handling policies only apply when the output event produced by a Stream Analytics job doesn't conform to the schema of the target sink. You can configure the policy by choosing either **Retry** or **Drop**. For more information, see [Azure Stream Analytics output error policy](stream-analytics-output-error-policy.md).-- **Start** - Starts the Stream Analytics job.
+You can configure these options (an equivalent PowerShell sketch follows the list):
+- **Output start time**: When you start a job, you select a time for the job to start creating output.
+ - **Now**: This option makes the starting point of the output event stream the same as when the job is started.
+ - **Custom**: You can choose the starting point of the output.
+ - **When last stopped**: This option is available when the job was previously started but was stopped manually or failed. When you choose this option, the last output time will be used to restart the job, so no data is lost.
+- **Streaming units**: Streaming units (SUs) represent the amount of compute and memory assigned to the job while it's running. If you're not sure how many SUs to choose, we recommend that you start with three and adjust as needed.
+- **Output data error handling**: Policies for output data error handling apply only when the output event produced by a Stream Analytics job doesn't conform to the schema of the target sink. You can configure the policy by choosing either **Retry** or **Drop**. For more information, see [Azure Stream Analytics output error policy](stream-analytics-output-error-policy.md).
+- **Start**: This button starts the Stream Analytics job.
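If you later want to start the same job from PowerShell rather than the portal, a hedged sketch follows; it assumes the Az.StreamAnalytics module, the resource group and job names are placeholders, and `LastOutputEventTime` appears to correspond to the **When last stopped** option.

```powershell
# A sketch: start an existing job so that output resumes from the last output event time,
# which corresponds to the "When last stopped" option. Names are hypothetical placeholders.
Start-AzStreamAnalyticsJob -ResourceGroupName "my-rg" -Name "my-asa-job" -OutputStartMode "LastOutputEventTime"
```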
+ ## Stream Analytics jobs list
-You can see the list of all Stream Analytics jobs created by no-code drag and drop under **Process data** > **Stream Analytics jobs**.
+To see a list of all Stream Analytics jobs that you created by using the no-code drag-and-drop experience, select **Process data** > **Stream Analytics jobs**.
+
+These are the elements of the **Stream Analytics jobs** tab:
-- **Filter** - You can filter the list by job name.-- **Refresh** - The list doesn't auto-refresh currently. Use the option to refresh the list and see the latest status.-- **Job name** - The name you provided in the first step of job creation. You can't edit it. Select the job name to open the job in the no-code drag and drop experience where you can Stop the job, edit it, and Start it again.-- **Status** - The status of the job. Select Refresh on top of the list to see the latest status.-- **Streaming units** - The number of Streaming units selected when you started the job.-- **Output watermark** - An indicator of liveliness for the data produced by the job. All events before the timestamp are already computed.-- **Job monitoring** - Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).-- **Operations** - Start, stop, or delete the job.
+- **Filter**: You can filter the list by job name.
+- **Refresh**: Currently, the list doesn't refresh itself automatically. Use the **Refresh** button to refresh the list and see the latest status.
+- **Job name**: The name in this area is the one that you provided in the first step of job creation. You can't edit it. Select the job name to open the job in the no-code drag-and-drop experience, where you can stop the job, edit it, and start it again.
+- **Status**: This area shows the status of the job. Select **Refresh** on top of the list to see the latest status.
+- **Streaming units**: This area shows the number of streaming units that you selected when you started the job.
+- **Output watermark**: This area provides an indicator of liveliness for the data that the job has produced. All events before the time stamp are already computed.
+- **Job monitoring**: Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics that you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+- **Operations**: Start, stop, or delete the job.
## Next steps
-Learn how to use the no code editor to address common scenarios using predefined templates:
+Learn how to use the no-code editor to address common scenarios by using predefined templates:
- [Capture Event Hubs data in Parquet format](capture-event-hub-data-parquet.md) - [Filter and ingest to Azure Synapse SQL](filter-ingest-synapse-sql.md)
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Previously updated : 04/15/2022 Last updated : 06/29/2022 # What's new in Azure Synapse Analytics?
-This article lists updates to Azure Synapse Analytics that are published in April 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
+This article lists updates to Azure Synapse Analytics that are published in June 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months' releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
## General
Now, Azure Synapse Analytics provides built-in support for deep learning infrast
To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). ## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
virtual-desktop Configure Rdp Shortpath Limit Ports Public Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath-limit-ports-public-networks.md
+
+ Title: Limit the port range when using RDP Shortpath for public networks - Azure Virtual Desktop
+description: Learn how to limit the port range used by clients when using RDP Shortpath for public networks for Azure Virtual Desktop, which establishes a UDP-based transport between a Remote Desktop client and session host.
++ Last updated : 09/06/2022++
+# Limit the port range when using RDP Shortpath for public networks
+
+By default, RDP Shortpath for public networks uses an ephemeral port range of 49152 to 65535 to establish a direct path between server and client. However, you may want to configure your session hosts to use a smaller, predictable port range.
+
+You can specify a smaller default range of ports 38300 to 39299 by configuring the `ICEEnableClientPortRange` registry value on your session hosts, but you can also specify the exact ports you want to use. When this setting is enabled on your session hosts, the Remote Desktop client randomly selects a port from the range you specify for every connection. If this range is exhausted, clients fall back to using the default port range (49154-65535).
+
+When choosing the base and pool size, consider how many ports you need. The range must be between 1024 and 49151, after which the ephemeral port range begins.
+
+## Prerequisites
+
+- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- Internet access for both clients and session hosts. Session hosts require outbound UDP connectivity to the internet. For information to help you configure firewalls and network security groups, see [Network configurations for RDP Shortpath](rdp-shortpath.md#network-configuration).
+
+## Enable a limited port range
+
+1. To enable a limited port range when using RDP Shortpath for public networks, open an elevated PowerShell prompt on your session hosts and run the following command to add the required registry value:
+
+ ```powershell
+ New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name ICEEnableClientPortRange -PropertyType DWORD -Value 1
+ ```
+
+2. To further specify the port range to use, open an elevated PowerShell prompt on your session hosts and run the following commands, where the value for `ICEClientPortBase` is the start of the range, and `ICEClientPortRange` is the number of ports to use from the start of the range. For example, if you select 25000 as the port base and 1000 as the pool size, the upper bound will be 25999. An optional verification sketch follows these steps.
+
+ ```powershell
+ New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Name ICEClientPortBase -PropertyType DWORD -Value 25000
+ New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Name ICEClientPortRange -PropertyType DWORD -Value 1000
+ ```
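As an optional check, not part of the documented steps, you can read the values back to confirm they were written:

```powershell
# Optional: confirm the client port range values were created as expected.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" |
    Select-Object ICEClientPortBase, ICEClientPortRange
```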
virtual-desktop Configure Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath.md
+
+ Title: Configure RDP Shortpath - Azure Virtual Desktop
+description: Learn how to configure RDP Shortpath for Azure Virtual Desktop, which establishes a UDP-based transport between a Remote Desktop client and session host.
++ Last updated : 09/06/2022++
+# Configure RDP Shortpath for Azure Virtual Desktop
+
+RDP Shortpath is a feature of Azure Virtual Desktop that establishes a direct UDP-based transport between a supported Windows Remote Desktop client and session host. This article shows you how to configure RDP Shortpath for managed networks and public networks. For more information, see [RDP Shortpath](rdp-shortpath.md).
+
+## Prerequisites
+
+Before you can enable RDP Shortpath, you'll need to meet the prerequisites. Select a tab below for your scenario.
+
+# [Managed networks](#tab/managed-networks)
+
+- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- Direct line of sight connectivity between the client and the session host. Having direct line of sight connectivity means that the client can connect directly to the session host on port 3390 (default) without being blocked by firewalls (including the Windows Firewall; an example allow rule is sketched after this list) or network security groups, and that you're using a managed network such as:
+ - [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md).
+ - Site-to-site or Point-to-site VPN (IPsec), such as [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
+
+# [Public networks](#tab/public-networks)
+
+> [!TIP]
+> RDP Shortpath for public networks will work automatically without any additional configuration, provided networks and firewalls allow the traffic through and the RDP transport settings in the Windows operating system for session hosts and clients are using their default values.
+>
+> The steps to configure RDP Shortpath for public networks are provided for session hosts and clients in case these defaults have been changed.
+
+- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- Internet access for both clients and session hosts. Session hosts require outbound UDP connectivity to the internet. To reduce the number of ports required, you can [limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md). For information you can use to configure firewalls and Network Security Groups, see [Network configurations for RDP Shortpath](rdp-shortpath.md#network-configuration).
+- Check that your client can connect to the STUN endpoints and verify that basic UDP functionality works by running the `Test-Shortpath.ps1` PowerShell script. For steps on how to do this, see [Verifying STUN server connectivity and NAT type](troubleshoot-rdp-shortpath.md#verifying-stun-server-connectivity-and-nat-type).
+++
+## Enable RDP Shortpath
+
+The steps to enable RDP Shortpath differ for session hosts depending on whether you want to enable it for managed networks or public networks, but are the same for clients. Select a tab below for your scenario.
+
+### Session hosts
+
+# [Managed networks](#tab/managed-networks)
+
+To enable RDP Shortpath for managed networks, you need to enable the RDP Shortpath listener on your session hosts. You can do this using Group Policy, either centrally from your domain for session hosts that are joined to an Active Directory (AD) domain, or locally for session hosts that are joined to Azure Active Directory (Azure AD).
+
+1. Download the [Azure Virtual Desktop administrative template](https://aka.ms/avdgpo) and extract the contents of the .cab file and .zip archive.
+
+1. Depending on whether you want to configure Group Policy centrally from your domain, or locally for each session host:
+
+ **AD Domain**:
+ 1. Copy and paste the **terminalserver-avd.admx** file to the Central Store for your domain, for example `\\contoso.com\SYSVOL\contoso.com\policies\PolicyDefinitions`, where *contoso.com* is your domain name. Then copy the **en-us\terminalserver-avd.adml** file to the `en-us` subfolder.
+ 1. Open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your session hosts.
+
+ **Locally**:
+ 1. Copy and paste the **terminalserver-avd.admx** file to `%windir%\PolicyDefinitions`. Then copy the **en-us\terminalserver-avd.adml** file to the `en-us` subfolder.
+ 1. Open the **Local Group Policy Editor** on the session host.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see policy settings for Azure Virtual Desktop, as shown in the following screenshot:
+
+ :::image type="content" source="media/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the Group Policy Editor showing Azure Virtual Desktop policy settings." lightbox="media/azure-virtual-desktop-gpo.png":::
+
+1. Open the policy setting **Enable RDP Shortpath for managed networks** and set it to **Enabled**. If you enable this policy setting, you can also configure the port number that Azure Virtual Desktop session hosts will use to listen for incoming connections. The default port is **3390**.
+
+1. Select OK and restart your session hosts to apply the policy setting.
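+
+After the session hosts restart, you can check that something is listening on the UDP port you configured. This is a minimal sketch, assuming the default port **3390**:
+
+```powershell
+# List the endpoint listening on UDP port 3390, the default RDP Shortpath listener port for managed networks.
+Get-NetUDPEndpoint -LocalPort 3390 | Select-Object LocalAddress, LocalPort, OwningProcess
+```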
+
+# [Public networks](#tab/public-networks)
+
+If you need to configure session hosts and clients to enable RDP Shortpath for public networks because their default settings have been changed, follow these steps. You can do this using Group Policy, either centrally from your domain for session hosts that are joined to an Active Directory (AD) domain, or locally for session hosts that are joined to Azure Active Directory (Azure AD).
+
+1. Depending on whether you want to configure Group Policy centrally from your domain, or locally for each session host:
+
+ **AD Domain**:
+ 1. Open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your session hosts.
+
+ **Locally**:
+ 1. Open the **Local Group Policy Editor** on the session host.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Connections**.
+
+1. Open the policy setting **Select RDP transport protocols**. Set it to **Enabled**, then for **Select Transport Type**, select **Use both UDP and TCP**.
+
+1. Select OK and restart your session hosts to apply the policy setting.
+++
+### Windows clients
+
+The steps to ensure your clients are configured correctly are the same regardless of whether you want to use RDP Shortpath for managed networks or public networks. You can do this using Group Policy for managed clients that are joined to an Active Directory domain, Intune for managed clients that are joined to Azure Active Directory (Azure AD) and enrolled in Intune, or local Group Policy for clients that are not managed.
+
+> [!NOTE]
+> By default in Windows, RDP traffic will attempt to use both TCP and UDP protocols. You will only need to follow these steps if the client has previously been configured to use TCP only.
+
+#### Enable RDP Shortpath on managed and unmanaged Windows clients using Group Policy
+
+To configure managed and unmanaged Windows clients using Group Policy:
+
+1. Depending on whether you want to configure managed or unmanaged clients:
+ 1. For managed clients, open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your clients.
+ 1. For unmanaged clients, open the **Local Group Policy Editor** on the client.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client**.
+
+1. Open the policy setting **Turn Off UDP On Client** and set it to **Not Configured**.
+
+1. Select OK and restart your clients to apply the policy setting.
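+
+If you want to confirm the effective client setting, you can read the policy value from the registry. This is a sketch that assumes the **Turn Off UDP On Client** policy is backed by the `fClientDisableUDP` value under the Terminal Services client policy key; treat the path and value name as assumptions:
+
+```powershell
+# A value of 1 means UDP is turned off on the client; 0 or no value means UDP is allowed (the default).
+Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\Client" -Name fClientDisableUDP -ErrorAction SilentlyContinue
+```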
+
+#### Enable RDP Shortpath on Windows clients using Intune
+
+To configure managed Windows clients using Intune:
+
+1. Sign in to the [Endpoint Manager admin center](https://endpoint.microsoft.com/).
+
+1. Create or edit a configuration profile for **Windows 10 and later** devices, using Administrative templates.
+
+1. Browse to **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client**.
+
+1. Select the setting **Turn Off UDP On Client** and set it to **Disabled**. Select **OK**, then select **Next**.
+
+1. Apply the configuration profile, then restart your clients.
+
+### Teredo support
+
+While not required for RDP Shortpath, Teredo adds extra NAT traversal candidates and increases the chance of the successful RDP Shortpath connection in IPv4-only networks. You can enable Teredo on both session hosts and clients by running the following command from an elevated PowerShell prompt:
+
+```powershell
+Set-NetTeredoConfiguration -Type Enterpriseclient
+```
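+
+To confirm the change, you can read the Teredo configuration back with the matching `Get-` cmdlet; a minimal check:
+
+```powershell
+# Shows the current Teredo configuration; Type should report EnterpriseClient after the change.
+Get-NetTeredoConfiguration
+```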
+
+## Verify RDP Shortpath is working
+
+Next, you'll need to make sure your clients are connecting using RDP Shortpath. You can verify the transport with either the *Connection Information* dialog from the Remote Desktop client, or by using Log Analytics.
+
+### Connection Information dialog
+
+To make sure connections are using RDP Shortpath, you can check the connection information on the client:
+
+1. Connect to Azure Virtual Desktop.
+
+1. Open the *Connection Information* dialog by going to the **Connection** toolbar at the top of the screen and selecting the signal strength icon, as shown in the following screenshot:
+
+ :::image type="content" source="media/rdp-shortpath-connection-bar.png" alt-text="Screenshot of Remote Desktop Connection Bar of Remote Desktop client.":::
+
+1. You can verify in the output that UDP is enabled, as shown in the following screenshot:
+
+ :::image type="content" source="media/rdp-shortpath-connection-info.png" alt-text="Screenshot of Remote Desktop Connection Info dialog.":::
+
+### Event Viewer
+
+To make sure connections are using RDP Shortpath, you can check the event logs on the session host:
+
+1. Connect to Azure Virtual Desktop.
+
+1. On the session host, open **Event Viewer**.
+
+1. Browse to **Applications and Services Logs** > **Microsoft** > **Windows** > **RemoteDesktopServices-RdpCoreCDV** > **Operational**.
+
+1. Filter by Event ID **135**. Connections using RDP Shortpath will show that the transport type is UDP, with the message **The multi-transport connection finished for tunnel: 1, its transport type set to UDP**.
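+
+   You can also query the log from PowerShell instead of Event Viewer. This is a minimal sketch; the log name below is assumed to correspond to the Event Viewer path above:
+
+   ```powershell
+   # Return the five most recent multi-transport connection events (Event ID 135) and show their messages.
+   Get-WinEvent -FilterHashtable @{
+       LogName = 'Microsoft-Windows-RemoteDesktopServices-RdpCoreCDV/Operational'
+       Id      = 135
+   } -MaxEvents 5 | Format-List TimeCreated, Message
+   ```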
+
+### Log Analytics
+
+If you're using [Azure Log Analytics](./diagnostics-log-analytics.md), you can monitor connections by querying the [WVDConnections table](/azure/azure-monitor/reference/tables/wvdconnections). A column named UdpUse indicates whether the Azure Virtual Desktop RDP stack is using the UDP protocol for the current user connection.
+The possible values are:
+
+- **0** - The user connection isn't using RDP Shortpath.
+- **1** - The user connection is using RDP Shortpath for managed networks.
+- **2** - The user connection is using RDP Shortpath for public networks.
+
+The following query lets you review connection information. You can run this query in the [Log Analytics query editor](../azure-monitor/logs/log-analytics-tutorial.md#write-a-query). For each query, replace `user@contoso.com` with the UPN of the user you want to look up.
+
+```kusto
+let Events = WVDConnections | where UserName == "user@contoso.com" ;
+Events
+| where State == "Connected"
+| project CorrelationId, UserName, ResourceAlias, StartTime=TimeGenerated, UdpUse, SessionHostName, SessionHostSxSStackVersion
+| join (Events
+| where State == "Completed"
+| project EndTime=TimeGenerated, CorrelationId, UdpUse)
+on CorrelationId
+| project StartTime, Duration = EndTime - StartTime, ResourceAlias, UdpUse, SessionHostName, SessionHostSxSStackVersion
+| sort by StartTime asc
+```
+
+You can verify if RDP Shortpath is enabled for a specific user session by running the following Log Analytics query:
+
+```kusto
+WVDCheckpoints
+| where Name contains "Shortpath"
+```
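+
+If you prefer to run these queries from PowerShell rather than the Log Analytics portal, you can use the `Az.OperationalInsights` module. This is a sketch; the workspace ID and UPN are placeholders to replace with your own values:
+
+```powershell
+# Run a WVDConnections query against a Log Analytics workspace and list the transport used per connection.
+$workspaceId = "00000000-0000-0000-0000-000000000000"   # placeholder workspace ID
+$query = 'WVDConnections
+| where UserName == "user@contoso.com"
+| where State == "Connected"
+| project TimeGenerated, SessionHostName, UdpUse'
+
+$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
+$result.Results | Format-Table
+```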
+
+## Disable RDP Shortpath
+
+The steps to disable RDP Shortpath differ for session hosts depending on whether you want to disable it for managed networks only, public networks only, or both. Select a tab below for your scenario.
+
+### Session hosts
+
+# [Managed networks](#tab/managed-networks)
+
+To disable RDP Shortpath for managed networks on your session hosts, you need to disable the RDP Shortpath listener. You can do this using Group Policy, either centrally from your domain for session hosts that are joined to an AD domain, or locally for session hosts that are joined to Azure AD.
+
+Alternatively, you can block port **3390** (default) to your session hosts on a firewall or Network Security Group.
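+
+If you take the firewall approach on the session host itself, a Windows Firewall rule like the following blocks the listener port. This is a sketch that assumes the default port **3390** and that blocking at the host firewall is acceptable in your environment:
+
+```powershell
+# Block inbound UDP 3390 so the RDP Shortpath listener for managed networks can't be reached.
+New-NetFirewallRule -DisplayName "Block RDP Shortpath (managed networks) UDP 3390" -Direction Inbound -Protocol UDP -LocalPort 3390 -Action Block
+```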
+
+1. Depending on whether you want to configure Group Policy centrally from your domain, or locally for each session host:
+
+ **AD Domain**:
+ 1. Open the **Group Policy Management Console** (GPMC) and edit the existing policy that targets your session hosts.
+
+ **Locally**:
+ 1. Open the **Local Group Policy Editor** on the session host.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see policy settings for Azure Virtual Desktop, provided you still have the administrative template from when you enabled RDP Shortpath for managed networks.
+
+1. Open the policy setting **Enable RDP Shortpath for managed networks** and set it to **Not Configured**.
+
+1. Select OK and restart your session hosts to apply the policy setting.
+
+# [Public networks](#tab/public-networks)
+
+To disable RDP Shortpath for public networks on your session hosts, you can set RDP transport protocols to only allow TCP. You can do this using Group Policy, either centrally from your domain for session hosts that are joined to an AD domain, or locally for session hosts that are joined to Azure AD.
+
+> [!CAUTION]
+> This will also disable RDP Shortpath for managed networks.
+
+Alternatively, if you want to disable RDP Shortpath for public networks only, you'll need to block access to the STUN endpoints on a firewall or Network Security Group. The IP addresses for the STUN endpoints can be found in the table for [Session host virtual network](rdp-shortpath.md#session-host-virtual-network).
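+
+As an example of the firewall approach, the following Windows Firewall rule on a session host blocks outbound traffic to the STUN endpoints listed in that table. This is a sketch; blocking on a Network Security Group or perimeter firewall is equally valid, and the rule name is illustrative:
+
+```powershell
+# Block outbound UDP 3478 to the Azure Virtual Desktop STUN endpoints so candidate gathering fails
+# and connections fall back to the TCP-based reverse connect transport.
+New-NetFirewallRule -DisplayName "Block AVD STUN endpoints" -Direction Outbound -Protocol UDP -RemotePort 3478 `
+    -RemoteAddress "13.107.17.41/32","13.107.64.0/18","20.202.0.0/16","52.112.0.0/14","52.120.0.0/14" -Action Block
+```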
+
+1. Depending on whether you want to configure Group Policy centrally from your domain, or locally for each session host:
+
+ **AD Domain**:
+ 1. Open the **Group Policy Management Console** (GPMC) and edit the existing policy that targets your session hosts.
+
+ **Locally**:
+ 1. Open the **Local Group Policy Editor** on the session host.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Connections**.
+
+1. Open the policy setting **Select RDP transport protocols**. Set it to **Enabled**, then for **Select Transport Type**, select **Use only TCP**.
+
+1. Select OK and restart your session hosts to apply the policy setting.
+++
+### Windows clients
+
+On client devices, you can disable RDP Shortpath for managed networks and public networks by configuring RDP traffic to only use TCP. You can do this using Group Policy for managed clients that are joined to an Active Directory domain, Intune for managed clients that are joined to Azure Active Directory (Azure AD) and enrolled in Intune, or local Group Policy for clients that aren't managed.
+
+> [!IMPORTANT]
+> If you have previously set RDP traffic to attempt to use both TCP and UDP protocols using Group Policy or Intune, ensure the settings don't conflict.
+>
+#### Disable RDP Shortpath on managed and unmanaged Windows clients using Group Policy
+
+To configure managed and unmanaged Windows clients using Group Policy:
+
+1. Depending on whether you want to configure managed or unmanaged clients:
+ 1. For managed clients, open the **Group Policy Management Console** (GPMC) and create or edit a policy that targets your clients.
+ 1. For unmanaged clients, open the **Local Group Policy Editor** on the client.
+
+1. Browse to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client**.
+
+1. Open the policy setting **Turn Off UDP On Client** and set it to **Enabled**.
+
+1. Select OK and restart your clients to apply the policy setting.
+
+#### Disable RDP Shortpath on Windows clients using Intune
+
+To configure managed Windows clients using Intune:
+
+1. Sign in to the [Endpoint Manager admin center](https://endpoint.microsoft.com/).
+
+1. Create or edit a configuration profile for **Windows 10 and later** devices, using Administrative templates.
+
+1. Browse to **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client**.
+
+1. Select the setting **Turn Off UDP On Client** and set it to **Enabled**. Select **OK**, then select **Next**.
+
+1. Apply the configuration profile, then restart your clients.
+
+## Deleting the preview of RDP Shortpath for public networks
+
+If you've participated in the preview of RDP Shortpath for public networks, you need to delete the following registry value, as it's no longer required. Open an elevated PowerShell prompt and run the following command:
+
+```powershell
+Remove-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" -Name ICEControl -Force
+```
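+
+If you're not sure whether the value is present, you can check for it first; a minimal sketch using the same path and value name as the command above:
+
+```powershell
+# Returns the ICEControl value if it exists; no output means the preview setting was never configured.
+Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations" -Name ICEControl -ErrorAction SilentlyContinue
+```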
+
+## Next steps
+
+- Learn how to [limit the port range used by clients](configure-rdp-shortpath-limit-ports-public-networks.md) using RDP Shortpath for public networks.
+- If you're having trouble establishing a connection using the RDP Shortpath transport for public networks, see [Troubleshoot RDP Shortpath](troubleshoot-rdp-shortpath.md).
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
+
+ Title: RDP Shortpath - Azure Virtual Desktop
+description: Learn about RDP Shortpath for Azure Virtual Desktop, which establishes a UDP-based transport between a Remote Desktop client and session host.
++ Last updated : 09/06/2022++
+# RDP Shortpath for Azure Virtual Desktop
+
+Connections to Azure Virtual Desktop use Transmission Control Protocol (TCP) or User Datagram Protocol (UDP). RDP Shortpath is a feature of Azure Virtual Desktop that establishes a direct UDP-based transport between a supported Windows Remote Desktop client and session host. Remote Desktop Protocol (RDP) by default uses a TCP-based reverse connect transport as it provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections. However, if RDP Shortpath can be used instead, this UDP-based transport offers better connection reliability and more consistent latency.
+
+RDP Shortpath can be used in two ways:
+
+- **Managed networks**, where direct connectivity is established between the client and the session host when using a private connection, such as a virtual private network (VPN).
+
+- **Public networks**, where direct connectivity is established between the client and the session host through a NAT gateway, provided as part of the Azure Virtual Desktop service, when using a public connection.
+
+The transport used for RDP Shortpath is based on the [Universal Rate Control Protocol (URCP)](https://www.microsoft.com/research/publication/urcp-universal-rate-control-protocol-for-real-time-communication-applications/). URCP enhances UDP with active monitoring of the network conditions and provides fair and full link utilization. URCP operates at low delay and loss levels as needed.
+
+## Key benefits
+
+Using RDP Shortpath has the following key benefits:
+
+- Using URCP to enhance UDP achieves the best performance by dynamically learning network parameters and providing the protocol with a rate control mechanism.
+- The removal of extra relay points reduces round-trip time, which improves connection reliability and user experience with latency-sensitive applications and input methods.
+- In addition, for managed networks:
+ - RDP Shortpath brings support for configuring Quality of Service (QoS) priority for RDP connections through Differentiated Services Code Point (DSCP) marks.
+ - The RDP Shortpath transport allows limiting outbound network traffic by specifying a throttle rate for each session.
+
+## How RDP Shortpath works
+
+To learn how RDP Shortpath works for managed networks and public networks, select each of the following tabs.
+
+# [Managed networks](#tab/managed-networks)
+
+You can achieve the direct line of sight connectivity required to use RDP Shortpath with managed networks using the following methods. Having direct line of sight connectivity means that the client can connect directly to the session host without being blocked by firewalls.
+
+- [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md)
+- Site-to-site or Point-to-site VPN (IPsec), such as [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md)
+
+> [!NOTE]
+> - If you're using other VPN types to connect to Azure, we recommend using a UDP-based VPN. While most TCP-based VPN solutions support nested UDP, they add inherited overhead of TCP congestion control, which slows down RDP performance.
+
+To use RDP Shortpath for managed networks, you must enable a UDP listener on your session hosts. By default, port **3390** is used, although you can use a different port.
+
+The following diagram gives a high-level overview of the RDP Shortpath network connection:
++
+### Connection sequence
+
+All connections begin by establishing a TCP-based [reverse connect transport](network-connectivity.md) over the Azure Virtual Desktop Gateway. Then, the client and session host establish the initial RDP transport, and start exchanging their capabilities. These capabilities are negotiated using the following process:
+
+1. The session host sends the list of its IPv4 and IPv6 addresses to the client.
+
+1. The client starts the background thread to establish a parallel UDP-based transport directly to one of the session host's IP addresses.
+
+1. While the client is probing the provided IP addresses, it continues to establish the initial connection over the reverse connect transport to ensure there's no delay in the user connection.
+
+1. If the client has a direct connection to the session host, the client establishes a secure TLS connection.
+
+1. After establishing the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection, are moved to the new transport. However, if a firewall or network topology prevents the client from establishing direct UDP connectivity, RDP continues with a reverse connect transport.
+
+If your users have both RDP Shortpath for managed networks and public networks available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+
+# [Public networks](#tab/public-networks)
+
+When connecting to Azure Virtual Desktop using a public network, RDP Shortpath uses a standardized set of methods for traversal of NAT gateways. As a result, user sessions directly establish a UDP flow between the client and session host. More specifically, RDP Shortpath uses Simple Traversal Underneath NAT (STUN) protocol to discover the external IP address of the NAT router.
+
+Each RDP session uses a dynamically assigned UDP port from an ephemeral port range (49152-65535 by default) that accepts the RDP Shortpath traffic. You can also use a smaller, predictable port range. For more information, see [Limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md).
+
+There are four primary components used to establish the RDP Shortpath data flow for public networks:
+
+- Remote Desktop client
+
+- Session host
+
+- Azure Virtual Desktop Gateway
+
+- Azure Virtual Desktop STUN Server
+
+> [!TIP]
+> RDP Shortpath for public networks will work automatically without any additional configuration, provided networks and firewalls allow the traffic through and the RDP transport settings in the Windows operating system for session hosts and clients are using their default values.
+
+### Network Address Translation and firewalls
+
+Most Azure Virtual Desktop clients run on computers on the private network. Internet access is provided through a Network Address Translation (NAT) gateway device. Therefore, the NAT gateway modifies all network requests from the private network that are destined for the Internet. The modification's purpose is to share a single public IP address across all of the computers on the private network.
+
+Because of IP packet modification, the recipient of the traffic will see the public IP address of the NAT gateway instead of the actual sender. When traffic comes back, the NAT gateway forwards it to the intended recipient on the private network. In most scenarios, the devices hidden behind such a NAT aren't aware that translation is happening and don't know the network address of the NAT gateway.
+
+NAT also applies to the Azure virtual networks where all session hosts reside. When a session host tries to reach a network address on the Internet, the NAT gateway (either your own or the default provided by Azure) or Azure Load Balancer performs the address translation. For more information about the various types of Source Network Address Translation, see [Use Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
+
+Most networks typically include firewalls that inspect traffic and block it based on rules. Most customers configure their firewalls to prevent incoming connections (that is, unsolicited packets from the Internet sent without a request). Firewalls employ different techniques to track data flow to distinguish between solicited and unsolicited traffic. In the context of TCP, the firewall tracks SYN and ACK packets, and the process is straightforward. UDP firewalls usually use heuristics based on packet addresses to associate traffic with UDP flows and allow or block it.
+
+There are many different NAT implementations available. In most cases, NAT gateway and firewall are the functions of the same physical or virtual device.
+
+### Connection sequence
+
+All connections begin by establishing a TCP-based [reverse connect transport](network-connectivity.md) over the Azure Virtual Desktop Gateway. Then, the client and session host establish the initial RDP transport, and start exchanging their capabilities. If RDP Shortpath for public networks is enabled on the session host, the session host then initiates a process called *candidate gathering*:
+
+1. The session host enumerates all network interfaces assigned to a session host, including virtual interfaces like VPN and Teredo.
+
+1. The Windows service *Remote Desktop Services* (TermService) allocates UDP sockets on each interface and stores the *IP:Port* pair in the candidate table as a *local candidate*.
+
+1. The Remote Desktop Services service uses each UDP socket allocated in the previous step to try reaching the Azure Virtual Desktop STUN Server on the public internet. Communication is done by sending a small UDP packet to port **3478**.
+
+1. If the packet reaches the STUN server, the STUN server responds with the public IP (specified by you or provided by Azure) and port. This information is stored in the candidate table as a *reflexive candidate*.
+
+1. After the session host gathers all the candidates, the session host uses the established reverse connect transport to pass the candidate list to the client.
+
+1. When the client receives the list of candidates from the session host, the client also performs candidate gathering on its side. Then the client sends its candidate list to the session host.
+
+1. After the session host and client exchange their candidate lists, both parties attempt to connect with each other using all the gathered candidates. This connection attempt is simultaneous on both sides. Many NAT gateways are configured to allow the incoming traffic to the socket as soon as the outbound data transfer initializes it. This behavior of NAT gateways is the reason the simultaneous connection is essential.
+
+1. After the initial packet exchange, the client and session host may establish one or many data flows. From these data flows, RDP chooses the fastest network path. The client then establishes a secure TLS connection with the session host and initiates RDP Shortpath transport.
+
+1. After RDP establishes the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection move to the new transport.
+
+If your users have both RDP Shortpath for managed networks and public networks available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+
+> [!IMPORTANT]
+> When using a TCP-based transport, outbound traffic from session host to client is through the Azure Virtual Desktop Gateway. With RDP Shortpath, outbound traffic is established directly between session host and client over the internet. This removes a hop, which improves latency and the end-user experience. However, because the Gateway is no longer used in the data flow between session host and client, standard [Azure egress network charges](https://azure.microsoft.com/pricing/details/bandwidth/) for the internet bandwidth consumed will be billed per subscription. To learn more about estimating the bandwidth used by RDP, see [RDP bandwidth requirements](rdp-bandwidth.md).
+
+### Network configuration
+
+To support RDP Shortpath for public networks, you typically don't need any particular configuration. The session host and client will automatically discover the direct data flow if it's possible in your network configuration. However, every environment is unique, and some network configurations may negatively affect the rate of success of the direct connection. Follow the recommendations below to increase the probability of a direct data flow.
+
+As RDP Shortpath uses UDP to establish a data flow, if a firewall on your network blocks UDP traffic, RDP Shortpath will fail and the connection will fall back to TCP-based reverse connect transport. Azure Virtual Desktop uses STUN servers provided by Azure Communication Services and Microsoft Teams. By the nature of the feature, outbound connectivity from the session hosts to the client is required. Unfortunately, you can't predict where your users are located in most cases. Therefore, we recommend allowing outbound UDP connectivity from your session hosts to the internet. To reduce the number of ports required, you can [limit the port range used by clients](configure-rdp-shortpath-limit-ports-public-networks.md) for the UDP flow. Use the following tables for reference when configuring firewalls for RDP Shortpath.
+
+If your users are in a scenario where RDP Shortpath for both managed networks and public networks is available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+
+> [!NOTE]
+> RDP Shortpath doesn't support Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*. This is because RDP Shortpath needs to reuse the same external port (or NAT binding) used in the initial connection. Where multiple paths are used, for example a highly available firewall pair, external port reuse cannot be guaranteed. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation).
+
+#### Session host virtual network
+
+| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
+|--|--|--|--|--|--|--|
+| RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535 | UDP | Allow |
+| STUN Access | VM subnet | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
+
+#### Client network
+
+| Name | Source | Source Port | Destination | Destination Port | Protocol | Action |
+|--|--|--|--|--|--|--|
+| RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535 | UDP | Allow |
+| STUN Access | Client network | Any | - 13.107.17.41/32<br />- 13.107.64.0/18<br />- 20.202.0.0/16<br />- 52.112.0.0/14<br />- 52.120.0.0/14 | 3478 | UDP | Allow |
+
+### Teredo support
+
+While not required for RDP Shortpath, Teredo adds extra NAT traversal candidates and increases the chance of the successful RDP Shortpath connection in IPv4-only networks. To learn how to enable Teredo on session hosts and clients, see [Teredo support](configure-rdp-shortpath.md#teredo-support).
+
+### UPnP support
+
+To improve the chances of a direct connection, the Remote Desktop client side of RDP Shortpath may use [UPnP](/windows/win32/upnp/universal-plug-and-play-start-page) to configure a port mapping on the NAT router. UPnP is a standard technology used by various applications, such as Xbox, Delivery Optimization, and Teredo. UPnP is generally available on home routers and access points and is enabled by default on most of them, but it's often disabled on corporate networks.
+
+### General recommendations
+
+Here are some general recommendations when using RDP Shortpath for public networks:
+
+- Avoid using forced tunneling configurations if your users access Azure Virtual Desktop over the Internet.
+- Make sure you aren't using double NAT or Carrier-Grade-NAT (CGN) configurations.
+- Recommend to your users that they don't disable UPnP on their home routers.
+- Avoid using cloud packet-inspection services.
+- Avoid using TCP-based VPN solutions.
+- Enable IPv6 connectivity or Teredo.
+++
+## Connection security
+
+RDP Shortpath extends RDP multi-transport capabilities. It doesn't replace the reverse connect transport but complements it. Initial session brokering is managed through the Azure Virtual Desktop service and the reverse connect transport. All connection attempts are ignored unless they match the reverse connect session first. RDP Shortpath is established after authentication, and if successfully established, the reverse connect transport is dropped and all traffic flows over the RDP Shortpath.
+
+The port used for each RDP session depends on whether RDP Shortpath is being used for managed networks or public networks:
+
+- **Managed networks**: only the specified UDP port (**3390** by default) will be used for incoming RDP Shortpath traffic.
+
+- **Public networks**: each RDP session uses a dynamically assigned UDP port from an ephemeral port range (49152-65535 by default) that accepts the RDP Shortpath traffic. You can also use a smaller, predictable port range. For more information, see [Limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md).
+
+RDP Shortpath uses a TLS connection between the client and the session host using the session host's certificates. By default, the certificate used for RDP encryption is self-generated by the operating system during the deployment. You can also deploy centrally managed certificates issued by an enterprise certification authority. For more information about certificate configurations, see [Remote Desktop listener certificate configurations](/troubleshoot/windows-server/remote/remote-desktop-listener-certificate-configurations).
+
+> [!NOTE]
+> The security offered by RDP Shortpath is the same as that offered by reverse connect transport.
+
+## Next steps
+
+- Learn how to [Configure RDP Shortpath](configure-rdp-shortpath.md).
+- Learn more about Azure Virtual Desktop network connectivity at [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
+- Understand [Azure egress network charges](https://azure.microsoft.com/pricing/details/bandwidth/).
+- To understand how to estimate the bandwidth used by RDP, see [RDP bandwidth requirements](rdp-bandwidth.md).
virtual-desktop Troubleshoot Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-rdp-shortpath.md
+
+ Title: Troubleshoot RDP Shortpath for public networks - Azure Virtual Desktop
+description: Learn how to troubleshoot RDP Shortpath for public networks for Azure Virtual Desktop, which establishes a UDP-based transport between a Remote Desktop client and session host.
++ Last updated : 09/06/2022++
+# Troubleshoot RDP Shortpath for public networks
+
+If you're having issues when using RDP Shortpath for public networks, use the information in this article to help troubleshoot.
+
+## Verifying STUN server connectivity and NAT type
+
+You can validate connectivity to the STUN endpoints and verify that basic UDP functionality works by running the `Test-Shortpath.ps1` PowerShell script.
+
+1. Open a PowerShell prompt and run the following command to download the PowerShell script. Alternatively, go to [our GitHub repo](https://github.com/Azure/RDS-Templates/tree/master/AVD-TestShortpath) and download the `Test-Shortpath.ps1` file.
+
+ ```powershell
+ Invoke-WebRequest -Uri https://github.com/Azure/RDS-Templates/raw/master/AVD-TestShortpath/Test-Shortpath.ps1 -OutFile Test-Shortpath.ps1
+ ```
+
+1. You may need to unblock the file as the PowerShell script isn't digitally signed. You can unblock the file by running the following command:
+
+ ```powershell
+ Unblock-File -Path .\Test-Shortpath.ps1
+ ```
+
+1. Finally, run the PowerShell script by running the following command:
+
+ ```powershell
+ .\Test-Shortpath.ps1
+ ```
+
+If connectivity is successful, the output will look similar to the following:
+
+```
+Checking DNS service ... OK
+Checking STUN on server 20.202.0.107:3478 ... OK
+Checking STUN on server 13.107.17.41:3478 ... OK
++
+STUN works and your NAT type appears to be 'cone shaped'.
+Shortpath for public networks is likely to work on this host.
+```
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
When you deploy a scale set, you also have the option to deploy with a single [p
### Zone balancing
-Finally, for scale sets deployed across multiple zones, you also have the option of choosing "best effort zone balance" or "strict zone balance". A scale set is considered "balanced" if each zone the same number of VMs or +\\- 1 VM in all other zones for the scale set. For example:
+Finally, for scale sets deployed across multiple zones, you also have the option of choosing "best effort zone balance" or "strict zone balance". A scale set is considered "balanced" if each zone has the same number of VMs +\\- 1 VM as all other zones for the scale set. For example:
- A scale set with 2 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced. There is only one zone with a different VM count and it is only 1 less than the other zones. - A scale set with 1 VM in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered unbalanced. Zone 1 has 2 fewer VMs than zones 2 and 3.
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md
description: This topic describes how to run scripts within an Azure Linux virtu
-+ Previously updated : 07/28/2022 Last updated : 09/08/2022 -+ # Preview: Run scripts in your Linux VM by using managed Run Commands
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
description: This topic describes how to run scripts within an Azure Linux virtu
-- Previously updated : 10/27/2021++ Last updated : 09/08/2022 -+ ms.devlang: azurecli
virtual-machines Run Command Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command-managed.md
description: This topic describes how to run scripts within an Azure Windows vir
-- Previously updated : 10/28/2021++ Last updated : 09/07/2022 -+ # Preview: Run scripts in your Windows VM by using managed Run Commands
az vm run-command list --vm-name "myVM" --resource-group "myRG"
This command will retrieve current execution progress, including latest output, start/end time, exit code, and terminal state of the execution. ```azurecli-interactive
-az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --instance-view
+az vm run-command show --name "myRunCommand" --vm-name "myVM" --resource-group "myRG" --expand instanceView
``` ### Delete RunCommand resource from the VM
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
description: This topic describes how to run PowerShell scripts within an Azure
-- Previously updated : 10/28/2021++ Last updated : 09/07/2022 -+ ms.devlang: azurecli
virtual-machines High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-simple-mount.md
Title: Azure VMs high availability for SAP NW on SLES for SAP application with simple mount and NFS| Microsoft Docs
-description: High-availability guide for SAP NetWeaver on SUSE Linux Enterprise Server with simple mount and NFS for SAP applications
+ Title: Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS| Microsoft Docs
+description: Install high-availability SAP NetWeaver on SUSE Linux Enterprise Server with simple mount and NFS for SAP applications.
documentationcenter: saponazure
-# High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server for SAP Applications with simple mount architecture and NFS
+# High-availability SAP NetWeaver with simple mount and NFS on SLES for SAP Applications VMs
[dbms-guide]:dbms_guide_general.md [deployment-guide]:deployment-guide.md
[sap-hana-ha]:sap-hana-high-availability.md [nfs-ha]:high-availability-guide-suse-nfs.md
-This article describes how to deploy and configure VMs, install the cluster framework, and install an HA SAP NetWeaver system with simple mount structure. The presented architecture can be implemented using one of the following Azure native NFS
+This article describes how to deploy and configure Azure virtual machines (VMs), install the cluster framework, and install a high-availability (HA) SAP NetWeaver system with a simple mount structure. You can implement the presented architecture by using one of the following Azure native Network File System (NFS)
-- [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) or -- [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
+- [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md)
+- [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md)
## Prerequisites
+The following guides contain all the required information to set up a NetWeaver HA system:
-* [SUSE SAP S/4HANA High availability cluster with simple mount structure](https://documentation.suse.com/sbp/sap/html/SAP-S4HA10-setupguide-simplemount-sle15/https://docsupdatetracker.net/index.html)
-* [Use of Filesystem resource for ASCS/ERS HA setup not possible](https://www.suse.com/support/kb/doc/?id=000019944)
+* [SAP S/4 HANA - Enqueue Replication 2 High Availability Cluster With Simple Mount](https://documentation.suse.com/sbp/sap/html/SAP-S4HA10-setupguide-simplemount-sle15/https://docsupdatetracker.net/index.html)
+* [Use of Filesystem resource for ABAP SAP Central Services (ASCS)/ERS HA setup not possible](https://www.suse.com/support/kb/doc/?id=000019944)
* SAP Note [1928533][1928533], which has:
- * List of Azure VM sizes that are supported for the deployment of SAP software
+ * A list of Azure VM sizes that are supported for the deployment of SAP software
* Important capacity information for Azure VM sizes
- * Supported SAP software, and operating system (OS) and
- * combinations
- * Required SAP kernel version for Windows and Linux on Microsoft Azure
-* SAP Note [2015553][2015553] lists prerequisites for SAP-supported SAP software deployments in Azure.
-* SAP Note [2205917][2205917] has recommended OS settings for SUSE Linux Enterprise Server for SAP Applications
-* SAP Note [2178632][2178632] has detailed information about all monitoring metrics reported for SAP in Azure.
-* SAP Note [2191498][2191498] has the required SAP Host Agent version for Linux in Azure.
-* SAP Note [2243692][2243692] has information about SAP licensing on Linux in Azure.
-* SAP Note [2578899][2578899] has general information about SUSE Linux Enterprise Server 15
-* SAP Note [1275776][1275776] has information about preparing SUSE Linux Enterprise Server for SAP environments
-* SAP Note [1999351][1999351] has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
-* [SAP Community WIKI](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
+ * Supported SAP software, operating systems (OSs), and combinations
+ * The required SAP kernel version for Windows and Linux on Microsoft Azure
+* SAP Note [2015553][2015553], which lists prerequisites for SAP-supported SAP software deployments in Azure.
+* SAP Note [2205917][2205917], which has recommended OS settings for SUSE Linux Enterprise Server (SLES) for SAP Applications
+* SAP Note [2178632][2178632], which has detailed information about all monitoring metrics reported for SAP in Azure
+* SAP Note [2191498][2191498], which has the required SAP Host Agent version for Linux in Azure
+* SAP Note [2243692][2243692], which has information about SAP licensing on Linux in Azure
+* SAP Note [2578899][2578899], which has general information about SUSE Linux Enterprise Server 15
+* SAP Note [1275776][1275776], which has information about preparing SUSE Linux Enterprise Server for SAP environments
+* SAP Note [1999351][1999351], which has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP
+* [SAP community wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes), which has all required SAP Notes for Linux
* [Azure Virtual Machines planning and implementation for SAP on Linux][planning-guide] * [Azure Virtual Machines deployment for SAP on Linux][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide]
-* [SUSE SAP HA Best Practice Guides][suse-ha-guide]
-* [SUSE High Availability Extension Release Notes][suse-relnotes]
+* [SUSE SAP HA best practice guides][suse-ha-guide]
+* [SUSE High Availability Extension release notes][suse-relnotes]
* [Azure Files documentation][afs-azure-doc]
-* [NetApp NFS Best Practices](https://www.netapp.com/media/10720-tr-4067.pdf)
-
-The guides contain all required information to set up Netweaver HA. Use these guides as a general baseline. They provide much more detailed information.
+* [NetApp NFS best practices](https://www.netapp.com/media/10720-tr-4067.pdf)
## Overview
+This article describes a high-availability configuration for ASCS with a simple mount structure. To deploy the SAP application layer, you need shared directories like `/sapmnt/SID`, `/usr/sap/SID`, and `/usr/sap/trans`, which are highly available. You can deploy these file systems on [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) *or* [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
-This article describes high availability configuration for SAP ASCS with simple mount structure. To deploy the SAP application layer, you need shared directories like `/sapmnt/SID`, `/usr/sap/SID` and `/usr/sap/trans`, which are highly available. These file systems can be deployed on [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) **or** [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
-
-You still need a Pacemaker cluster to protect single point of failure components like SAP Netweaver central services(ASCS/SCS).
+You still need a Pacemaker cluster to help protect single-point-of-failure components like SAP Central Services (SCS) and ASCS.
-Compared to the classic Pacemaker cluster configuration, with the simple mount deployment, the file systems aren't managed by the cluster. This configuration is only supported on SLES for SAP Applications 15 and higher. The database layer isn't covered in detail in this article.
+Compared to the classic Pacemaker cluster configuration, with the simple mount deployment, the cluster doesn't manage the file systems. This configuration is supported only on SLES for SAP Applications 15 and later. This article doesn't cover the database layer in detail.
-The example configurations and installation commands use the following instance numbers:
+The example configurations and installation commands use the following instance numbers.
| Instance name | Instance number | | - | |
-| ABAP SAP Central Services (ASCS) | 00 |
-| ERS | 01 |
+| ASCS | 00 |
+| Enqueue Replication Server (ERS) | 01 |
| Primary Application Server (PAS) | 02 | | Additional Application Server (AAS) | 03 | | SAP system identifier | NW1 | - > [!IMPORTANT]
-> The configuration with simple mount structure is only supported on SLES for SAP Applications 15 and higher releases.
+> The configuration with simple mount structure is supported only on SLES for SAP Applications 15 and later releases.
- This diagram shows a typical SAP Netweaver HA architecture with simple mount. The "sapmnt" and "saptrans" file systems are deployed on Azure native NFS: NFS shares on Azure Files or NFS volumes on ANF. The SAP central services are protected by a Pacemaker cluster. The clustered VMs are behind an Azure load balancer. The file systems are not managed by the Pacemaker cluster, in contrast to the classic Pacemaker configuration.
+ This diagram shows a typical SAP NetWeaver HA architecture with a simple mount. The "sapmnt" and "saptrans" file systems are deployed on Azure native NFS: NFS shares on Azure Files or NFS volumes on Azure NetApp Files. A Pacemaker cluster protects the SAP central services. The clustered VMs are behind an Azure load balancer. The Pacemaker cluster doesn't manage the file systems, in contrast to the classic Pacemaker configuration.
:::image-end:::
-## Prepare infrastructure
+## Prepare the infrastructure
-This document assumes that you've already deployed an [Azure Virtual Network](../../../virtual-network/virtual-networks-overview.md), subnet and resource group.
+This article assumes that you've already deployed an [Azure virtual network](../../../virtual-network/virtual-networks-overview.md), subnet, and resource group. To prepare the rest of your infrastructure:
-1. Deploy your VMs. You can deploy VMs in availability sets, or in availability zones, if the Azure region supports these options. If you need additional IP addresses for your VMs, deploy and attach a second NIC. DonΓÇÖt add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../../load-balancer/load-balancer-multivip-overview.md#limitations).
+1. Deploy your VMs. You can deploy VMs in availability sets or in availability zones, if the Azure region supports these options.
+
+ > [!IMPORTANT]
+ > If you need additional IP addresses for your VMs, deploy and attach a second network interface controller (NIC). Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../../load-balancer/load-balancer-multivip-overview.md#limitations).
-2. For your virtual IPs, deploy and configure an [Azure load balancer](../../../load-balancer/load-balancer-overview.md). It's recommended to use a [Standard load balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Create the frontend IP addresses
- 1. IP address 10.27.0.9 for the ASCS instance
- * Open the load balancer, select frontend IP pool, and click Add
- * Enter the name of the new frontend IP pool (for example **frontend.NW1.ASCS**)
- * Set the Assignment to Static and enter the IP address (for example **10.27.0.9**)
- * Click OK
- 1. IP address 10.27.0.10 for the ERS instance
- * Repeat the steps above under "a" to create an IP address for the ERS (for example **10.27.0.10** and **frontend.NW1.ERS**)
- 1. Create a single backend pool.
- 1. Open the load balancer, select backend pools, and click Add
- 1. Enter the name of the new backend pool (for example **backend.NW1**)
- 1. Click Add a virtual machine.
- 1. Select Virtual machine
- 1. Select the virtual machines of the (A)SCS cluster and their IP addresses.
- 1. Click Add
+2. For your virtual IPs, deploy and configure an [Azure load balancer](../../../load-balancer/load-balancer-overview.md). We recommend that you use a [Standard load balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
+   1. Create front-end IP address 10.27.0.9 for the ASCS instance:
+ 1. Open the load balancer, select **Frontend IP pool**, and then select **Add**.
+ 1. Enter the name of the new front-end IP pool (for example, **frontend.NW1.ASCS**).
+ 1. Set **Assignment** to **Static** and enter the IP address (for example, **10.27.0.9**).
+ 1. Select **OK**.
+ 1. Create front-end IP address 10.27.0.10 for the ERS instance:
+ * Repeat the preceding steps to create an IP address for ERS (for example, **10.27.0.10** and **frontend.NW1.ERS**).
+ 1. Create a single back-end pool:
+ 1. Open the load balancer, select **Backend pools**, and then select **Add**.
+ 1. Enter the name of the new back-end pool (for example, **backend.NW1**).
+ 1. Select **Add a virtual machine**.
+ 1. Select **Virtual machine**.
+ 1. Select the virtual machines of the ASCS cluster and their IP addresses.
+ 1. Select **Add**.
+ 1. Create a health probe for port 62000 for ASCS:
+ 1. Open the load balancer, select **Health probes**, and then select **Add**.
+ 1. Enter the name of the new health probe (for example, **health.NW1.ASCS**).
+ 1. Select **TCP** as the protocol and **62000** as the port. Keep the interval of **5**.
+ 1. Select **OK**.
+ 1. Create a health probe for port 62101 for the ERS instance:
+ * Repeat the preceding steps to create a health probe for ERS (for example, **62101** and **health.NW1.ERS**).
+ 1. Create load-balancing rules for ASCS:
+ 1. Open the load balancer, select **Load-balancing rules**, and then select **Add**.
+ 1. Enter the name of the new load-balancing rule (for example, **lb.NW1.ASCS**).
+ 1. Select the front-end IP address for ASCS, back-end pool, and health probe that you created earlier (for example, **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**).
+ 1. Select **HA ports**.
+ 1. Enable Floating IP.
+ 1. Select **OK**.
+ 1. Create load-balancing rules for ERS:
+ * Repeat the preceding steps to create load-balancing rules for ERS (for example, **lb.NW1.ERS**).
- 1. Create the health probes
- 1. Port 620**00** for ASCS
- * Open the load balancer, select health probes, and click Add
- * Enter the name of the new health probe (for example **health.NW1.ASCS**)
- * Select TCP as protocol, port 620**00**, keep Interval 5
- * Click OK
- 1. Port 621**01** for the ERS instance
- * Repeat the steps above under "c" to create a health probe for the ERS (for example 621**01** and **health.NW1.ERS**)
- 1. Load-balancing rules
- 1. Create load balancing rules for ASCS
- * Open the load balancer, select Load-balancing rules and click Add
- * Enter the name of the new load balancer rule (for example **lb.NW1.ASCS**)
- * Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **frontend.NW1.ASCS**, **backend.NW1**, and **health.NW1.ASCS**)
- * Select **HA ports**
- * **Make sure to enable Floating IP**
- * Click OK
- 1. Create load balancing rules for ERS
- * Repeat the steps above to create load balancing rules for ERS (for example **lb.NW1.ERS**)
-
- > [!IMPORTANT]
- > Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
-
- > [!Note]
- > When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-
- > [!IMPORTANT]
- > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
+> [!Note]
+> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity unless you perform additional configuration to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).
-### Deploy NFS
+> [!IMPORTANT]
+> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set the `net.ipv4.tcp_timestamps` parameter to `0`. For details, see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
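For example, a minimal sketch of applying and persisting this setting on the cluster VMs (the `91-sap-lb.conf` file name is only an illustrative choice):

```bash
# Disable TCP timestamps for the running system.
sudo sysctl -w net.ipv4.tcp_timestamps=0

# Persist the setting across reboots. The file name is an arbitrary example.
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/91-sap-lb.conf
sudo sysctl --system
```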
-There are two options for deploying Azure native NFS to host the SAP shared directories. You can either deploy a [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md), or deploy NFS volume on [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md). NFS on Azure files supports NFSv4.1 protocol, NFS on Azure NetApp files supports both NFSv4.1 and NFSv3.
+## Deploy NFS
-The next sections describe the steps to deploy NFS - you'll need to select only **one** of the options.
+There are two options for deploying Azure native NFS to host the SAP shared directories. You can either deploy an [NFS file share on Azure Files](../../../storage/files/files-nfs-protocol.md) or deploy an [NFS volume on Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md). NFS on Azure Files supports the NFSv4.1 protocol. NFS on Azure NetApp Files supports both NFSv4.1 and NFSv3.
-> [!TIP]
-> You chose to deploy the SAP shared directories on [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md) or [NFS volume on Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
+The next sections describe the steps to deploy NFS. Select only *one* of the options.
-#### Deploy Azure Files storage account and NFS shares
+### Deploy an Azure Files storage account and NFS shares
-NFS on Azure Files, runs on top of [Azure Files Premium storage][afs-azure-doc]. Before setting up NFS on Azure Files, see [How to create an NFS share](../../../storage/files/storage-files-how-to-create-nfs-shares.md?tabs=azure-portal).
+NFS on Azure Files runs on top of [Azure Files premium storage][afs-azure-doc]. Before you set up NFS on Azure Files, see [How to create an NFS share](../../../storage/files/storage-files-how-to-create-nfs-shares.md?tabs=azure-portal).
There are two options for redundancy within an Azure region:
-- [Locally redundant storage (LRS)](../../../storage/common/storage-redundancy.md#locally-redundant-storage), which offers local, in-zone synchronous data replication.
-- [Zone redundant storage (ZRS)](../../../storage/common/storage-redundancy.md#zone-redundant-storage), which replicates your data synchronously across the three [availability zones](../../../availability-zones/az-overview.md) in the region.
+- [Locally redundant storage (LRS)](../../../storage/common/storage-redundancy.md#locally-redundant-storage) offers local, in-zone synchronous data replication.
+- [Zone-redundant storage (ZRS)](../../../storage/common/storage-redundancy.md#zone-redundant-storage) replicates your data synchronously across the three [availability zones](../../../availability-zones/az-overview.md) in the region.
-Check if your selected Azure region offers NFS 4.1 on Azure Files with the appropriate redundancy. Review the [availability of Azure Files by Azure region][afs-avail-matrix] under **Premium Files Storage**. If your scenario benefits from ZRS, [verify that Premium File shares with ZRS are supported in your Azure region](../../../storage/common/storage-redundancy.md#zone-redundant-storage).
+Check if your selected Azure region offers NFSv4.1 on Azure Files with the appropriate redundancy. Review the [availability of Azure Files by Azure region][afs-avail-matrix] for **Premium Files Storage**. If your scenario benefits from ZRS, [verify that premium file shares with ZRS are supported in your Azure region](../../../storage/common/storage-redundancy.md#zone-redundant-storage).
-It's recommended to access your Azure Storage account through an [Azure Private Endpoint](../../../storage/files/storage-files-networking-endpoints.md?tabs=azure-portal). Make sure to deploy the Azure Files storage account endpoint and the VMs, where you need to mount the NFS shares, in the same Azure VNet or peered Azure VNets.
+We recommend that you access your Azure storage account through an [Azure private endpoint](../../../storage/files/storage-files-networking-endpoints.md?tabs=azure-portal). Be sure to deploy the Azure Files storage account endpoint, and the VMs where you need to mount the NFS shares, in the same Azure virtual network or in peered Azure virtual networks.
-1. Deploy a File Storage account named `sapnfsafs`. In this example, we use ZRS. If you're not familiar with the process, see [Create a storage account](../../../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#create-a-storage-account) for the Azure portal.
-1. In the **Basics** tab, use these settings:
+1. Deploy an Azure Files storage account named **sapnfsafs**. This example uses ZRS. If you're not familiar with the process, see [Create a storage account](../../../storage/files/storage-how-to-create-file-share.md?tabs=azure-portal#create-a-storage-account) for the Azure portal.
+1. On the **Basics** tab, use these settings:
- 1. For **Storage account name**, enter `sapnfsafs`.
+ 1. For **Storage account name**, enter **sapnfsafs**.
1. For **Performance**, select **Premium**. 1. For **Premium account type**, select **FileStorage**.
- 1. For **Replication**, select zone redundancy (ZRS).
+ 1. For **Replication**, select **Zone redundancy (ZRS)**.
1. Select **Next**.
-1. In the **Advanced** tab, deselect **Require secure transfer for REST API**. If you don't deselect this option, you can't mount the NFS share to your VM. The mount operation will time out.
+1. On the **Advanced** tab, clear **Require secure transfer for REST API**. If you don't clear this option, you can't mount the NFS share to your VM. The mount operation will time out.
1. Select **Next**. 1. In the **Networking** section, configure these settings:
- 1. Under **Networking connectivity**, for **Connectivity method**, select **Private endpoint** .
+ 1. Under **Networking connectivity**, for **Connectivity method**, select **Private endpoint**.
1. Under **Private endpoint**, select **Add private endpoint**.
-1. In the **Create private endpoint** pane, select your **Subscription**, **Resource group**, and **Location**.
+1. On the **Create private endpoint** pane, select your subscription, resource group, and location. Then make the following selections:
- For **Name**, enter `sapnfsafs_pe`.
+ 1. For **Name**, enter **sapnfsafs_pe**.
- For **Storage sub-resource**, select **file**.
+ 1. For **Storage sub-resource**, select **file**.
- Under **Networking**, for **Virtual network**, select the VNet and subnet to use. Again, you can use the VNet where your SAP VMs are, or a peered VNet.
+ 1. Under **Networking**, for **Virtual network**, select the virtual network and subnet to use. Again, you can use either the virtual network where your SAP VMs are or a peered virtual network.
- Under **Private DNS integration**, accept the default option **Yes** for **Integrate with private DNS zone**. Make sure to select your **Private DNS Zone**.
+ 1. Under **Private DNS integration**, accept the default option of **Yes** for **Integrate with private DNS zone**. Be sure to select your private DNS zone.
- Select **OK**.
+ 1. Select **OK**.
1. On the **Networking** tab again, select **Next**.
It's recommended to access your Azure Storage account through an [Azure Private
1. On the **Review + create** tab, select **Create**.
+Next, deploy the NFS shares in the storage account that you created. In this example, there are two NFS shares, `sapnw1` and `saptrans`.
-Next, deploy the NFS shares in the storage account you created. In this example, there are two NFS shares, `sapnw1` and `saptrans`.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select or search for **Storage accounts**. 1. On the **Storage accounts** page, select **sapnfsafs**.
-1. On the resource menu for **sapnfsafs**, select **File shares** under **Data storage**.
+1. On the resource menu for **sapnfsafs**, select **File shares** under **Data storage**.
-1. On the **File shares** page, select **File share**.
+1. On the **File shares** page, select **File share**, and then:
- 1. For **Name**, enter `sapnw1`, `saptrans`.
- 1. Select an appropriate share size. For example, **128 GB**. Consider the size of the data stored on the share, IOPs and throughput requirements. For more information, see [Azure file share targets](../../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
+   1. For **Name**, enter **sapnw1** and **saptrans** (create one share for each name).
+ 1. Select an appropriate share size. Consider the size of the data stored on the share, I/O per second (IOPS), and throughput requirements. For more information, see [Azure file share targets](../../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
1. Select **NFS** as the protocol. 1. Select **No root Squash**. Otherwise, when you mount the shares on your VMs, you can't see the file owner or group.
- > [!IMPORTANT]
- > The share size above is just an example. Make sure to size your shares appropriately. Size not only based on the size of the of data stored on the share, but also based on the requirements for IOPS and throughput. For details see [Azure file share targets](../../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
-
- The SAP file systems that don't need to be mounted via NFS can also be deployed on [Azure disk storage](../../disks-types.md#premium-ssds). In this example, you can deploy `/usr/sap/NW1/D02` and `/usr/sap/NW1/D03` on Azure disk storage.
+The SAP file systems that don't need to be mounted via NFS can also be deployed on [Azure disk storage](../../disks-types.md#premium-ssds). In this example, you can deploy `/usr/sap/NW1/D02` and `/usr/sap/NW1/D03` on Azure disk storage.
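If you prefer scripting over the portal, a minimal Azure CLI sketch for creating the two NFS shares could look like the following. The resource group name is a placeholder, and the 128-GiB quota is only an example; size the shares for your own capacity, IOPS, and throughput requirements.

```bash
# Placeholder resource group. The storage account name matches the example above.
az storage share-rm create --resource-group MyResourceGroup --storage-account sapnfsafs \
  --name sapnw1 --enabled-protocols NFS --root-squash NoRootSquash --quota 128
az storage share-rm create --resource-group MyResourceGroup --storage-account sapnfsafs \
  --name saptrans --enabled-protocols NFS --root-squash NoRootSquash --quota 128
```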
#### Important considerations for NFS on Azure Files shares

When you plan your deployment with NFS on Azure Files, consider the following important points:
-- The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../../storage/files/understanding-billing.md#provisioned-model)
-- Size your NFS shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details see [Azure file share targets](../../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets)
-- Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, consult [Troubleshoot Azure file shares performance](../../../storage/files/storage-troubleshooting-files-performance.md)
-- For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
-- If your SAP system has a heavy batch jobs load, you may have millions of job logs. If the SAP batch job logs are stored in the file system, pay special attention to the sizing of the `sapmnt` share. As of SAP_BASIS 7.52 the default behavior for the batch job logs is to be stored in the database. For details see [Job log in the database][2360818].
-- Deploy a separate `sapmnt` share for each SAP system
-- Don't use the `sapmnt` share for any other activity, such as interfaces, or `saptrans`
-- Don't use the `saptrans` share for any other activity, such as interfaces, or `sapmnt`
-- Avoid consolidating the shares for too many SAP systems in a single storage account. There are also [Storage account performance scale targets](../../../storage/files/storage-files-scale-targets.md#storage-account-scale-targets). Be careful to not exceed the limits for the storage account, too.
-- In general, don't consolidate the shares for more than **five** SAP systems in a single storage account. This guideline helps avoid exceeding the storage account limits and simplifies performance analysis.
+- The minimum share size is 100 gibibytes (GiB). You pay for only the [capacity of the provisioned shares](../../../storage/files/understanding-billing.md#provisioned-model).
+- Size your NFS shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details, see [Azure file share targets](../../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
+- Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, consult [Troubleshoot Azure file share performance](../../../storage/files/storage-troubleshooting-files-performance.md).
+- For SAP J2EE systems, placing `/usr/sap/<SID>/J<nr>` on NFS on Azure Files is not supported.
+- If your SAP system has a heavy load of batch jobs, you might have millions of job logs. If the SAP batch job logs are stored in the file system, pay special attention to the sizing of the `sapmnt` share. As of SAP_BASIS 7.52, the default behavior for the batch job logs is to be stored in the database. For details, see [Job log in the database][2360818].
+- Deploy a separate `sapmnt` share for each SAP system.
+- Don't use the `sapmnt` share for any other activity, such as interfaces.
+- Don't use the `saptrans` share for any other activity, such as interfaces.
+- Avoid consolidating the shares for too many SAP systems in a single storage account. There are also [scalability and performance targets for storage accounts](../../../storage/files/storage-files-scale-targets.md#storage-account-scale-targets). Be careful to not exceed the limits for the storage account, too.
+- In general, don't consolidate the shares for more than *five* SAP systems in a single storage account. This guideline helps you avoid exceeding the storage account limits and simplifies performance analysis.
- In general, avoid mixing shares like `sapmnt` for non-production and production SAP systems in the same storage account.
-- We recommend to deploy on SLES 15 SP2 or higher to benefit from [NFS client improvements](../../../storage/files/storage-troubleshooting-files-nfs.md#ls-hangs-for-large-directory-enumeration-on-some-kernels).
+- We recommend that you deploy on SLES 15 SP2 or later to benefit from [NFS client improvements](../../../storage/files/storage-troubleshooting-files-nfs.md#ls-hangs-for-large-directory-enumeration-on-some-kernels).
- Use a private endpoint. In the unlikely event of a zonal failure, your NFS sessions automatically redirect to a healthy zone. You don't have to remount the NFS shares on your VMs.
-- If you're deploying your VMs across Availability Zones, use [Storage account with ZRS](../../../storage/common/storage-redundancy.md#zone-redundant-storage) in the Azure regions that supports ZRS.
+- If you're deploying your VMs across availability zones, use a [storage account with ZRS](../../../storage/common/storage-redundancy.md#zone-redundant-storage) in an Azure region that supports ZRS.
- Azure Files doesn't currently support automatic cross-region replication for disaster recovery scenarios.
+### Deploy Azure NetApp Files resources
-#### Deploy Azure NetApp Files resources
-
-First, check if Azure NetApp files service is available in your region of choice: [Azure regions](https://azure.microsoft.com/global-infrastructure/services/?products=netapp).
-
-1. Create the NetApp account in the selected Azure region, following the [instructions to create NetApp Account](../../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
-2. Set up Azure NetApp Files capacity pool, following the [instructions on how to set up Azure NetApp Files capacity pool](../../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
-The SAP Netweaver architecture presented in this article uses single Azure NetApp Files capacity pool, Premium SKU. We recommend Azure NetApp Files Premium SKU for SAP Netweaver application workload on Azure.
+1. Check that the Azure NetApp Files service is available in your [Azure region of choice](https://azure.microsoft.com/global-infrastructure/services/?products=netapp).
+1. Create the NetApp account in the selected Azure region. Follow [these instructions](../../../azure-netapp-files/azure-netapp-files-create-netapp-account.md).
+1. Set up the Azure NetApp Files capacity pool. Follow [these instructions](../../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
-3. Delegate a subnet to Azure NetApp files as described in the [instructions Delegate a subnet to Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
+ The SAP NetWeaver architecture presented in this article uses a single Azure NetApp Files capacity pool, Premium SKU. We recommend Azure NetApp Files Premium SKU for SAP NetWeaver application workloads on Azure.
+1. Delegate a subnet to Azure NetApp Files, as described in [these instructions](../../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).
+1. Deploy Azure NetApp Files volumes by following [these instructions](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). Deploy the volumes in the designated Azure NetApp Files [subnet](/rest/api/virtualnetwork/subnets). The IP addresses of the Azure NetApp volumes are assigned automatically.
-4. Deploy Azure NetApp Files volumes, following the [instructions to create a volume for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-create-volumes.md). Deploy the volumes in the designated Azure NetApp Files [subnet](/rest/api/virtualnetwork/subnets). The IP addresses of the Azure NetApp volumes are assigned automatically. Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure Virtual Network or in peered Azure Virtual Networks. In this example we use two Azure NetApp Files volumes: sap<b>NW1</b> and trans. The file paths that are mounted to the corresponding mount points are /usrsap<b>nw1</b>/sapmnt<b>NW1</b>, /usrsap<b>nw1</b>/usrsap<b>NW1</b> and so on.
+ Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. This example uses two Azure NetApp Files volumes: `sapNW1` and `trans`. The file paths that are mounted to the corresponding mount points are:
- 1. volume sap<b>NW1</b> (nfs://10.27.1.5/usrsap<b>nw1</b>/sapmnt<b>NW1</b>)
- 2. volume sap<b>NW1</b> (nfs://10.27.1.5/usrsap<b>nw1</b>/usrsap<b>NW1</b>)
- 3. volume trans (nfs://10.27.1.5/trans)
+ - Volume `sapNW1` (`nfs://10.27.1.5/usrsapnw1/sapmntNW1`)
+ - Volume `sapNW1` (`nfs://10.27.1.5/usrsapnw1/usrsapNW1`)
+ - Volume `trans` (`nfs://10.27.1.5/trans`)
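If you script the Azure NetApp Files deployment instead of using the portal, a minimal Azure CLI sketch could look like the following. The resource group, NetApp account, capacity pool, location, virtual network, and delegated subnet names are placeholders; the volume name, file path, size, and NFSv4.1 protocol follow the example in this article.

```bash
# Placeholder names: MyResourceGroup, mynetappaccount, sappool, MyVnet, anf-subnet, westeurope.
az netappfiles account create --resource-group MyResourceGroup \
  --account-name mynetappaccount --location westeurope

az netappfiles pool create --resource-group MyResourceGroup \
  --account-name mynetappaccount --pool-name sappool \
  --location westeurope --size 4 --service-level Premium

# Repeat the volume creation for the trans volume.
az netappfiles volume create --resource-group MyResourceGroup \
  --account-name mynetappaccount --pool-name sappool --name sapnw1 \
  --location westeurope --service-level Premium --usage-threshold 100 \
  --file-path sapnw1 --protocol-types NFSv4.1 \
  --vnet MyVnet --subnet anf-subnet
```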
-The SAP file systems that don't need to be shared can also be deployed on [Azure disk storage](../../disks-types.md#premium-ssds). For example /usr/sap/<b>NW1</b>/D<b>02</b>, /usr/sap/<b>NW1</b>/D<b>03</b>) could be deployed as Azure disk storage.
+The SAP file systems that don't need to be shared can also be deployed on [Azure disk storage](../../disks-types.md#premium-ssds). For example, `/usr/sap/NW1/D02` and `/usr/sap/NW1/D03` could be deployed as Azure disk storage.
-### Important considerations for NFS on Azure NetApp Files
+#### Important considerations for NFS on Azure NetApp Files
-When considering Azure NetApp Files for the SAP Netweaver High Availability architecture, be aware of the following important considerations:
+When you're considering Azure NetApp Files for the SAP NetWeaver high-availability architecture, be aware of the following important considerations:
-- The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1 TiB increments.
-- The minimum volume is 100 GiB
-- Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes will be mounted, must be in the same Azure Virtual Network or in [peered virtual networks](../../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over VNET peering in the same region is supported. Azure NetApp access over global peering isn't yet supported.
-- The selected virtual network must have a subnet, delegated to Azure NetApp Files.
-- The throughput and performance characteristics of an Azure NetApp Files volume is a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). While sizing the SAP Azure NetApp volumes, make sure that the resulting throughput meets the application requirements.
-- Azure NetApp Files offers [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients, the access type (Read&Write, Read Only, etc.).
-- Azure NetApp Files feature isn't zone aware yet. Currently Azure NetApp Files feature isn't deployed in all Availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
+- The minimum capacity pool is 4 tebibytes (TiB). You can increase the size of the capacity pool in 1-TiB increments.
+- The minimum volume is 100 GiB.
+- Azure NetApp Files and all virtual machines where Azure NetApp Files volumes will be mounted must be in the same Azure virtual network or in [peered virtual networks](../../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over virtual network peering in the same region is supported. Azure NetApp Files access over global peering isn't yet supported.
+- The selected virtual network must have a subnet that's delegated to Azure NetApp Files.
+- The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-service-levels.md). When you're sizing the Azure NetApp Files volumes for SAP, make sure that the resulting throughput meets the application's requirements.
+- Azure NetApp Files offers an [export policy](../../../azure-netapp-files/azure-netapp-files-configure-export-policy.md). You can control the allowed clients and the access type (for example, read/write or read-only).
+- Azure NetApp Files isn't zone aware yet. Currently, Azure NetApp Files isn't deployed in all availability zones in an Azure region. Be aware of the potential latency implications in some Azure regions.
- Azure NetApp Files volumes can be deployed as NFSv3 or NFSv4.1 volumes. Both protocols are supported for the SAP application layer (ASCS/ERS, SAP application servers).
-## Setting up (A)SCS
+## Set up ASCS
-Next, you will prepare and install the SAP (A)SCS / ERS instances.
+Next, you'll prepare and install the SAP ASCS and ERS instances.
-### Create Pacemaker cluster
+### Create a Pacemaker cluster
-Follow the steps in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to create a basic Pacemaker cluster for SAP (A)SCS.
+Follow the steps in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to create a basic Pacemaker cluster for SAP ASCS.
### Prepare for installation
-The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2.
+The following items are prefixed with:
-1. **[A]** Install the latest version of SUSE Connector
+- **[A]**: Applicable to all nodes.
+- **[1]**: Applicable to only node 1.
+- **[2]**: Applicable to only node 2.
+
+1. **[A]** Install the latest version of the SUSE connector.
```bash sudo zypper install sap-suse-cluster-connector ```
-2. **[A]** Install resource agent `sapstartsrv`.
+1. **[A]** Install the `sapstartsrv` resource agent.
```bash sudo zypper install sapstartsrv-resource-agents ```
-3. **[A]** Update SAP resource agents
+1. **[A]** Update SAP resource agents.
- A patch for the resource-agents package is required to use the configuration, that is described in this article. You can check, if the patch is already installed with the following command
+ To use the configuration that this article describes, you need a patch for the resource-agents package. To check if the patch is already installed, use the following command.
```bash sudo grep 'parameter name="IS_ERS"' /usr/lib/ocf/resource.d/heartbeat/SAPInstance ```
- The output should be similar to
+ The output should be similar to the following example.
```bash <parameter name="IS_ERS" unique="0" required="0">; ```
- If the grep command doesn't find the IS_ERS parameter, you need to install the patch listed on [the SUSE download page](https://download.suse.com/patch/finder/#bu=suse&familyId=&productId=&dateRange=&startDate=&endDate=&priority=&architecture=&keywords=resource-agents)
-
+ If the `grep` command doesn't find the `IS_ERS` parameter, you need to install the patch listed on [the SUSE download page](https://download.suse.com/patch/finder/#bu=suse&familyId=&productId=&dateRange=&startDate=&endDate=&priority=&architecture=&keywords=resource-agents).
> [!IMPORTANT]
- > You need to install at least `sapstartsrv-resource-agents` version 0.91 and `resoure-agents` 4.x from November 2021.
+ > You need to install at least `sapstartsrv-resource-agents` version 0.91 and `resource-agents` 4.x from November 2021.
-3. **[A]** Setup host name resolution
+1. **[A]** Set up host name resolution.
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
- Replace the IP address and the hostname in the following commands
+ You can either use a DNS server or modify `/etc/hosts` on all nodes. This example shows how to use the `/etc/hosts` file.
```bash sudo vi /etc/hosts ```
- Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
+ Insert the following lines to `/etc/hosts`. Change the IP address and host name to match your environment.
```bash # IP address of cluster node 1 10.27.0.6 sap-cl1 # IP address of cluster node 2 10.27.0.7 sap-cl2
- # IP address of the load balancer frontend configuration for SAP Netweaver ASCS
+ # IP address of the load balancer's front-end configuration for SAP NetWeaver ASCS
10.27.0.9 sapascs
- # IP address of the load balancer frontend configuration for SAP Netweaver ERS
+ # IP address of the load balancer's front-end configuration for SAP NetWeaver ERS
10.27.0.10 sapers ```
-3. **[A]** Configure SWAP file
+1. **[A]** Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
- # Check if property ResourceDisk.Format is already set to y and if not, set it
+ # Check if the ResourceDisk.Format property is already set to y, and if not, set it.
ResourceDisk.Format=y
- # Set the property ResourceDisk.EnableSwap to y
- # Create and use swapfile on resource disk.
+ # Set the ResourceDisk.EnableSwap property to y.
+ # Create and use the SWAP file on the resource disk.
ResourceDisk.EnableSwap=y
- # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
- # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
- # Size of the swapfile.
+ # Set the size of the SWAP file with the ResourceDisk.SwapSizeMB property.
+   # The free space of the resource disk varies by virtual machine size. Don't set a value that's too big. You can check the SWAP space by using the swapon command.
ResourceDisk.SwapSizeMB=2000 ```
- Restart the Agent to activate the change
+ Restart the agent to activate the change.
```bash sudo service waagent restart ```
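   To confirm that the swap file is active after the agent restart, you can check the swap configuration:

   ```bash
   # The swap file on the resource disk should be listed with the size you configured.
   sudo swapon --show
   free -m
   ```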
-### Prepare SAP directories (NFS on Azure Files)
+### Prepare SAP directories if you're using NFS on Azure Files
-1. **[1]** Create the SAP directories on the NFS share.
- Mount temporarily the NFS share **sapnw1** one of the VMs and create the SAP directories that will be used as nested mount points.
+1. **[1]** Create the SAP directories on the NFS share.
+
+ Temporarily mount the NFS share `sapnw1` to one of the VMs and create the SAP directories that will be used as nested mount points.
```bash
- # mount temporarily the volume
+ # Temporarily mount the volume.
sudo mkdir -p /saptmp
sudo mount -t nfs sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1 /saptmp -o vers=4,minorversion=1,sec=sys
- # create the SAP directories
+ # Create the SAP directories.
cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
- # unmount the volume and delete the temporary directory
+ # Unmount the volume and delete the temporary directory.
cd ..
sudo umount /saptmp
sudo rmdir /saptmp
```
-2. **[A]** Create the shared directories
+1. **[A]** Create the shared directories.
```bash sudo mkdir -p /sapmnt/NW1
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo chattr +i /usr/sap/trans ```
-2. **[A]** Mount the file systems - with the simple mount configuration the file systems aren't controlled by the Pacemaker cluster.
+1. **[A]** Mount the file systems.
+
+ With the simple mount configuration, the Pacemaker cluster doesn't control the file systems.
```bash echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/usrsapNW1/ /usr/sap/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
- # Mount the file systems
+ # Mount the file systems.
mount -a ```
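   As a quick check that all three NFS mount points are active, you can list the mounts, for example:

   ```bash
   # List the active NFS 4.1 mounts and verify the three SAP mount points.
   findmnt -t nfs4
   df -h /sapmnt/NW1 /usr/sap/NW1 /usr/sap/trans
   ```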
-### Prepare SAP directories (NFS on Azure NetApp Files)
+### Prepare SAP directories if you're using NFS on Azure NetApp Files
-1. **[A]** Disable ID mapping (if using NFSv4.1)
-The instructions in this section are only applicable, if using Azure NetApp Files volumes with NFSv4.1 protocol. Perform the configuration on all VMs, where Azure NetApp Files NFSv4.1 volumes will be mounted.
-
- 1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`** and the mapping is set to **nobody**.
+The instructions in this section are applicable only if you're using Azure NetApp Files volumes with the NFSv4.1 protocol. Perform the configuration on all VMs where Azure NetApp Files NFSv4.1 volumes will be mounted.
+
+1. **[A]** Disable ID mapping.
+
+ 1. Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, `defaultv4iddomain.com`. Also verify that the mapping is set to `nobody`.
- > [!IMPORTANT]
- > Make sure to set the NFS domain in `/etc/idmapd.conf` on the VM to match the default domain configuration on Azure NetApp Files: **`defaultv4iddomain.com`**. If there's a mismatch between the domain configuration on the NFS client (i.e. the VM) and the NFS server, i.e. the Azure NetApp configuration, then the permissions for files on Azure NetApp volumes that are mounted on the VMs will be displayed as `nobody`.
-
```bash
sudo cat /etc/idmapd.conf
# Example
The instructions in this section are only applicable, if using Azure NetApp File
Nobody-Group = nobody ```
- 2. Verify `nfs4_disable_idmapping`. It should be set to **Y**. To create the directory structure where `nfs4_disable_idmapping` is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers.
+ 1. Verify `nfs4_disable_idmapping`. It should be set to `Y`.
+
+ To create the directory structure where `nfs4_disable_idmapping` is located, run the `mount` command. You won't be able to manually create the directory under `/sys/modules`, because access is reserved for the kernel and drivers.
```bash
- # Check nfs4_disable_idmapping
+ # Check nfs4_disable_idmapping.
cat /sys/module/nfs/parameters/nfs4_disable_idmapping
- # If you need to set nfs4_disable_idmapping to Y
+ # If you need to set nfs4_disable_idmapping to Y:
mkdir /mnt/tmp
mount 10.27.1.5:/sapnw1 /mnt/tmp
umount /mnt/tmp
echo "Y" > /sys/module/nfs/parameters/nfs4_disable_idmapping
- # Make the configuration permanent
+ # Make the configuration permanent.
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf ```
-4. **[1]** Create SAP directories in the Azure NetApp Files volume.
- Mount temporarily the Azure NetApp Files volume on one of the VMs and create the SAP directories(file paths).
-
- ```bash
- # mount temporarily the volume
+1. **[1]** Temporarily mount the Azure NetApp Files volume on one of the VMs and create the SAP directories (file paths).
+
+ ```bash
+ # Temporarily mount the volume.
sudo mkdir -p /saptmp
- # If using NFSv3
+ # If you're using NFSv3:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 10.27.1.5:/sapnw1 /saptmp
- # If using NFSv4.1
+ # If you're using NFSv4.1:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=4.1,sec=sys,tcp 10.27.1.5:/sapnw1 /saptmp
- # create the SAP directories
+ # Create the SAP directories.
cd /saptmp
sudo mkdir -p sapmntNW1
sudo mkdir -p usrsapNW1
- # unmount the volume and delete the temporary directory
+ # Unmount the volume and delete the temporary directory.
cd ..
sudo umount /saptmp
sudo rmdir /saptmp
```
-5. **[A]** Create the shared directories
+1. **[A]** Create the shared directories.
```bash sudo mkdir -p /sapmnt/NW1
The instructions in this section are only applicable, if using Azure NetApp File
sudo chattr +i /usr/sap/trans ```
-6. **[A]** Mount the file systems - with the simple mount configuration the file systems aren't controlled by the Pacemaker cluster.
+1. **[A]** Mount the file systems.
+
+ With the simple mount configuration, the Pacemaker cluster doesn't control the file systems.
```bash
- # If using NFSv3
+ # If you're using NFSv3:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=3,hard 0 0" >> /etc/fstab echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs vers=3,hard 0 0" >> /etc/fstab echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=3,hard 0 0" >> /etc/fstab
- # If using NFSv4.1
+ # If you're using NFSv4.1:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab echo "10.27.1.5:/sapnw1/usrsapNW1 /usr/sap/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
- # Mount the file systems
+ # Mount the file systems.
mount -a ```
-### Installing SAP NetWeaver ASCS/ERS
+### Install SAP NetWeaver ASCS and ERS
-1. **[1]** Create a virtual IP resource and health-probe for the ASCS instance
+1. **[1]** Create a virtual IP resource and health probe for the ASCS instance.
> [!IMPORTANT]
- > We recommend using azure-lb resource agent, which is part of package resource-agents with a minimum version resource-agents-4.3.0184.6ee15eb2-4.13.1.
+ > We recommend using the `azure-lb` resource agent, which is part of the resource-agents package with a minimum version of `resource-agents-4.3.0184.6ee15eb2-4.13.1`.
```bash
The instructions in this section are only applicable, if using Azure NetApp File
meta resource-stickiness=3000 ```
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
+ Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
```bash sudo crm_mon -r
The instructions in this section are only applicable, if using Azure NetApp File
```
-2. **[1]** Install SAP NetWeaver ASCS
+2. **[1]** Install SAP NetWeaver ASCS as root on the first node.
- Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example ***sapascs***, ***10.27.0.9*** and the instance number that you used for the probe of the load balancer, for example ***00***.
+ Use a virtual host name that maps to the IP address of the load balancer's front-end configuration for ASCS (for example, `sapascs`, `10.27.0.9`) and the instance number that you used for the probe of the load balancer (for example, `00`).
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual hostname.
+ You can use the `sapinst` parameter `SAPINST_REMOTE_ACCESS_USER` to allow a non-root user to connect to `sapinst`. You can use the `SAPINST_USE_HOSTNAME` parameter to install SAP by using a virtual host name.
```bash sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=<virtual_hostname> ```
- If the installation fails to create a subfolder in /usr/sap/**NW1**/ASCS**00**, try setting the owner and group of the ASCS**00** folder and retry.
+ If the installation fails to create a subfolder in `/usr/sap/NW1/ASCS00`, set the owner and group of the `ASCS00` folder and retry.
```bash chown nw1adm /usr/sap/NW1/ASCS00 chgrp sapsys /usr/sap/NW1/ASCS00 ```
-3. **[1]** Create a virtual IP resource and health-probe for the ERS instance
+3. **[1]** Create a virtual IP resource and health probe for the ERS instance.
```bash sudo crm node online sap-cl2
The instructions in this section are only applicable, if using Azure NetApp File
sudo crm configure group g-NW1_ERS nc_NW1_ERS vip_NW1_ERS ```
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
+ Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
```bash sudo crm_mon -r
The instructions in this section are only applicable, if using Azure NetApp File
# vip_NW1_ERS (ocf::heartbeat:IPaddr2): Started sap-cl2 ```
-4. **[2]** Install SAP NetWeaver ERS
+4. **[2]** Install SAP NetWeaver ERS as root on the second node.
- Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example **sapers**, **10.27.0.10** and the instance number that you used for the probe of the load balancer, for example **01**.
+ Use a virtual host name that maps to the IP address of the load balancer's front-end configuration for ERS (for example, `sapers`, `10.27.0.10`) and the instance number that you used for the probe of the load balancer (for example, `01`).
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst. You can use parameter SAPINST_USE_HOSTNAME to install SAP, using virtual hostname.
+ You can use the `SAPINST_REMOTE_ACCESS_USER` parameter to allow a non-root user to connect to `sapinst`. You can use the `SAPINST_USE_HOSTNAME` parameter to install SAP by using a virtual host name.
```bash
<swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin SAPINST_USE_HOSTNAME=virtual_hostname
```

> [!NOTE]
- > Use SWPM SP 20 PL 05 or higher. Lower versions do not set the permissions correctly and the installation will fail.
+ > Use SWPM SP 20 PL 05 or later. Earlier versions don't set the permissions correctly, and they cause the installation to fail.
- If the installation fails to create a subfolder in /usr/sap/**NW1**/ERS**01**, try setting the owner and group of the ERS**01** folder and retry.
+ If the installation fails to create a subfolder in `/usr/sap/NW1/ERS01`, set the owner and group of the `ERS01` folder and retry.
```bash chown nw1adm /usr/sap/NW1/ERS01 chgrp sapsys /usr/sap/NW1/ERS01 ```
-5. **[1]** Adapt the ASCS/SCS and ERS instance profiles
+5. **[1]** Adapt the ASCS instance profile.
- * ASCS/SCS profile
- ```bash sudo vi /sapmnt/NW1/profile/NW1_ASCS00_sapascs
- # Change the restart command to a start command
- #Restart_Program_01 = local $(_EN) pf=$(_PF)
+ # Change the restart command to a start command.
+ # Restart_Program_01 = local $(_EN) pf=$(_PF).
Start_Program_01 = local $(_EN) pf=$(_PF)
- # Add the following lines
+ # Add the following lines.
service/halib = $(DIR_CT_RUN)/saphascriptco.so service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
- # Add the keep alive parameter, if using ENSA1
+ # Add the keepalive parameter, if you're using ENSA1.
enque/encni/set_so_keepalive = true ```
- For both ENSA1 and ENSA2, make sure that the `keepalive` OS parameters are set as described in SAP note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
+ For Standalone Enqueue Server 1 and 2 (ENSA1 and ENSA2), make sure that the `keepalive` OS parameters are set as described in SAP Note [1410736](https://launchpad.support.sap.com/#/notes/1410736).
- * ERS profile
+ Now adapt the ERS instance profile.
```bash sudo vi /sapmnt/NW1/profile/NW1_ERS01_sapers
- # Change the restart command to a start command
- #Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
+ # Change the restart command to a start command.
+ # Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID).
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
- # Add the following lines
+ # Add the following lines.
service/halib = $(DIR_CT_RUN)/saphascriptco.so service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
- # remove Autostart from ERS profile
- # Autostart = 1
+ # Remove Autostart from the ERS profile.
+   # Autostart = 1
```
-6. **[A]** Configure Keep Alive
+6. **[A]** Configure `keepalive`.
- The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1/ENSA2. Read [SAP Note 1410736][1410736] for more information.
+ Communication between the SAP NetWeaver application server and ASCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout.
+
+   To prevent this disconnection, if you're using ENSA1, you need to set a parameter in the SAP NetWeaver ASCS profile. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1 and ENSA2. For more information, read SAP Note [1410736][1410736].
```bash
- # Change the Linux system configuration
+ # Change the Linux system configuration.
sudo sysctl net.ipv4.tcp_keepalive_time=300 ```
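   The `sysctl` command changes the setting only for the running system. A minimal sketch for persisting it across reboots (the file name is just an example):

   ```bash
   # Persist the keepalive setting so that it survives reboots. The file name is an arbitrary example.
   echo "net.ipv4.tcp_keepalive_time = 300" | sudo tee /etc/sysctl.d/92-sap-keepalive.conf
   sudo sysctl --system
   ```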
-7. **[A]** Configure the SAP users after the installation
+7. **[A]** Configure the SAP users after the installation.
```bash
- # Add sidadm to the haclient group
+ # Add sidadm to the haclient group.
sudo usermod -aG haclient nw1adm ```
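   You can confirm the group membership afterward:

   ```bash
   # The output should include the haclient group.
   id nw1adm
   ```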
-8. **[1]** Add the ASCS and ERS SAP services to the `sapservice` file
+8. **[1]** Add the ASCS and ERS SAP services to the `sapservice` file.
- Add the ASCS service entry to the second node and copy the ERS service entry to the first node.
+ Add the ASCS service entry to the second node, and copy the ERS service entry to the first node.
```bash cat /usr/sap/sapservices | grep ASCS00 | sudo ssh sap-cl2 "cat >>/usr/sap/sapservices" sudo ssh sap-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices ```
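   To double-check the result, confirm that both instance entries now exist on both nodes, for example:

   ```bash
   # Both nodes should list the ASCS00 and ERS01 entries.
   grep -E 'ASCS00|ERS01' /usr/sap/sapservices
   sudo ssh sap-cl2 "grep -E 'ASCS00|ERS01' /usr/sap/sapservices"
   ```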
-9. **[A]** Enable services `sapping` and `sappong`. Agent `sapping` runs before `sapinit` to hide file `/usr/sap/sapservices` and `sappong` runs after `sapinit` to unhide the `sapservices` file during VM boot. Therefore `SAPStartSrv` isn't started automatically for an SAP instance at boot time, as it is managed by the Pacemaker cluster.
+9. **[A]** Enable `sapping` and `sappong`. The `sapping` agent runs before `sapinit` to hide the `/usr/sap/sapservices` file. The `sappong` agent runs after `sapinit` to unhide the `sapservices` file during VM boot. `SAPStartSrv` isn't started automatically for an SAP instance at boot time, because the Pacemaker cluster manages it.
```bash sudo systemctl enable sapping sudo systemctl enable sappong ```
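   You can verify that both services are enabled:

   ```bash
   # Both services should report "enabled".
   sudo systemctl is-enabled sapping sappong
   ```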
-10. **[1]** Create the SAP cluster resources
+10. **[1]** Create the SAP cluster resources.
- If using enqueue server 1 architecture (ENSA1), define the resources as follows:
+ If you're using an ENSA1 architecture, define the resources as follows.
```bash sudo crm configure property maintenance-mode="true"
The instructions in this section are only applicable, if using Azure NetApp File
sudo crm configure primitive rsc_sapstartsrv_NW1_ERS01 ocf:suse:SAPStartSrv \ params InstanceName=NW1_ERS01_nw1ers
- # If using NFS on Azure Files or NFSv3 on Azure NetApp Files
+ # If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
- # If using NFS on Azure Files or NFSv3 on Azure NetApp Files
+ # If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \ AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true \ meta priority=1000
- # If using NFSv4.1 on Azure NetApp Files
+ # If you're using NFSv4.1 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
- # If using NFSv4.1 on Azure NetApp Files
+ # If you're using NFSv4.1 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
The instructions in this section are only applicable, if using Azure NetApp File
sudo crm configure property maintenance-mode="false" ```
- SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support.
+ SAP introduced support for [ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
- If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), define the resources as follows:
+ If you're using an ENSA2 architecture, define the resources as follows.
```bash sudo crm configure property maintenance-mode="true"
The instructions in this section are only applicable, if using Azure NetApp File
sudo crm configure primitive rsc_sapstartsrv_NW1_ERS01 ocf:suse:SAPStartSrv \ params InstanceName=NW1_ERS01_nw1ers
- # If using NFS on Azure Files or NFSv3 on Azure NetApp Files
+ # If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \ meta resource-stickiness=5000
- # If using NFS on Azure Files or NFSv3 on Azure NetApp Files
+ # If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \ AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true
- # If using NFSv4.1 on Azure NetApp Files
+ # If you're using NFSv4.1 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false MINIMAL_PROBE=true \ meta resource-stickiness=5000
- # If using NFSv4.1 on Azure NetApp Files
+ # If you're using NFSv4.1 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
The instructions in this section are only applicable, if using Azure NetApp File
sudo crm configure property maintenance-mode="false" ```
- If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
- Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running.
+ If you're upgrading from an older version and switching to ENSA2, see SAP Note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
+
+ Make sure that the cluster status is OK and that all resources are started. It isn't important which node the resources are running on.
```bash sudo crm_mon -r
The instructions in this section are only applicable, if using Azure NetApp File
# rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1 ```
-## <a name="2d6008b0-685d-426c-b59e-6cd281fd45d7"></a>SAP application server preparation
-
-Some databases require that the database instance installation is executed on an application server. Prepare the application server virtual machines to be able to use them in these cases.
+## <a name="2d6008b0-685d-426c-b59e-6cd281fd45d7"></a>Prepare the SAP application server
-The steps below assume that you install the application server on a server different from the ASCS/SCS and HANA servers.
+Some databases require you to run the database instance installation from an application server. Prepare the application server VMs so that you can use them for this purpose.
-### SAP application server preparation common steps
+The following common steps assume that you install the application server on a server that's different from the ASCS and HANA servers:
-1. Setup host name resolution
+1. Set up host name resolution.
- You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file.
- Replace the IP address and the hostname in the following commands
+ You can either use a DNS server or modify `/etc/hosts` on all nodes. This example shows how to use the `/etc/hosts` file.
```bash sudo vi /etc/hosts ```
- Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment
+ Insert the following lines to `/etc/hosts`. Change the IP address and host name to match your environment.
```bash 10.27.0.6 sap-cl1 10.27.0.7 sap-cl2
- # IP address of the load balancer frontend configuration for SAP Netweaver ASCS
+ # IP address of the load balancer's front-end configuration for SAP NetWeaver ASCS
10.27.0.9 sapascs
- # IP address of the load balancer frontend configuration for SAP Netweaver ERS
+ # IP address of the load balancer's front-end configuration for SAP NetWeaver ERS
10.27.0.10 sapers 10.27.0.8 sapa01 10.27.0.12 sapa02 ```
-1. Configure SWAP file
+1. Configure the SWAP file.
```bash sudo vi /etc/waagent.conf
- # Set the property ResourceDisk.EnableSwap to y
- # Create and use swapfile on resource disk.
+ # Set the ResourceDisk.EnableSwap property to y.
+ # Create and use the SWAP file on the resource disk.
ResourceDisk.EnableSwap=y
- # Set the size of the SWAP file with property ResourceDisk.SwapSizeMB
- # The free space of resource disk varies by virtual machine size. Make sure that you do not set a value that is too big. You can check the SWAP space with command swapon
- # Size of the swapfile.
+ # Set the size of the SWAP file by using the ResourceDisk.SwapSizeMB property.
+ # The free space of the resource disk varies by virtual machine size. Don't set a value that's too big. You can check the SWAP space by using the swapon command.
ResourceDisk.SwapSizeMB=2000 ```
- Restart the Agent to activate the change
+ Restart the agent to activate the change.
```bash sudo service waagent restart ``` ### Prepare SAP directories
-Follow the instructions in one of the following sections, to prepare the SAP directories on the SAP application server VMs:
-
-- If you are using NFS on Azure Files, follow the instructions in **Prepare SAP directories (NFS on Azure Files)**. -- If you are using NFS on Azure NetApp Files, follow the instructions in **Prepare SAP directories (NFS on NetApp Files)**.
-
-#### Prepare SAP directories (NFS on Azure files)
+If you're using NFS on Azure Files, use the following instructions to prepare the SAP directories on the SAP application server VMs:
-1. Create the mount points
+1. Create the mount points.
```bash sudo mkdir -p /sapmnt/NW1
Follow the instructions in one of the following sections, to prepare the SAP dir
sudo chattr +i /usr/sap/trans ```
-2. Mount the file systems
+2. Mount the file systems.
```bash echo "sapnfsafs.file.core.windows.net:/sapnfsafs/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab echo "sapnfsafs.file.core.windows.net:/sapnfsafs/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys 0 0" >> /etc/fstab
- # Mount the file systems
+ # Mount the file systems.
mount -a ```
-#### Prepare SAP directories (NFS on Azure NetApp Files)
-1. Create the mount points
+If you're using NFS on Azure NetApp Files, use the following instructions to prepare the SAP directories on the SAP application server VMs:
+
+1. Create the mount points.
```bash sudo mkdir -p /sapmnt/NW1
Follow the instructions in one of the following sections, to prepare the SAP dir
sudo chattr +i /sapmnt/NW1 sudo chattr +i /usr/sap/trans
-2. Mount the file systems
+2. Mount the file systems.
```bash
- # If using NFSv3
+ # If you're using NFSv3:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=3,hard 0 0" >> /etc/fstab echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=3, hard 0 0" >> /etc/fstab
- # If using NFSv4.1
+ # If you're using NFSv4.1:
echo "10.27.1.5:/sapnw1/sapmntNW1 /sapmnt/NW1 nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab echo "10.27.1.5:/saptrans /usr/sap/trans nfs vers=4,minorversion=1,sec=sys,hard 0 0" >> /etc/fstab
- # Mount the file systems
+ # Mount the file systems.
mount -a ```
-## Install database
+## Install the database
-In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this installation. For more information on how to install SAP HANA in Azure, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]. For a list of supported databases, see [SAP Note 1928533][1928533].
+In this example, SAP NetWeaver is installed on SAP HANA. You can use any supported database for this installation. For more information on how to install SAP HANA in Azure, see [High availability of SAP HANA on Azure virtual machines][sap-hana-ha]. For a list of supported databases, see SAP Note [1928533][1928533].
-Install the SAP NetWeaver database instance as root using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the database.
-You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+Install the SAP NetWeaver database instance as root by using a virtual host name that maps to the IP address of the load balancer's front-end configuration for the database. You can use the `SAPINST_REMOTE_ACCESS_USER` parameter to allow a non-root user to connect to `sapinst`.
- ```bash
- sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
- ```
+```bash
+sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
+```
-## SAP NetWeaver application server installation
+## Install the SAP NetWeaver application server
-Follow these steps to install an SAP application server.
+Follow these steps to install an SAP application server:
-1. **[A]** Prepare application server
- Follow the steps in the chapter [SAP NetWeaver application server preparation](high-availability-guide-suse-nfs-azure-files.md#2d6008b0-685d-426c-b59e-6cd281fd45d7) above to prepare the application server.
+1. **[A]** Prepare the application server.
+
+ Follow the steps in [SAP NetWeaver application server preparation](high-availability-guide-suse-nfs-azure-files.md#2d6008b0-685d-426c-b59e-6cd281fd45d7).
-2. **[A]** Install SAP NetWeaver application server.
- Install a primary or additional SAP NetWeaver applications server.
+2. **[A]** Install a primary or additional SAP NetWeaver application server.
- You can use the sapinst parameter SAPINST_REMOTE_ACCESS_USER to allow a non-root user to connect to sapinst.
+ You can use the `SAPINST_REMOTE_ACCESS_USER` parameter to allow a non-root user to connect to `sapinst`.
```bash
sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=sapadmin
```
-3. **[A]** Update SAP HANA secure store
+3. **[A]** Update the SAP HANA secure store to point to the virtual name of the SAP HANA system replication setup.
- Update the SAP HANA secure store to point to the virtual name of the SAP HANA System Replication setup.
+ Run the following command to list the entries.
- Run the following command to list the entries
```bash
hdbuserstore List
```
- The command should list all entries and should look similar to
+ The command should list all entries and should look similar to this example.
+
```bash
DATA FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.DAT
KEY FILE : /home/nw1adm/.hdb/sapa01/SSFS_HDB.KEY
Follow these steps to install an SAP application server.
DATABASE: NW1
```
- In this example, the IP address of the default entry points to the VM, not the load balancer. Change the entry to point to the virtual hostname of the load balancer. Make sure to use the same port and database name. For example, `30313` and `NW1` in the sample output.
+ In this example, the IP address of the default entry points to the VM, not the load balancer. Change the entry to point to the virtual host name of the load balancer. Be sure to use the same port and database name. For example, use `30313` and `NW1` in the sample output.
```bash
su - nw1adm
hdbuserstore SET DEFAULT nw1db:30313@NW1 SAPABAP1 <password of ABAP schema>
```
-## Test cluster setup
+## Test your cluster setup
-Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](./high-availability-guide-suse.md#test-the-cluster-setup).
+Thoroughly test your Pacemaker cluster. [Run the typical failover tests](./high-availability-guide-suse.md#test-the-cluster-setup).
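As a quick smoke test before working through the full failover test list linked above, the following is a minimal sketch using the crm shell on SLES. The node name `nw1-cl-0` is an assumed example; substitute the node and resource names from your own cluster, and treat this only as a starting point rather than a replacement for the documented tests.

```bash
# Show the current cluster state and where the SAP resources are running.
sudo crm status

# Put the node that currently hosts the ASCS instance into standby to force a failover.
# The node name nw1-cl-0 is an assumed example; use one of your own cluster nodes.
sudo crm node standby nw1-cl-0

# Check that the resources moved to the surviving node.
sudo crm status

# Bring the node back online after the failover has completed.
sudo crm node online nw1-cl-0
```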
## Next steps
-* [HA for SAP NW on Azure VMs on SLES for SAP applications multi-SID guide](./high-availability-guide-suse-multi-sid.md)
-* [SAP workload configurations with Azure Availability Zones](sap-ha-availability-zones.md)
+* [HA for SAP NetWeaver on Azure VMs on SLES for SAP applications multi-SID guide](./high-availability-guide-suse-multi-sid.md)
+* [SAP workload configurations with Azure availability zones](sap-ha-availability-zones.md)
* [Azure Virtual Machines planning and implementation for SAP][planning-guide]
* [Azure Virtual Machines deployment for SAP][deployment-guide]
* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]
+* [High Availability of SAP HANA on Azure VMs][sap-hana-ha]
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
Assume you have the following virtual networks in your subscription. Each virtua
To begin using the basic editor to create your conditional statement, you need to create a new network group.
-1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under *Settings*. Then select **+ Add** to create a new network group.
+1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under *Settings*. Then select **+ Create** to create a new network group.
-1. Enter a **Name** for the network group. Then select the **Conditional statements** tab.
+1. Enter a **Name** and an optional **Description** for the network group, and select **Add**.
+1. Select the network group from the list and select **Create Azure Policy**.
+1. Enter a **Policy name**. Leave the **Scope** selections as they are unless you need to change them.
+1. Under **Criteria**, select **Tags** from the drop-down under *Parameter* and then select **Exist** from the drop-down under *Operator*.
-1. Select **Tags** from the drop-down under *Parameter* and then select **Exist** from the drop-down under *Operator*.
-
-1. Enter **Prod** under *Condition*, then select **Evaluate**. You should only see VNet-A-EastUS, VNet-B-WestUS, and VNetA show up in the list.
-
-1. Add another conditional statement by selecting the logical operator **AND**. Select **Name** for the *Parameter* and **Contains** for the *Operator*. Enter **VNet-A** into the *Condition* field. Select **Evaluate** to see which virtual network shows up in the list. You should only see VNet-A-EastUS and VNet-A-WestUS.
-
-1. Select **Review + Create** and then select **Create** once validation has passed.
+1. Enter **Prod** under *Condition*, then select **Save**.
+1. After a few minutes, select your network group and select **Group Members** under *Settings*. You should see only VNet-A-EastUS, VNet-A-WestUS, and VNetA in the list.
> [!NOTE]
-> The **basic editor** is only available during the creation of a network group.
+> The **basic editor** is only available during the creation of an Azure Policy.
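If you'd rather manage this definition outside the portal, the following is a minimal Azure CLI sketch of an equivalent dynamic-membership policy. It assumes the network group already exists, mirrors the "Tags Exist / Prod" condition from the steps above, and uses placeholder names, file names, and resource IDs; check the dynamic network group documentation for the exact rule shape supported by your Network Manager version.

```bash
# Policy rule: add any virtual network that carries a "Prod" tag to the network group.
# The network group resource ID below is a placeholder.
cat > prod-vnets-rule.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "tags['Prod']", "exists": "true" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": {
      "networkGroupId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/networkManagers/<network-manager>/networkGroups/<network-group>"
    }
  }
}
EOF

# Create the policy definition in the network resource provider mode used for network group membership.
az policy definition create \
  --name "prod-vnets-to-network-group" \
  --mode "Microsoft.Network.Data" \
  --rules prod-vnets-rule.json

# Assign the definition at the scope that contains the virtual networks, for example a subscription.
az policy assignment create \
  --name "prod-vnets-to-network-group" \
  --policy "prod-vnets-to-network-group" \
  --scope "/subscriptions/<subscription-id>"
```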
## Advanced editor
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
description: Learn how to set up an Azure AD tenant for P2S Azure AD authenticat
Previously updated : 07/29/2022 Last updated : 09/06/2022
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
:::image type="content" source="./media/openvpn-create-azure-ad-tenant/configuration.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Azure Active Directory settings."::: > [!NOTE]
- > Make sure you include a trailing slash at the end of the `AadIssuerUri` **Issuer** value. Otherwise, the connection may fail.
+ > Make sure you include a trailing slash at the end of the **Issuer** value. Otherwise, the connection may fail.
>
1. Save your changes.
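As a point of reference for the preceding note, the issuer value typically has the form `https://sts.windows.net/<your-tenant-ID>/`, where the tenant ID shown here is a placeholder; the trailing slash at the end is the part that's easy to miss.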
Verify that you have an Azure AD tenant. If you don't have an Azure AD tenant, y
## Next steps
-* [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
+[Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md).
web-application-firewall Waf Front Door Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-best-practices.md
Previously updated : 07/18/2022 Last updated : 09/06/2022
For more information, see [Tuning Web Application Firewall (WAF) for Azure Front
After you've tuned your WAF, you should configure it to [run in prevention mode](waf-front-door-policy-settings.md#waf-mode). By running in prevention mode, you ensure the WAF actually blocks requests that it detects are malicious. Running in detection mode is useful while you tune and configure your WAF, but provides no protection.
+### Define your WAF configuration as code
+
+When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, then when you upgrade your WAF to use a newer ruleset version, you need to reconfigure the same exclusions against the new ruleset version. This process can be time-consuming and error-prone.
+
+Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep, or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions.
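As one illustration, the following is a minimal Azure CLI sketch of adding a managed-rule exclusion to an existing Front Door WAF policy. It assumes the `front-door` CLI extension is installed; the policy name, resource group, and selector value are placeholders, and the exact parameter names should be checked against the current CLI reference before use.

```bash
# Exclude a request body post argument (assumed to be named "comment") from evaluation
# by the Microsoft_DefaultRuleSet managed rule set. Policy name and resource group are placeholders.
az network front-door waf-policy managed-rules exclusion add \
  --policy-name "MyFrontDoorWafPolicy" \
  --resource-group "MyResourceGroup" \
  --type Microsoft_DefaultRuleSet \
  --match-variable RequestBodyPostArgNames \
  --operator Equals \
  --value "comment"
```

Keeping commands like this in source control means the same exclusions can be replayed after a ruleset version upgrade instead of being re-created by hand in the portal.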
+
## Managed ruleset best practices
### Enable default rule sets
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|||
|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
|941110|XSS Filter - Category 1: Script Tag Vector|
|941120|XSS Filter - Category 2: Event Handler Vector|
|941130|XSS Filter - Category 3: Attribute Vector|
The following rule groups and rules are available when using Web Application Fir
|942100|SQL Injection Attack Detected via libinjection|
|942110|SQL Injection Attack: Common Injection Testing Detected|
|942120|SQL Injection Attack: SQL Operator Detected|
-|942130|SQL Injection Attack: SQL Tautology Detected.|
|942140|SQL Injection Attack: Common DB Names Detected|
|942150|SQL Injection Attack|
|942160|Detects blind sqli tests using sleep() or benchmark().|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|||
|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
|941110|XSS Filter - Category 1: Script Tag Vector|
|941120|XSS Filter - Category 2: Event Handler Vector|
|941130|XSS Filter - Category 3: Attribute Vector|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|||
|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
|941110|XSS Filter - Category 1: Script Tag Vector|
|941120|XSS Filter - Category 2: Event Handler Vector|
|941130|XSS Filter - Category 3: Attribute Vector|
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|||
|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection.|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
|941110|XSS Filter - Category 1: Script Tag Vector|
|941120|XSS Filter - Category 2: Event Handler Vector|
|941130|XSS Filter - Category 3: Attribute Vector|
The following rule groups and rules are available when using Web Application Fir
|RuleId|Description|
|||
|941100|XSS Attack Detected via libinjection|
-|941101|XSS Attack Detected via libinjection|
+|941101|XSS Attack Detected via libinjection.<br />This rule detects requests with a *Referer* header.|
|941110|XSS Filter - Category 1 = Script Tag Vector|
|941130|XSS Filter - Category 3 = Attribute Vector|
|941140|XSS Filter - Category 4 = JavaScript URI Vector|
web-application-firewall Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/best-practices.md
Previously updated : 07/18/2022 Last updated : 09/06/2022
For more information, see [Troubleshoot Web Application Firewall (WAF) for Azure
After you've tuned your WAF, you should configure it to [run in prevention mode](create-waf-policy-ag.md#configure-waf-rules-optional). By running in prevention mode, you ensure the WAF actually blocks requests that it detects are malicious. Running in detection mode is useful while you tune and configure your WAF, but provides no protection.
+### Define your WAF configuration as code
+
+When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, then when you upgrade your WAF to use a newer ruleset version, you need to reconfigure the same exclusions against the new ruleset version. This process can be time-consuming and error-prone.
+
+Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep, or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions.
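For the Application Gateway WAF, a minimal Azure CLI sketch of the same idea follows. The policy name, resource group, and selector are placeholders, and the exact parameter set should be verified against the current CLI reference for your environment.

```bash
# Exclude request headers whose names start with an assumed "x-company-" prefix from WAF evaluation.
# Policy name and resource group are placeholders.
az network application-gateway waf-policy managed-rule exclusion add \
  --policy-name "MyAppGatewayWafPolicy" \
  --resource-group "MyResourceGroup" \
  --match-variable RequestHeaderNames \
  --selector-match-operator StartsWith \
  --selector "x-company-"
```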
+
## Managed ruleset best practices
### Enable core rule sets