Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | Application Proxy Integrate With Remote Desktop Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-remote-desktop-services.md | In an RDS deployment, the RD Web role and the RD Gateway role run on Internet-fa - You should already have [deployed RDS](/windows-server/remote/remote-desktop-services/rds-in-azure), and [enabled Application Proxy](../app-proxy/application-proxy-add-on-premises-application.md). Ensure you have satisfied the pre-requisites to enable Application Proxy, such as installing the connector, opening required ports and URLs, and enabling TLS 1.2 on the server. To learn which ports need to be opened, and other details, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](application-proxy-add-on-premises-application.md). - Your end users must use a compatible browser to connect to RD Web or the RD Web client. For more details see [Support for client configurations](#support-for-other-client-configurations). - When publishing RD Web, it is recommended to use the same internal and external FQDN. If the internal and external FQDNs are different then you should disable Request Header Translation to avoid the client receiving invalid links.+- If you are using the RD Web client, you *must* use the same internal and external FQDN. If the internal and external FQDNs are different, you will encounter websocket errors when making a RemoteApp connection through the RD Web client. - If you are using RD Web on Internet Explorer, you will need to enable the RDS ActiveX add-on. - If you are using the RD Web client, you will need to use the Application Proxy [connector version 1.5.1975 or later](./application-proxy-release-version-history.md). - For the Azure AD pre-authentication flow, users can only connect to resources published to them in the **RemoteApp and Desktops** pane. Users can't connect to a desktop using the **Connect to a remote PC** pane. |
active-directory | Concept Authentication Passwordless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md | The following providers offer FIDO2 security keys of different form factors that | Provider | Biometric | USB | NFC | BLE | FIPS Certified | |:-|:-:|:-:|:-:|:-:|:-:| | [AuthenTrend](https://authentrend.com/about-us/#pg-35-3) | ![y] | ![y]| ![y]| ![y]| ![n] |-| [ACS](https://www.acs.com.hk/en/products/553/pocketkey-fido%C2%AE-certified-usb-security-key/) | ![n] | ![y]| ![n]| ![n]| ![n] | +| [ACS](https://www.acs.com.hk/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [ATOS](https://atos.net/en/solutions/cyber-security/iot-and-ot-security/smart-card-solution-cardos-for-iot) | ![n] | ![y]| ![y]| ![n]| ![n] | | [Ciright](https://www.cyberonecard.com/) | ![n] | ![n]| ![y]| ![n]| ![n] | | [Crayonic](https://www.crayonic.com/keyvault) | ![y] | ![n]| ![y]| ![y]| ![n] | |
active-directory | Concept Fido2 Hardware Vendor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md | The following table lists partners who are Microsoft-compatible FIDO2 security k | Provider | Biometric | USB | NFC | BLE | FIPS Certified | |:-|:-:|:-:|:-:|:-:|:-:| | [AuthenTrend](https://authentrend.com/about-us/#pg-35-3) | ![y] | ![y]| ![y]| ![y]| ![n] |-| [ACS](https://www.acs.com.hk/en/products/553/pocketkey-fido%C2%AE-certified-usb-security-key/) | ![n] | ![y]| ![n]| ![n]| ![n] | +| [ACS](https://www.acs.com.hk/) | ![n] | ![y]| ![n]| ![n]| ![n] | | [ATOS](https://atos.net/en/solutions/cyber-security/iot-and-ot-security/smart-card-solution-cardos-for-iot) | ![n] | ![y]| ![y]| ![n]| ![n] | | [Ciright](https://www.cyberonecard.com/) | ![n] | ![n]| ![y]| ![n]| ![n] | | [Crayonic](https://www.crayonic.com/keyvault) | ![y] | ![n]| ![y]| ![y]| ![n] | |
active-directory | Location Condition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md | Locations such as your organization's public network ranges can be marked as tru - Conditional Access policies can include or exclude these locations. - Sign-ins from trusted named locations improve the accuracy of Azure AD Identity Protection's risk calculation, lowering a user's sign-in risk when they authenticate from a location marked as trusted.+- Locations marked as trusted cannot be deleted. Remove the trusted designation before attempting to delete. > [!WARNING] > Even if you know the network and mark it as trusted, that does not mean you should exclude it from policies being applied. Verify explicitly is a core principle of a Zero Trust architecture. To find out more about Zero Trust and other ways to align your organization to the guiding principles, see the [Zero Trust Guidance Center](/security/zero-trust/). |
active-directory | Configurable Token Lifetimes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configurable-token-lifetimes.md | All timespans used here are formatted according to the C# [TimeSpan](/dotnet/api You can configure token lifetime policies and assign them to apps using Microsoft Graph. For more information, see the [tokenLifetimePolicy resource type](/graph/api/resources/tokenlifetimepolicy) and its associated methods. -### Service principal policies --You can use the following Microsoft Graph REST API commands for service principal policies.</br></br> --| Command | Description | -| | | -| [Assign tokenLifetimePolicy](/graph/api/application-post-tokenlifetimepolicies) | Specify the service principal object ID to link the specified policy to a service principal. | -| [List assigned tokenLifetimePolicy](/graph/api/application-list-tokenlifetimepolicies) | Specify the service principal object ID to get the policies that are assigned to a service principal. | -| [Remove tokenLifetimePolicy](/graph/api/application-delete-tokenlifetimepolicies) | Specify the service principal object ID to remove a policy from the service principal. | - ## Cmdlet reference These are the cmdlets in the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation). |
active-directory | Configure Token Lifetimes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md | -In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific app or service principal. They can also be set for multi-organizations (multi-tenant application). +In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific app. They can also be set for multi-organizations (multi-tenant application). For more information, see [configurable token lifetimes](configurable-token-lifetimes.md). Remove-MgApplicationTokenLifetimePolicyByRef -ApplicationId $applicationObjectId Remove-MgPolicyTokenLifetimePolicy -TokenLifetimePolicyId $tokenLifetimePolicyId ``` -## Create a policy and assign it to a service principal --In the following steps, you'll create a policy that requires users to authenticate less frequently in your web app. Assign the policy to service principal, which sets the lifetime of the access/ID tokens for your web app. --Create a token lifetime policy. --```http -POST https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies -Content-Type: application/json --{ - "definition": [ - "{\"TokenLifetimePolicy\":{\"Version\":1,\"AccessTokenLifetime\":\"8:00:00\"}}" - ], - "displayName": "Contoso token lifetime policy", - "isOrganizationDefault": false -} -``` --Assign the policy to a service principal. --```http -POST https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies/$ref -Content-Type: application/json --{ - "@odata.id":"https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies/22222222-2222-2222-2222-222222222222" -} -``` --List the policies on the service principal. --```http -GET https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies -``` --Remove the policy from the service principal. --```http -DELETE https://graph.microsoft.com/v1.0/servicePrincipals/11111111-1111-1111-1111-111111111111/tokenLifetimePolicies/22222222-2222-2222-2222-222222222222/$ref -``` - ## View existing policies in a tenant To see all policies that have been created in your organization, run the [Get-MgPolicyTokenLifetimePolicy](/powershell/module/microsoft.graph.identity.signins/get-mgpolicytokenlifetimepolicy) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement. |
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | Now that you've created the VM, you need to configure an Azure RBAC policy to de - **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges. - **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges. -To allow a user to log in to the VM over RDP, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role to the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resources. +To allow a user to log in to the VM over RDP, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role to the Virtual Machine resource. > [!NOTE] > Manually elevating a user to become a local administrator on the VM by adding the user as a member of the local administrators group or by running the `net localgroup administrators /add "AzureAD\UserUpn"` command is not supported. You need to use the Azure roles above to authorize VM login. |
active-directory | Entitlement Management Logic Apps Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md | Title: Trigger Logic Apps with custom extensions in entitlement management (Preview) + Title: Trigger Logic Apps with custom extensions in entitlement management description: Learn how to configure and use custom logic app workflows in entitlement management. documentationCenter: ''-# Trigger Logic Apps with custom extensions in entitlement management (Preview) +# Trigger Logic Apps with custom extensions in entitlement management [Azure Logic Apps](../../logic-apps/logic-apps-overview.md) can be used to automate custom workflows and connect apps and services in one place. Users can integrate Logic Apps with entitlement management to broaden their governance workflows beyond the core entitlement management use cases. These triggers to Logic Apps are controlled in a tab within access package polic 1. In the left menu, select **Catalogs**. -1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions (Preview)**. +1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions**. 1. In the header navigation bar, select **Add a Custom Extension**. These triggers to Logic Apps are controlled in a tab within access package polic 1. Change to the policy tab, select the policy and select **Edit**. -1. In the policy settings, go to the **Custom Extensions (Preview)** tab. +1. In the policy settings, go to the **Custom Extensions** tab. 1. In the menu below **Stage**, select the access package event you wish to use as trigger for this custom extension (Logic App). For example, if you only want to trigger the custom extension Logic App workflow when a user requests the access package, select **Request is created**. |
active-directory | F5 Big Ip Kerberos Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-advanced.md | By default, Azure AD issues tokens for users granted access to an application. T ## Configure Active Directory Kerberos constrained delegation -For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target Active Directory domain. Delegating authentication requires you provision the BIG-IP APM with a domain service account. +For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target Active Directory (AD) domain. Delegating authentication requires you to provision the BIG-IP APM with a domain service account. For this scenario, the application is hosted on server APP-VM-01 and runs in the context of a service account named web_svc_account, not the computer identity. The delegating service account assigned to the APM is F5-BIG-IP. ### Create a BIG-IP APM delegation account -Because BIG-IP doesn't support group-managed service accounts, create a standard user account for the APM service account: +The BIG-IP does not support group Managed Service Accounts (gMSA), therefore create a standard user account for the APM service account. -1. Enter the following PowerShell command. Replace the `UserPrincipalName` and `SamAccountName` with your environment values. +1. Enter the following PowerShell command. Replace the **UserPrincipalName** and **SamAccountName** values with your environment values. For better security, use a dedicated SPN that matches the host header of the application. - ```New-ADUser -Name "F5 BIG-IP Delegation Account" UserPrincipalName host/f5-big-ip.contoso.com@contoso.com SamAccountName "f5-big-ip" -PasswordNeverExpires $true Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password")``` + ```New-ADUser -Name "F5 BIG-IP Delegation Account" UserPrincipalName $HOST_SPN SamAccountName "f5-big-ip" -PasswordNeverExpires $true Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ``` -2. Create a service principal name (SPN) for the APM service account to use during delegation to the web application service account: + HOST_SPN = host/f5-big-ip.contoso.com@contoso.com - ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @Add="host/f5-big-ip.contoso.com"}``` + >[!NOTE] + >When the Host is used, any application running on the host will delegate the account whereas when HTTPS is used, it will allow only HTTP protocol-related operations. -3. Ensure the SPN shows against the APM service account: +2. Create a **Service Principal Name (SPN)** for the APM service account to use during delegation to the web application service account: - ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames``` + ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @Add="host/f5-big-ip.contoso.com"} ``` -4. Before you specify the target SPN, view its SPN configuration. The APM service account delegates for the web application: - - 1. Confirm your web application is running in the computer context, or a dedicated service account. - 2. Use the following command to query the account object in Active Directory to see its defined SPNs. Replace `<name_of_account>` with the account for your environment. 
+ >[!NOTE] + >It is mandatory to include the host/ part in the format of UserPrincipleName (host/name.domain@domain) or ServicePrincipleName (host/name.domain). - ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames``` +3. Before you specify the target SPN, view its SPN configuration. Ensure the SPN shows against the APM service account. The APM service account delegates for the web application: -5. Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is myexpenses.contoso.com, add `HTTP/myexpenses.contoso.com` to the application service account object in Active Directory: + * Confirm your web application is running in the computer context or a dedicated service account. + * For the Computer context, use the following command to query the account object in the Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment. - ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"}``` + ```Get-ADComputer -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` -Or if the app ran in the machine context, add the SPN to the object of the computer account in Active Directory: + For example: + Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames + + * For the dedicated service account, use the following command to query the account object in Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment. + + ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` - ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"}``` + For example: + Get-ADComputer -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ++ 4. If the application ran in the machine context, add the SPN to the object of the computer account in Active Directory: ++ ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` With SPNs defined, establish trust for the APM service account delegate to that service. The configuration varies depending on the topology of your BIG-IP instance and application server. With SPNs defined, establish trust for the APM service account delegate to that 1. Set trust for the APM service account to delegate authentication: - ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true``` + ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ``` 2. The APM service account needs to know the target SPN it's trusted to delegate to. Set the target SPN to the service account running your web application: - ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')}``` + ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ``` -> [!NOTE] -> You can complete these tasks with the Active Directory Users and Computers, Microsoft Management Console (MMC) snap-in, on a domain controller. 
+ >[!NOTE] + >You can complete these tasks with the Active Directory Users and Computers, Microsoft Management Console (MMC) snap-in, on a domain controller. ### Configure BIG-IP and the target application in different domains -In Windows Server 2012, and higher, cross-domain KCD uses resource-based constrained delegation. The constraints for a service are transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. It introduces a different approach at configuration delegation, which is possible when you use PowerShell or Active Directory Service Interfaces Editor (ADSI Edit). +In the Windows Server 2012 version, and higher, cross-domain KCD uses Resource-Based Constrained Delegation (RBCD). The constraints for a service are transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. This situation creates a different approach at configuration delegation, which is possible when you use PowerShell or Active Directory Service Interfaces Editor (ADSI Edit). ++You can use the PrincipalsAllowedToDelegateToAccount property of the application service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2, or later) in the same domain as the application. ++Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is myexpenses.contoso.com, add HTTP/myexpenses.contoso.com to the application service account object in Active Directory (AD): -You can use the `PrincipalsAllowedToDelegateToAccount` property of the application service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2, or later) in the same domain as the application. +```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` For the following commands, note the context. 
If the web_svc_account service runs in the context of a user account, use these commands: -```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` -```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $big-ip``` -```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` +```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` +```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` +```$big-ip Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` If the web_svc_account service runs in the context of a computer account, use these commands: -```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` -```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $big-ip``` -```Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` +```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` +```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` +```$big-ip Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)). For help diagnosing KCD-related problems, see the F5 BIG-IP deployment guide [Co ## Resources -* AskF5 article, [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) +* MyF5 article, [Active Directory Authentication](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html) * [Forget passwords, go passwordless](https://www.microsoft.com/security/business/identity/passwordless) * [What is Conditional Access?](../conditional-access/overview.md) * [Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) |
active-directory | F5 Big Ip Kerberos Easy Button | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md | Select **Deploy** to commit settings and verify the application is in your tenan ## Active Directory KCD configurations -For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target AD domain. Delegating authentication requires you provision the BIG-IP APM with a domain service account. +For the BIG-IP APM to perform SSO to the back-end application on behalf of users, configure KCD in the target Active Directory (AD) domain. Delegating authentication requires you to provision the BIG-IP APM with a domain service account. Skip this section if your APM service account and delegation are set up. Otherwise, log into a domain controller with an Admin account. For this scenario, the application is hosted on server APP-VM-01 and runs in the ### Create a BIG-IP APM delegation account -The BIG-IP doesn't support group Managed Service Accounts (gMSA), therefore create a standard user account for the APM service account. +The BIG-IP does not support group Managed Service Accounts (gMSA), therefore create a standard user account for the APM service account. -1. Replace the **UserPrincipalName** and **SamAccountName** values with the values in your environment. +1. Enter the following PowerShell command. Replace the **UserPrincipalName** and **SamAccountName** values with your environment values. For better security, use a dedicated SPN that matches the host header of the application. - ```New-ADUser -Name "F5 BIG-IP Delegation Account" -UserPrincipalName host/f5-big-ip.contoso.com@contoso.com -SamAccountName "f5-big-ip" -PasswordNeverExpires $true -Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password")``` + ```New-ADUser -Name "F5 BIG-IP Delegation Account" UserPrincipalName $HOST_SPN SamAccountName "f5-big-ip" -PasswordNeverExpires $true Enabled $true -AccountPassword (Read-Host -AsSecureString "Account Password") ``` -2. Create a **Service Principal Name (SPN)** for the APM service account for performing delegation to the web application service account. + HOST_SPN = host/f5-big-ip.contoso.com@contoso.com - ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @{Add="host/f5-big-ip.contoso.com"}``` + >[!NOTE] + >When the Host is used, any application running on the host will delegate the account whereas when HTTPS is used, it will allow only HTTP protocol-related operations. -3. Ensure the SPN shows against the APM service account. +2. Create a **Service Principal Name (SPN)** for the APM service account to use during delegation to the web application service account: ++ ```Set-AdUser -Identity f5-big-ip -ServicePrincipalNames @Add="host/f5-big-ip.contoso.com"} ``` - ```Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames``` + >[!NOTE] + >It is mandatory to include the host/ part in the format of UserPrincipleName (host/name.domain@domain) or ServicePrincipleName (host/name.domain). ++4. Before you specify the target SPN, view its SPN configuration. Ensure the SPN shows against the APM service account. The APM service account delegates for the web application: ++ * Confirm your web application is running in the computer context or a dedicated service account. 
+ * For the Computer context, use the following command to query the account object in the Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment. ++ ```Get-ADComputer -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` -4. Before specifying the target SPN that the APM service account should delegate to for the web application, you need to view its SPN configuration. Confirm your web application is running in the computer context, or a dedicated service account. Next, query that account object in AD to see its defined SPNs. Replace <name_of_account> with the account for your environment. + For example: + Get-ADUser -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames - ```Get-ADUser -identity <name_of _account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames``` + * For the dedicated service account, use the following command to query the account object in Active Directory to see its defined SPNs. Replace <name_of_account> with the account for your environment. -5. You can use an SPN defined against a web application service account, but for better security, use a dedicated SPN that matches the host header of the application. For example, the web application host header is myexpenses.contoso.com. You can add HTTP/myexpenses.contoso.com to the applications service account object in AD. + ```Get-ADUser -identity <name_of_account> -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames ``` - ```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"}``` + For example: + Get-ADComputer -identity f5-big-ip -properties ServicePrincipalNames | Select-Object -ExpandProperty ServicePrincipalNames -Or if the app ran in the machine context, add the SPN to the object of the computer account in AD. +4. If the application ran in the machine context, add the SPN to the object of the computer account in Active Directory: - ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"}``` + ```Set-ADComputer -Identity APP-VM-01 -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` -With the SPNs defined, the APM service account needs trust to delegate to that service. The configuration varies depending on the topology of your BIG-IP and application server. +With SPNs defined, establish trust for the APM service account delegate to that service. The configuration varies depending on the topology of your BIG-IP instance and application server. ### Configure BIG-IP and target application in the same domain -1. Set trust for the APM service account to delegate authentication. +1. Set trust for the APM service account to delegate authentication: - ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true``` + ```Get-ADUser -Identity f5-big-ip | Set-ADAccountControl -TrustedToAuthForDelegation $true ``` -2. The APM service account needs to know the target SPN it's trusted to delegate to, or which service for which it's allowed to request a Kerberos ticket. Set target SPN to the service account running your web application. +2. The APM service account needs to know the target SPN it's trusted to delegate to. 
Set the target SPN to the service account running your web application: - ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')}``` + ```Set-ADUser -Identity f5-big-ip -Add @{'msDS-AllowedToDelegateTo'=@('HTTP/myexpenses.contoso.com')} ``` ->[!NOTE] ->You can complete these tasks with the Active Directory Users and Computers Microsoft Management Console (MMC) on a domain controller. + >[!NOTE] + >You can complete these tasks with the Active Directory Users and Computers, Microsoft Management Console (MMC) snap-in, on a domain controller. ### BIG-IP and application in different domains -From the Windows Server 2012 version onward, cross-domain KCD uses resource-based constrained delegation (RCD). The constraints are for a service transferred from the domain administrator to the service administrator. The back-end service Administrator allows or denies SSO. This situation creates a different approach for configuration delegation, which is possible using PowerShell or ADSIEdit. +In the Windows Server 2012 version, and higher, cross-domain KCD uses Resource-Based Constrained Delegation (RBCD). The constraints for a service are transferred from the domain administrator to the service administrator. This delegation allows the back-end service administrator to allow or deny SSO. This situation creates a different approach at configuration delegation, which is possible when you use PowerShell or Active Directory Service Interfaces Editor (ADSI Edit). ++You can use the PrincipalsAllowedToDelegateToAccount property of the application service account (computer or dedicated service account) to grant delegation from BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2, or later) in the same domain as the application. ++Use an SPN defined against a web application service account. For better security, use a dedicated SPN that matches the host header of the application. For example, because the web application host header in this example is myexpenses.contoso.com, add HTTP/myexpenses.contoso.com to the application service account object in Active Directory (AD): -You can use the PrincipalsAllowedToDelegateToAccount property of the applications service account (computer or dedicated service account) to grant delegation from the BIG-IP. For this scenario, use the following PowerShell command on a domain controller (Windows Server 2012 R2+) in the same domain as the application. +```Set-AdUser -Identity web_svc_account -ServicePrincipalNames @{Add="http/myexpenses.contoso.com"} ``` -Note the context for the following examples. +For the following commands, note the context. 
-If the web_svc_account service runs in the context of a user account: +If the web_svc_account service runs in the context of a user account, use these commands: - ```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` - ```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $big-ip``` - ```Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` +```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` +```Set-ADUser -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` +```$big-ip Get-ADUser web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` -If the web_svc_account service runs in the context of a computer account: +If the web_svc_account service runs in the context of a computer account, use these commands: - ```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com``` - ```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount $big-ip``` - ```Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount``` +```$big-ip= Get-ADComputer -Identity f5-big-ip -server dc.contoso.com ``` +```Set-ADComputer -Identity web_svc_account -PrincipalsAllowedToDelegateToAccount ``` +```$big-ip Get-ADComputer web_svc_account -Properties PrincipalsAllowedToDelegateToAccount ``` For more information, see [Kerberos Constrained Delegation across domains](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/hh831477(v=ws.11)). If no error page appears, the issue is probably related to the back-end request, For more information, see: -* dev/central: [APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) -* AskF5: [Session Variables](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) +* dev/central: [APM variable assign examples](https://community.f5.com/t5/codeshare/apm-variable-assign-examples/ta-p/287962) +* MyF5: [Session Variables](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) |
active-directory | Bigpanda Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bigpanda-tutorial.md | + + Title: Azure Active Directory SSO integration with BigPanda +description: Learn how to configure single sign-on between Azure Active Directory and BigPanda. ++++++++ Last updated : 06/19/2023+++++# Azure Active Directory SSO integration with BigPanda ++In this article, you'll learn how to integrate BigPanda with Azure Active Directory (Azure AD). BigPanda transforms IT data into actionable intelligence and automation, enabling incident response teams to increase uptime, efficiency, and velocity. When you integrate BigPanda with Azure AD, you can: ++* Control in Azure AD who has access to BigPanda. +* Enable your users to be automatically signed-in to BigPanda with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for BigPanda in a test environment. BigPanda supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning. ++> [!NOTE] +> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. ++## Prerequisites ++To integrate Azure Active Directory with BigPanda, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* BigPanda single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the BigPanda application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add BigPanda from the Azure AD gallery ++Add BigPanda from the Azure AD application gallery to configure single sign-on with BigPanda. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **BigPanda** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. 
On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type the URL: + `https://bigpanda.io/SAML2` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://api.bigpanda.io/login/<ORG_NAME>/azure/callback` ++1. If you wish to configure the application in **SP** initiated mode, then perform the following step: ++ In the **Sign on URL** textbox, type a URL using the following pattern: + `https://api.bigpanda.io/login/<INSTANCE>` ++ > [!NOTE] + > These values are not real. Update these values with the actual Reply URL and Sign on URL. Contact [BigPanda support team](mailto:support@bigpanda.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++ ++1. On the **Set up BigPanda** section, copy the appropriate URL(s) based on your requirement. ++ ++## Configure BigPanda SSO ++To configure single sign-on on **BigPanda** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [BigPanda support team](mailto:support@bigpanda.io). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create BigPanda test user ++In this section, a user called B.Simon is created in BigPanda. BigPanda supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in BigPanda, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++#### SP initiated: ++* Click on **Test this application** in Azure portal. This will redirect to BigPanda Sign-on URL where you can initiate the login flow. ++* Go to BigPanda Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in Azure portal and you should be automatically signed in to the BigPanda for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the BigPanda tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the BigPanda for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure BigPanda you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Civic Eye Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/civic-eye-sso-tutorial.md | + + Title: Azure Active Directory SSO integration with CivicEye SSO +description: Learn how to configure single sign-on between Azure Active Directory and CivicEye SSO. ++++++++ Last updated : 06/16/2023+++++# Azure Active Directory SSO integration with CivicEye SSO ++In this article, you'll learn how to integrate CivicEye SSO with Azure Active Directory (Azure AD). Provide SSO functionality for our CivicEye Platform customers through their existing AD deployment. When you integrate CivicEye SSO with Azure AD, you can: ++* Control in Azure AD who has access to CivicEye SSO. +* Enable your users to be automatically signed-in to CivicEye SSO with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for CivicEye SSO in a test environment. CivicEye SSO supports **SP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with CivicEye SSO, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* CivicEye SSO single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the CivicEye SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add CivicEye SSO from the Azure AD gallery ++Add CivicEye SSO from the Azure AD application gallery to configure single sign-on with CivicEye SSO. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **CivicEye SSO** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a URL using the following pattern: + `https://<CustomerName>.civiceye.com` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://<CustomerName>.civiceye.com/consumer` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://<CustomerName>.civiceye.com` ++ > [!Note] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [CivicEye SSO support team](mailto:help@civiceye.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. ++ ++1. On the **Set up CivicEye SSO** section, copy the appropriate URL(s) based on your requirement. ++ ++## Configure CivicEye SSO ++To configure single sign-on on **CivicEye SSO** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [CivicEye SSO support team](mailto:help@civiceye.com). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create CivicEye SSO test user ++In this section, you create a user called Britta Simon at CivicEye SSO. Work with [CivicEye SSO support team](mailto:help@civiceye.com) to add the users in the CivicEye SSO platform. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to CivicEye SSO Sign-on URL where you can initiate the login flow. ++* Go to CivicEye SSO Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the CivicEye SSO tile in the My Apps, this will redirect to CivicEye SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure CivicEye SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Code42 Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * A Code42 tenant with Identity Management enabled.-* A Code42 user account with [Customer Cloud Admin](https://support.code42.com/Administrator/Cloud/Monitoring_and_managing/Roles_reference#Customer_Cloud_Admin) permission. +* A Code42 user account with [Customer Cloud Admin](https://support.code42.com/hc/en-us/articles/14827655905943-Roles-reference#Customer_Cloud_Admin) permission. ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). The scenario outlined in this tutorial assumes that you already have the followi ## Step 2. Configure Code42 to support provisioning with Azure AD -This section guides you through the steps to configure Azure AD as a provisioning provider in the Identity Management section of Code42's console. Doing so will enable Code42 to securely receive provisioning requests from Azure AD. It is recommended to review [Code42's support documentation](https://support.code42.com/Administrator/Cloud/Configuring/Introduction_to_SCIM_provisioning/How_to_provision_users_to_Code42_from_Azure_AD) before provisioning with Azure AD. +This section guides you through the steps to configure Azure AD as a provisioning provider in the Identity Management section of Code42's console. Doing so will enable Code42 to securely receive provisioning requests from Azure AD. It is recommended to review [Code42's support documentation](https://support.code42.com/hc/en-us/articles/14827670461207-How-to-provision-users-to-Code42-from-Azure-AD) before provisioning with Azure AD. ### To create a provisioning provider in Code42's console: Once you've configured provisioning, use the following resources to monitor your * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)-* [Configure organization assignments based on SCIM groups in Code42](https://support.code42.com/Administrator/Cloud/Configuring/Introduction_to_SCIM_provisioning/How_to_provision_users_to_Code42_from_Azure_AD#Step_6:_Choose_an_organization_mapping_method) -* [Configure role assignments based on SCIM groups in Code42](https://support.code42.com/Administrator/Cloud/Configuring/Introduction_to_SCIM_provisioning/How_to_provision_users_to_Code42_from_Azure_AD#Step_7:_Configure_role_mapping) +* [Configure organization assignments based on SCIM groups in Code42](https://support.code42.com/hc/en-us/articles/14827670461207-How-to-provision-users-to-Code42-from-Azure-AD#step-6-map-users-to-organizations-and-roles-using-scim-groups-0-18) +* [Configure role assignments based on SCIM groups in Code42](https://support.code42.com/hc/en-us/articles/14827670461207-How-to-provision-users-to-Code42-from-Azure-AD#apply-organization-and-role-mappings-0-21) ## Next steps |
active-directory | Colloquial Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/colloquial-tutorial.md | + + Title: Azure Active Directory SSO integration with Colloquial +description: Learn how to configure single sign-on between Azure Active Directory and Colloquial. ++++++++ Last updated : 06/20/2023+++++# Azure Active Directory SSO integration with Colloquial ++In this article, you'll learn how to integrate Colloquial with Azure Active Directory (Azure AD). Colloquial enables companies to manage the portfolio of their capabilities, processes, information, apps or technology. When you integrate Colloquial with Azure AD, you can: ++* Control in Azure AD who has access to Colloquial. +* Enable your users to be automatically signed-in to Colloquial with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Colloquial in a test environment. Colloquial supports **SP** initiated single sign-on and **Just In Time** user provisioning. ++## Prerequisites ++To integrate Azure Active Directory with Colloquial, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Colloquial single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Colloquial application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. ++### Add Colloquial from the Azure AD gallery ++Add Colloquial from the Azure AD application gallery to configure single sign-on with Colloquial. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Colloquial** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++  ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. 
In the **Identifier** textbox, type a value using the following pattern: + `colloquial-<Customer_ID>` ++ b. In the **Reply URL** textbox, type a URL using the following pattern: + `https://app.colloquial.io/auth/saml/<Customer_ID>/callback` ++ c. In the **Sign on URL** textbox, type a URL using the following pattern: + `https://app.colloquial.io/login/<Customer_ID>/saml` ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Colloquial support team](mailto:support@colloquial.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ++1. On the **Set up Colloquial** section, copy the appropriate URL(s) based on your requirement. ++ ++## Configure Colloquial SSO ++To configure single sign-on on **Colloquial** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Colloquial support team](mailto:support@colloquial.io). They set this setting to have the SAML SSO connection set properly on both sides. ++### Create Colloquial test user ++In this section, a user called B.Simon is created in Colloquial. Colloquial supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Colloquial, a new one is commonly created after authentication. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with following options. ++* Click on **Test this application** in Azure portal. This will redirect to Colloquial Sign-on URL where you can initiate the login flow. ++* Go to Colloquial Sign-on URL directly and initiate the login flow from there. ++* You can use Microsoft My Apps. When you click the Colloquial tile in the My Apps, this will redirect to Colloquial Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Colloquial you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
active-directory | Mixpanel Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mixpanel-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 3. Determine what data to [map between Azure AD and Mixpanel](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Mixpanel to support provisioning with Azure AD-1. For setting up SSO and claiming a domain refer [this](https://help.mixpanel.com/hc/articles/360036428871-Single-Sign-On). +1. To set up SSO and claim a domain, refer to [this Mixpanel documentation](https://docs.mixpanel.com/docs/admin/sso). 2. After that, generate a SCIM token in the SCIM tab of the access security section of your organization settings.  Once you've configured provisioning, use the following resources to monitor your ## Next steps -* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) +* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
aks | Csi Migrate In Tree Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md | Migration from in-tree to CSI is supported by creating a static volume: ## Next steps -For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][aks-storage-backups-best-practices]. -+- For more information about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][aks-storage-backups-best-practices]. +- Protect your newly migrated CSI Driver based PVs by [backing them up using Azure Backup for AKS](../backup/azure-kubernetes-service-cluster-backup.md). <!-- LINKS - internal --> [install-azure-cli]: /cli/azure/install-azure-cli [aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-the-cluster |
aks | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md | To enable this architecture, each AKS deployment spans two resource groups: 1. You create the first resource group. This group contains only the Kubernetes service resource. The AKS resource provider automatically creates the second resource group during deployment. An example of the second resource group is *MC_myResourceGroup_myAKSCluster_eastus*. For information on how to specify the name of this second resource group, see the next section. 2. The second resource group, known as the *node resource group*, contains all of the infrastructure resources associated with the cluster. These resources include the Kubernetes node VMs, virtual networking, and storage. By default, the node resource group has a name like *MC_myResourceGroup_myAKSCluster_eastus*. AKS automatically deletes the node resource group whenever you delete the cluster. You should only use this resource group for resources that share the cluster's lifecycle. + > [!NOTE] + > Modifying any resource under the node resource group in the AKS cluster is an unsupported action and causes cluster operation failures. You can prevent changes from being made to the node resource group by [blocking users from modifying resources](cluster-configuration.md#fully-managed-resource-group-preview) managed by the AKS cluster. + ## Can I provide my own name for the AKS node resource group? Yes. By default, AKS names the node resource group *MC_resourcegroupname_clustername_location*, but you can also provide your own name. |
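If you want to confirm which node resource group a cluster is using before applying such a lock, you can query it from the Azure CLI. The following is a minimal sketch, assuming a cluster named *myAKSCluster* in *myResourceGroup* (placeholder names):

```azurecli-interactive
# Query the auto-generated node resource group name for an existing cluster
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv

# Inspect (read-only) the infrastructure resources AKS manages in that group
az resource list --resource-group MC_myResourceGroup_myAKSCluster_eastus --output table
```

Treat the listed resources as read-only; make changes through the AKS API (for example, `az aks scale` or `az aks update`) rather than editing them directly.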
aks | Kubernetes Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-service-principal.md | Title: Use a service principal with Azure Kubernetes Services (AKS) -description: Create and manage an Azure Active Directory service principal with a cluster in Azure Kubernetes Service (AKS) +description: Learn how to create and manage an Azure Active Directory service principal with a cluster in Azure Kubernetes Service (AKS). Previously updated : 06/08/2022 Last updated : 06/27/2023 #Customer intent: As a cluster operator, I want to understand how to create a service principal and delegate permissions for AKS to access required resources. In large enterprise environments, the user that deploys the cluster (or CI/CD system), may not have permissions to create this service principal automatically when the cluster is created.-To access other Azure Active Directory (Azure AD) resources, an AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity][managed-identity-resources-overview]. A service principal or managed identity is needed to dynamically create and manage other Azure resources such as an Azure load balancer or container registry (ACR). +An AKS cluster requires either an [Azure Active Directory (AD) service principal][aad-service-principal] or a [managed identity][managed-identity-resources-overview] to dynamically create and manage other Azure resources, such as an Azure Load Balancer or Azure Container Registry (ACR). -Managed identities are the recommended way to authenticate with other resources in Azure, and is the default authentication method for your AKS cluster. For more information about using a managed identity with your cluster, see [Use a system-assigned managed identity][use-managed-identity]. +> [!NOTE] +> We recommend using managed identities to authenticate with other resources in Azure, and they're the default authentication method for your AKS cluster. For more information about using a managed identity with your cluster, see [Use a system-assigned managed identity][use-managed-identity]. -This article shows how to create and use a service principal for your AKS clusters. +This article shows you how to create and use a service principal for your AKS clusters. ## Before you begin -To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant, and to assign the application to a role in your subscription. If you don't have the necessary permissions, you need to ask your Azure AD or subscription administrator to assign the necessary permissions, or pre-create a service principal for you to use with the AKS cluster. +To create an Azure AD service principal, you must have permissions to register an application with your Azure AD tenant and to assign the application to a role in your subscription. If you don't have the necessary permissions, you need to ask your Azure AD or subscription administrator to assign the necessary permissions or pre-create a service principal for you to use with your AKS cluster. If you're using a service principal from a different Azure AD tenant, there are other considerations around the permissions available when you deploy the cluster. You may not have the appropriate permissions to read and write directory information. 
For more information, see [What are the default user permissions in Azure Active Directory?][azure-ad-permissions] ## Prerequisites -Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. --Azure PowerShell version 5.0.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module]. +* If using Azure CLI, you need Azure CLI version 2.0.59 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. +* If using Azure PowerShell, you need Azure PowerShell version 5.0.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install the Azure Az PowerShell module][install-the-azure-az-powershell-module]. ## Manually create a service principal ### [Azure CLI](#tab/azure-cli) -To manually create a service principal with the Azure CLI, use the [`az ad sp create-for-rbac`][az-ad-sp-create] command. +1. Create a service principal using the [`az ad sp create-for-rbac`][az-ad-sp-create] command. -```azurecli-interactive -az ad sp create-for-rbac --name myAKSClusterServicePrincipal -``` + ```azurecli-interactive + az ad sp create-for-rbac --name myAKSClusterServicePrincipal + ``` -The output is similar to the following example. Copy the values for `appId` and `password`. These values are used when you create an AKS cluster in the next section. + Your output should be similar to the following example output: -```json -{ - "appId": "559513bd-0c19-4c1a-87cd-851a26afd5fc", - "displayName": "myAKSClusterServicePrincipal", - "name": "http://myAKSClusterServicePrincipal", - "password": "e763725a-5eee-40e8-a466-dc88d980f415", - "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db48" -} -``` + ```json + { + "appId": "559513bd-0c19-4c1a-87cd-851a26afd5fc", + "displayName": "myAKSClusterServicePrincipal", + "name": "http://myAKSClusterServicePrincipal", + "password": "e763725a-5eee-40e8-a466-dc88d980f415", + "tenant": "72f988bf-86f1-41af-91ab-2d7cd011db48" + } + ``` ++2. Copy the values for `appId` and `password` from the output. You use these when creating an AKS cluster in the next section. ### [Azure PowerShell](#tab/azure-powershell) -To manually create a service principal with Azure PowerShell, use the [`New-AzADServicePrincipal`][new-azadserviceprincipal] command. +1. Create a service principal using the [`New-AzADServicePrincipal`][new-azadserviceprincipal] command. -```azurepowershell-interactive -New-AzADServicePrincipal -DisplayName myAKSClusterServicePrincipal -OutVariable sp -``` + ```azurepowershell-interactive + New-AzADServicePrincipal -DisplayName myAKSClusterServicePrincipal -OutVariable sp + ``` -The output is similar to the following example. The values are also stored in a variable that is used when you create an AKS cluster in the next section. 
+ Your output should be similar to the following example output: -```Output -Secret : System.Security.SecureString -ServicePrincipalNames : {559513bd-0c19-4c1a-87cd-851a26afd5fc, http://myAKSClusterServicePrincipal} -ApplicationId : 559513bd-0c19-4c1a-87cd-851a26afd5fc -ObjectType : ServicePrincipal -DisplayName : myAKSClusterServicePrincipal -Id : 559513bd-0c19-4c1a-87cd-851a26afd5fc -Type : -``` + ```output + Secret : System.Security.SecureString + ServicePrincipalNames : {559513bd-0c19-4c1a-87cd-851a26afd5fc, http://myAKSClusterServicePrincipal} + ApplicationId : 559513bd-0c19-4c1a-87cd-851a26afd5fc + ObjectType : ServicePrincipal + DisplayName : myAKSClusterServicePrincipal + Id : 559513bd-0c19-4c1a-87cd-851a26afd5fc + Type : + ``` -To decrypt the value stored in the **Secret** secure string, run the following command: + The values are stored in a variable that you use when creating an AKS cluster in the next section. -```azurepowershell-interactive -$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret) -[System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR) -``` +2. Decrypt the value stored in the **Secret** secure string using the following command. -For more information, see [Create an Azure service principal with Azure PowerShell][create-an-azure-service-principal-with-azure-powershell] + ```azurepowershell-interactive + $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($sp.Secret) + [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR) + ``` For more information, see [Create an Azure service principal with Azure PowerShe ### [Azure CLI](#tab/azure-cli) -To use an existing service principal when you create an AKS cluster using the [`az aks create`][az-aks-create] command, use the `--service-principal` and `--client-secret` parameters to specify the `appId` and `password` from the output of the [`az ad sp create-for-rbac`][az-ad-sp-create] command: +* Use an existing service principal for a new AKS cluster using the [`az aks create`][az-aks-create] command and use the `--service-principal` and `--client-secret` parameters to specify the `appId` and `password` from the output you received the previous section. -```azurecli-interactive -az aks create \ - --resource-group myResourceGroup \ - --name myAKSCluster \ - --service-principal <appId> \ - --client-secret <password> -``` + ```azurecli-interactive + az aks create \ + --resource-group myResourceGroup \ + --name myAKSCluster \ + --service-principal <appId> \ + --client-secret <password> + ``` -> [!NOTE] -> If you're using an existing service principal with customized secret, ensure the secret is not longer than 190 bytes. + > [!NOTE] + > If you're using an existing service principal with customized secret, make sure the secret isn't longer than 190 bytes. ### [Azure PowerShell](#tab/azure-powershell) -To use an existing service principal when you create an AKS cluster, you'll need to convert the service principal `ApplicationId` and `Secret` to a **PSCredential** object as shown in the following example. +1. Convert the service principal `ApplicationId` and `Secret` to a **PSCredential** object using the following command. 
-```azurepowershell-interactive -$Cred = New-Object -TypeName System.Management.Automation.PSCredential ($sp.ApplicationId, $sp.Secret) -``` + ```azurepowershell-interactive + $Cred = New-Object -TypeName System.Management.Automation.PSCredential ($sp.ApplicationId, $sp.Secret) + ``` -When running the `New-AzAksCluster` command, you specify the `ServicePrincipalIdAndSecret` parameter with the previously created **PSCredential** object as its value. +2. Use an existing service principal for a new AKS cluster using the [`New-AzAksCluster`][new-azakscluster] command and specify the `ServicePrincipalIdAndSecret` parameter with the previously created **PSCredential** object as its value. -```azurepowershell-interactive -New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -ServicePrincipalIdAndSecret $Cred -``` + ```azurepowershell-interactive + New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -ServicePrincipalIdAndSecret $Cred + ``` -> [!NOTE] -> If you're using an existing service principal with customized secret, ensure the secret is no longer than 190 bytes. + > [!NOTE] + > If you're using an existing service principal with customized secret, make sure the secret isn't longer than 190 bytes. ## Delegate access to other Azure resources -The service principal for the AKS cluster can be used to access other resources. For example, if you want to deploy your AKS cluster into an existing Azure virtual network subnet or connect to Azure Container Registry (ACR), you need to delegate access to those resources to the service principal. +You can use the service principal for the AKS cluster to access other resources. For example, if you want to deploy your AKS cluster into an existing Azure virtual network subnet or connect to Azure Container Registry (ACR), you need to delegate access to those resources to the service principal. Permission granted to a cluster using a system-assigned managed identity may take up 60 minutes to populate. ### [Azure CLI](#tab/azure-cli) -To delegate permissions, create a role assignment using the [`az role assignment create`][az-role-assignment-create] command. Assign the `appId` to a particular scope, such as a resource group or virtual network resource. A role then defines what permissions the service principal has on the resource, as shown in the following example: +* Create a role assignment using the [`az role assignment create`][az-role-assignment-create] command. Assign the `appId` to a particular scope, such as a resource group or virtual network resource. The role defines what permissions the service principal has on the resource. -```azurecli -az role assignment create --assignee <appId> --scope <resourceScope> --role Contributor -``` + > [!NOTE] + > The `--scope` for a resource needs to be a full resource ID, such as */subscriptions/\<guid\>/resourceGroups/myResourceGroup* or */subscriptions/\<guid\>/resourceGroups/myResourceGroupVnet/providers/Microsoft.Network/virtualNetworks/myVnet*. -The `--scope` for a resource needs to be a full resource ID, such as */subscriptions/\<guid\>/resourceGroups/myResourceGroup* or */subscriptions/\<guid\>/resourceGroups/myResourceGroupVnet/providers/Microsoft.Network/virtualNetworks/myVnet* + ```azurecli-interactive + az role assignment create --assignee <appId> --scope <resourceScope> --role Contributor + ``` ### [Azure PowerShell](#tab/azure-powershell) -To delegate permissions, create a role assignment using the [`New-AzRoleAssignment`][new-azroleassignment] command. 
Assign the `ApplicationId` to a particular scope, such as a resource group or virtual network resource. A role then defines what permissions the service principal has on the resource, as shown in the following example: +* Create a role assignment using the [`New-AzRoleAssignment`][new-azroleassignment] command. Assign the `ApplicationId` to a particular scope, such as a resource group or virtual network resource. The role defines what permissions the service principal has on the resource. -```azurepowershell-interactive -New-AzRoleAssignment -ApplicationId <ApplicationId> -Scope <resourceScope> -RoleDefinitionName Contributor -``` + > [!NOTE] + > The `Scope` for a resource needs to be a full resource ID, such as */subscriptions/\<guid\>/resourceGroups/myResourceGroup* or */subscriptions/\<guid\>/resourceGroups/myResourceGroupVnet/providers/Microsoft.Network/virtualNetworks/myVnet* -The `Scope` for a resource needs to be a full resource ID, such as */subscriptions/\<guid\>/resourceGroups/myResourceGroup* or */subscriptions/\<guid\>/resourceGroups/myResourceGroupVnet/providers/Microsoft.Network/virtualNetworks/myVnet* + ```azurepowershell-interactive + New-AzRoleAssignment -ApplicationId <ApplicationId> -Scope <resourceScope> -RoleDefinitionName Contributor + ``` -> [!NOTE] -> If you have removed the Contributor role assignment from the node resource group, the operations below may fail. -> Permission granted to a cluster using a system-assigned managed identity may take up 60 minutes to populate. - The following sections detail common delegations that you may need to assign. ### Azure Container Registry ### [Azure CLI](#tab/azure-cli) -If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. +If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. We recommend using the [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. ### [Azure PowerShell](#tab/azure-powershell) -If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. Currently, the recommended configuration is to use the [`New-AzAksCluster`][new-azakscluster] or [`Set-AzAksCluster`][set-azakscluster] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. +If you use Azure Container Registry (ACR) as your container image store, you need to grant permissions to the service principal for your AKS cluster to read and pull images. 
We recommend using the [`New-AzAksCluster`][new-azakscluster] or [`Set-AzAksCluster`][set-azakscluster] command to integrate with a registry and assign the appropriate role for the service principal. For detailed steps, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-to-acr]. You may use advanced networking where the virtual network and subnet or public I ### Storage -If you need to access existing disk resources in another resource group, assign one of the following set of role permissions: +If you need to access existing disk resources in another resource group, assign one of the following sets of role permissions: -- Create a [custom role][rbac-custom-role] and define the following role permissions:- - *Microsoft.Compute/disks/read* - - *Microsoft.Compute/disks/write* -- Or, assign the [Virtual Machine Contributor][rbac-disk-contributor] built-in role on the resource group+* Create a [custom role][rbac-custom-role] and define the *Microsoft.Compute/disks/read* and *Microsoft.Compute/disks/write* role permissions, or +* Assign the [Virtual Machine Contributor][rbac-disk-contributor] built-in role on the resource group. ### Azure Container Instances If you use Virtual Kubelet to integrate with AKS and choose to run Azure Contain When using AKS and an Azure AD service principal, consider the following: -- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.-- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time.-- Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint.-- When you specify the service principal **Client ID**, use the value of the `appId`.-- On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json`-- When you delete an AKS cluster that was created by [`az aks create`][az-aks-create], the service principal created automatically isn't deleted.- - To delete the service principal, query for your clusters *servicePrincipalProfile.clientId* and then delete it using the [`az ad sp delete`][az-ad-sp-delete] command. Replace the values for the `-g` parameter for the resource group name, and `-n` parameter for the cluster name: +* The service principal for Kubernetes is a part of the cluster configuration, but don't use this identity to deploy the cluster. +* By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. +* Every service principal is associated with an Azure AD application. You can associate the service principal for a Kubernetes cluster with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. +* When you specify the service principal **Client ID**, use the value of the `appId`. +* On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the `/etc/kubernetes/azure.json` file. 
+* When you delete an AKS cluster that was created using the [`az aks create`][az-aks-create] command, the service principal created isn't automatically deleted. + * To delete the service principal, query for your cluster's *servicePrincipalProfile.clientId* and delete it using the [`az ad sp delete`][az-ad-sp-delete] command. Replace the values for the `-g` parameter for the resource group name and `-n` parameter for the cluster name: ```azurecli az ad sp delete --id $(az aks show -g myResourceGroup -n myAKSCluster --query servicePrincipalProfile.clientId -o tsv) When using AKS and an Azure AD service principal, consider the following: When using AKS and an Azure AD service principal, consider the following: -- The service principal for Kubernetes is a part of the cluster configuration. However, don't use this identity to deploy the cluster.-- By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time.-- Every service principal is associated with an Azure AD application. The service principal for a Kubernetes cluster can be associated with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint.-- When you specify the service principal **Client ID**, use the value of the `ApplicationId`.-- On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the file `/etc/kubernetes/azure.json`-- When you delete an AKS cluster that was created by [`New-AzAksCluster`][new-azakscluster], the service principal created automatically isn't deleted.- - To delete the service principal, query for your clusters *ServicePrincipalProfile.ClientId* and then delete it using the [`Remove-AzADServicePrincipal`][remove-azadserviceprincipal] command. Replace the values for the `-ResourceGroupName` parameter for the resource group name, and `-Name` parameter for the cluster name: +* The service principal for Kubernetes is a part of the cluster configuration, but don't use this identity to deploy the cluster. +* By default, the service principal credentials are valid for one year. You can [update or rotate the service principal credentials][update-credentials] at any time. +* Every service principal is associated with an Azure AD application. You can associate the service principal for a Kubernetes cluster with any valid Azure AD application name (for example: *https://www.contoso.org/example*). The URL for the application doesn't have to be a real endpoint. +* When you specify the service principal **Client ID**, use the value of the `ApplicationId`. +* On the agent node VMs in the Kubernetes cluster, the service principal credentials are stored in the `/etc/kubernetes/azure.json` file. +* When you delete an AKS cluster that was created using the [`New-AzAksCluster`][new-azakscluster], the service principal created isn't automatically deleted. + * To delete the service principal, query for your cluster's *ServicePrincipalProfile.ClientId* and delete it using the [`Remove-AzADServicePrincipal`][remove-azadserviceprincipal] command. 
Replace the values for the `-ResourceGroupName` parameter for the resource group name and `-Name` parameter for the cluster name: ```azurepowershell-interactive $ClientId = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster ).ServicePrincipalProfile.ClientId When using AKS and an Azure AD service principal, consider the following: ### [Azure CLI](#tab/azure-cli) -The service principal credentials for an AKS cluster are cached by the Azure CLI. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [`az aks create`][az-aks-create] may indicate a problem with the cached service principal credentials: +Azure CLI caches the service principal credentials for AKS clusters. If these credentials expire, you encounter errors during AKS cluster deployment. If you run the [`az aks create`][az-aks-create] command and receive an error message similar to the following, it may indicate a problem with the cached service principal credentials: ```azurecli Operation failed with status: 'Bad Request'. Details: The credentials in ServicePrincipalProfile were invalid. Please see htt (Details: adal: Refresh request failed. Status Code = '401'. ``` -Check the expiration date of your service principal credentials using the [`az ad app credential list`][az-ad-app-credential-list] command with the `"[].endDateTime"` query. +You can check the expiration date of your service principal credentials using the [`az ad app credential list`][az-ad-app-credential-list] command with the `"[].endDateTime"` query. ```azurecli az ad app credential list --id <app-id> --query "[].endDateTime" -o tsv The default expiration time for the service principal credentials is one year. I ### [Azure PowerShell](#tab/azure-powershell) -The service principal credentials for an AKS cluster are cached by Azure PowerShell. If these credentials have expired, you encounter errors during deployment of the AKS cluster. The following error message when running [`New-AzAksCluster`][new-azakscluster] may indicate a problem with the cached service principal credentials: +Azure PowerShell caches the service principal credentials for AKS clusters. If these credentials expire, you encounter errors during AKS cluster deployment. If you run the [`New-AzAksCluster`][new-azakscluster] command and receive an error message similar to the following, it may indicate a problem with the cached service principal credentials: ```azurepowershell-interactive Operation failed with status: 'Bad Request'. Details: The credentials in ServicePrincipalProfile were invalid. Please see htt (Details: adal: Refresh request failed. Status Code = '401'. ``` -Check the expiration date of your service principal credentials using the [Get-AzADAppCredential][get-azadappcredential] command. The output will show you the `StartDateTime` of your credentials. +You can check the expiration date of your service principal credentials using the [Get-AzADAppCredential][get-azadappcredential] command. The output shows you the `StartDateTime` of your credentials. 
```azurepowershell-interactive Get-AzADAppCredential -ApplicationId <ApplicationId> For information on how to update the credentials, see [Update or rotate the cred <!-- LINKS - internal --> [aad-service-principal]:../active-directory/develop/app-objects-and-service-principals.md-[acr-intro]: ../container-registry/container-registry-intro.md [az-ad-sp-create]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac [az-ad-sp-delete]: /cli/azure/ad/sp#az_ad_sp_delete [az-ad-app-credential-list]: /cli/azure/ad/app/credential#az_ad_app_credential_list-[azure-load-balancer-overview]: ../load-balancer/load-balancer-overview.md [install-azure-cli]: /cli/azure/install-azure-cli [service-principal]:../active-directory/develop/app-objects-and-service-principals.md-[user-defined-routes]: ../load-balancer/load-balancer-overview.md -[az-ad-app-list]: /cli/azure/ad/app#az_ad_app_list -[az-ad-app-delete]: /cli/azure/ad/app#az_ad_app_delete [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-update]: /cli/azure/aks#az_aks_update [rbac-network-contributor]: ../role-based-access-control/built-in-roles.md#network-contributor For information on how to update the credentials, see [Update or rotate the cred [new-azakscluster]: /powershell/module/az.aks/new-azakscluster [new-azadserviceprincipal]: /powershell/module/az.resources/new-azadserviceprincipal [get-azadappcredential]: /powershell/module/az.resources/get-azadappcredential-[create-an-azure-service-principal-with-azure-powershell]: /powershell/azure/create-azure-service-principal-azureps [new-azroleassignment]: /powershell/module/az.resources/new-azroleassignment [set-azakscluster]: /powershell/module/az.aks/set-azakscluster [remove-azadserviceprincipal]: /powershell/module/az.resources/remove-azadserviceprincipal |
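The service principal guidance above recommends attaching a container registry with `az aks create` or `az aks update` instead of hand-crafting role assignments. A minimal sketch of that flow, assuming an existing registry named *myContainerRegistry* (a placeholder name):

```azurecli-interactive
# Attach an existing Azure Container Registry so the cluster's identity can pull images
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --attach-acr myContainerRegistry
```

Behind the scenes, this grants the AcrPull role on the registry to the cluster's identity, so you don't need to create that role assignment yourself.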
aks | Open Service Mesh Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md | Last updated 03/23/2022 # Integrations with Open Service Mesh on Azure Kubernetes Service (AKS) -The Open Service Mesh (OSM) add-on integrates with features provided by Azure as well as open source projects. +The Open Service Mesh (OSM) add-on integrates with features provided by Azure and some open source projects. > [!IMPORTANT] > Integrations with open source projects aren't covered by the [AKS support policy][aks-support-policy]. ## Ingress -Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with [Web Application Routing][web-app-routing], [NGINX ingress][osm-nginx], or [Contour ingress][osm-contour]. Open source projects integrating with OSM are not covered by the [AKS support policy][aks-support-policy]. +Ingress allows for traffic external to the mesh to be routed to services within the mesh. With OSM, you can configure most ingress solutions to work with your mesh, but OSM works best with one of the following solutions: -At this time, [Azure Gateway Ingress Controller (AGIC)][agic] only works for HTTP backends. If you configure OSM to use AGIC, AGIC will not be used for other backends such as HTTPS and mTLS. +* [Web Application Routing][web-app-routing] +* [NGINX ingress][osm-nginx] +* [Contour ingress][osm-contour] -### Using the Azure Gateway Ingress Controller (AGIC) with the OSM add-on for HTTP ingress +> [!NOTE] +> At this time, [Azure Gateway Ingress Controller (AGIC)][agic] only works for HTTP backends. If you configure OSM to use AGIC, AGIC won't be used for other backends, such as HTTPS and mTLS. ++### Use the Azure Gateway Ingress Controller (AGIC) with the OSM add-on for HTTP ingress > [!IMPORTANT]-> You can't configure [Azure Gateway Ingress Controller (AGIC)][agic] for HTTPS ingress. --After installing the AGIC ingress controller, create a namespace for the application service, add it to the mesh using the OSM CLI, and deploy the application service to that namespace: --```console -# Create a namespace -kubectl create ns httpbin --# Add the namespace to the mesh -osm namespace add httpbin --# Deploy the application --export RELEASE_BRANCH=release-v1.2 -kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/$RELEASE_BRANCH/manifests/samples/httpbin/httpbin.yaml -n httpbin -``` --Verify that the pods are up and running, and have the envoy sidecar injected: --```console -kubectl get pods -n httpbin -``` --Example output: --```console -NAME READY STATUS RESTARTS AGE -httpbin-7c6464475-9wrr8 2/2 Running 0 6d20h -``` --```console -kubectl get svc -n httpbin -``` --Example output: --```console -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -httpbin ClusterIP 10.0.92.135 <none> 14001/TCP 6d20h -``` --Next, deploy the following `Ingress` and `IngressBackend` configurations to allow external clients to access the `httpbin` service on port `14001`. 
--```console -kubectl apply -f <<EOF -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: httpbin - namespace: httpbin - annotations: - kubernetes.io/ingress.class: azure/application-gateway -spec: - rules: - - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: httpbin - port: - number: 14001 --kind: IngressBackend -apiVersion: policy.openservicemesh.io/v1alpha1 -metadata: - name: httpbin - namespace: httpbin -spec: - backends: - - name: httpbin - port: - number: 14001 # targetPort of httpbin service - protocol: http - sources: - - kind: IPRange - name: 10.0.0.0/8 -EOF -``` --Ensure that both the Ingress and IngressBackend objects have been successfully deployed: --```console -kubectl get ingress -n httpbin -``` --Example output: --```console -NAME CLASS HOSTS ADDRESS PORTS AGE -httpbin <none> * 20.85.173.179 80 6d20h -``` --```console -kubectl get ingressbackend -n httpbin -``` --Example output: --```console -NAME STATUS -httpbin committed -``` --Use `kubectl` to display the external IP address of the ingress service. -```console -kubectl get ingress -n httpbin -``` --Use `curl` to verify you can access the `httpbin` service using the external IP address of the ingress service. -```console -curl -sI http://<external-ip>/get -``` --Confirm you receive a response with `status 200`. +> You can't configure [Azure Gateway Ingress Controller (AGIC)][agic] for HTTPS ingress. ++#### Create a namespace and deploy the application service ++1. Installing the AGIC ingress controller. +2. Create a namespace for the application service using the `kubectl create ns` command. ++ ```console + kubectl create ns httpbin + ``` ++3. Add the namespace to the mesh using the `osm namespace add` OSM CLI command. ++ ```console + osm namespace add httpbin + ``` ++4. Deploy the application service to the namespace using the `kubectl apply` command. ++ ```console + export RELEASE_BRANCH=release-v1.2 + kubectl apply -f https://raw.githubusercontent.com/openservicemesh/osm-docs/$RELEASE_BRANCH/manifests/samples/httpbin/httpbin.yaml -n httpbin + ``` ++5. Verify the pods are up and running and have the envoy sidecar injected using the `kubectl get pods` command. ++ ```console + kubectl get pods -n httpbin + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME READY STATUS RESTARTS AGE + httpbin-7c6464475-9wrr8 2/2 Running 0 6d20h + ``` ++6. List the details of the service using the `kubectl get svc` command. ++ ```console + kubectl get svc -n httpbin + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + httpbin ClusterIP 10.0.92.135 <none> 14001/TCP 6d20h + ``` ++#### Deploy the ingress configurations and verify access to the application service ++1. Deploy the following `Ingress` and `IngressBackend` configurations to allow external clients to access the `httpbin` service on port `14001` using the `kubectl apply` command. 
++ ```console + kubectl apply -f <<EOF + apiVersion: networking.k8s.io/v1 + kind: Ingress + metadata: + name: httpbin + namespace: httpbin + annotations: + kubernetes.io/ingress.class: azure/application-gateway + spec: + rules: + - http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: httpbin + port: + number: 14001 + + kind: IngressBackend + apiVersion: policy.openservicemesh.io/v1alpha1 + metadata: + name: httpbin + namespace: httpbin + spec: + backends: + - name: httpbin + port: + number: 14001 # targetPort of httpbin service + protocol: http + sources: + - kind: IPRange + name: 10.0.0.0/8 + EOF + ``` ++2. Verify the `Ingress` object was successfully deployed using the `kubectl get ingress` command and make note of the external IP address. ++ ```console + kubectl get ingress -n httpbin + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME CLASS HOSTS ADDRESS PORTS AGE + httpbin <none> * 20.85.173.179 80 6d20h + ``` ++3. Verify the `IngressBackend` object was successfully deployed using the `kubectl get ingressbackend` command. ++ ```console + kubectl get ingressbackend -n httpbin + ``` ++ Your output should look similar to the following example output: ++ ```output + NAME STATUS + httpbin committed + ``` ++4. Verify you can access the `httpbin` service using the external IP address of the ingress service and the following `curl` command. ++ ```console + curl -sI http://<external-ip>/get + ``` ++5. Confirm you receive a response with `status 200`. ## Metrics observability -Observability of metrics allows you to view the metrics of your mesh and the deployments in your mesh. With OSM, you can use [Prometheus and Grafana][osm-metrics] for metrics observability, but those integrations aren't covered by the [AKS support policy][aks-support-policy]. +Metrics observability allows you to view the metrics of your mesh and the deployments in your mesh. With OSM, you can use [Prometheus and Grafana][osm-metrics] for metrics observability, but those integrations aren't covered by the [AKS support policy][aks-support-policy]. You can also integrate OSM with [Azure Monitor][azure-monitor]. -Before you can enable metrics on your mesh to integrate with Azure Monitor: --* Enable Azure Monitor on your cluster -* Enable the OSM add-on for your AKS cluster -* Onboard your application namespaces to the mesh --To enable metrics for a namespace in the mesh use `osm metrics enable`. For example: --```console -osm metrics enable --namespace myappnamespace -``` --Create a Configmap in the `kube-system` namespace that enables Azure Monitor to monitor your namespaces. For example, create a `monitor-configmap.yaml` with the following to monitor the `myappnamespace`: --```yaml -kind: ConfigMap -apiVersion: v1 -data: - schema-version: v1 - config-version: ver1 - osm-metric-collection-configuration: |- - # OSM metric collection settings - [osm_metric_collection_configuration] - [osm_metric_collection_configuration.settings] - # Namespaces to monitor - monitor_namespaces = ["myappnamespace"] -metadata: - name: container-azm-ms-osmconfig - namespace: kube-system -``` --Apply that ConfigMap using `kubectl apply`. --```console -kubectl apply -f monitor-configmap.yaml -``` --To access your metrics from the Azure portal, select your AKS cluster, then select *Logs* under *Monitoring*. From the *Monitoring* section, query the `InsightsMetrics` table to view metrics in the enabled namespaces. 
For example, the following query shows the *envoy* metrics for the *myappnamespace* namespace. --```sh -InsightsMetrics -| where Name contains "envoy" -| extend t=parse_json(Tags) -| where t.app == "myappnamespace" -``` +Before you can enable metrics on your mesh to integrate with Azure Monitor, make sure you have the following prerequisites: ++* Enable Azure Monitor on your cluster. +* Enable the OSM add-on for your AKS cluster. +* Onboard your application namespaces to the mesh. ++1. Enable metrics for a namespace in the mesh using the `osm metrics enable` command. ++ ```console + osm metrics enable --namespace myappnamespace + ``` ++2. Create a ConfigMap in the `kube-system` namespace that enables Azure Monitor to monitor your namespaces. For example, create a `monitor-configmap.yaml` with the following contents to monitor the `myappnamespace`: ++ ```yaml + kind: ConfigMap + apiVersion: v1 + data: + schema-version: v1 + config-version: ver1 + osm-metric-collection-configuration: |- + # OSM metric collection settings + [osm_metric_collection_configuration] + [osm_metric_collection_configuration.settings] + # Namespaces to monitor + monitor_namespaces = ["myappnamespace"] + metadata: + name: container-azm-ms-osmconfig + namespace: kube-system + ``` ++3. Apply the ConfigMap using the `kubectl apply` command. ++ ```console + kubectl apply -f monitor-configmap.yaml + ``` ++4. Navigate to the Azure portal and select your AKS cluster. +5. Under **Monitoring**, select **Logs**. +6. In the **Monitoring** section, query the `InsightsMetrics` table to view metrics in the enabled namespaces. For example, the following query shows the *envoy* metrics for the *myappnamespace* namespace: ++ ```sh + InsightsMetrics + | where Name contains "envoy" + | extend t=parse_json(Tags) + | where t.app == "myappnamespace" + ``` ## Automation and developer tools -OSM can integrate with certain automation projects and developer tooling to help operators and developers build and release applications. For example, OSM integrates with [Flagger][osm-flagger] for progressive delivery and [Dapr][osm-dapr] for building applications. OSM's integration with Flagger and Dapr aren't covered by the [AKS support policy][aks-support-policy]. +OSM can integrate with certain automation projects and developer tooling to help operators and developers build and release applications. For example, OSM integrates with [Flagger][osm-flagger] for progressive delivery and [Dapr][osm-dapr] for building applications. The OSM integrations with Flagger and Dapr aren't covered by the [AKS support policy][aks-support-policy]. ## External authorization External authorization allows you to offload authorization of HTTP requests to a OSM has several types of certificates it uses to operate on your AKS cluster. OSM includes its own certificate manager called [Tresor][osm-tresor], which is used by default. Alternatively, OSM allows you to integrate with [Hashicorp Vault][osm-hashi-vault] and [cert-manager][osm-cert-manager], but those integrations aren't covered by the [AKS support policy][aks-support-policy]. +## Next steps ++This article covered the Open Service Mesh (OSM) add-on integrations with features provided by Azure and some open source projects. To learn more about OSM, see [About OSM in AKS][about-osm-in-aks]. 
++<!-- LINKS --> [agic]: ../application-gateway/ingress-controller-overview.md-[agic-aks]: ../application-gateway/tutorial-ingress-controller-add-on-existing.md [aks-support-policy]: support-policies.md [azure-monitor]: ../azure-monitor/overview.md-[nginx]: https://github.com/kubernetes/ingress-nginx -[osm-ingress-policy]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/#http-ingress [osm-nginx]: https://release-v1-0.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/ [osm-contour]: https://release-v1-0.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#1-using-contour-ingress-controller-and-gateway [osm-metrics]: https://release-v1-0.docs.openservicemesh.io/docs/guides/observability/metrics/ OSM has several types of certificates it uses to operate on your AKS cluster. OS [osm-opa]: https://release-v1-0.docs.openservicemesh.io/docs/guides/integrations/external_auth_opa/ [osm-hashi-vault]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-hashicorp-vault [osm-cert-manager]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-cert-manager-[open-source-integrations]: open-service-mesh-integrations.md#additional-open-source-integrations -[osm-traffic-management-example]: https://github.com/MicrosoftDocs/azure-docs/pull/81085/files [osm-tresor]: https://release-v1-0.docs.openservicemesh.io/docs/guides/certificates/#using-osms-tresor-certificate-issuer-[web-app-routing]: web-app-routing.md +[web-app-routing]: web-app-routing.md +[about-osm-in-aks]: open-service-mesh-about.md |
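The metrics prerequisites listed above (Azure Monitor/Container insights and the OSM add-on) can both be enabled through the AKS add-on commands. A minimal sketch, assuming an existing cluster named *myAKSCluster* in *myResourceGroup*:

```azurecli-interactive
# Enable the Open Service Mesh add-on and Container insights on an existing cluster
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --addons open-service-mesh,monitoring

# Confirm the OSM add-on reports as enabled
az aks show --resource-group myResourceGroup --name myAKSCluster \
    --query 'addonProfiles.openServiceMesh.enabled' --output tsv
```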
aks | Use Kms Etcd Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md | After configuring KMS, you can enable [diagnostic-settings for key vault to chec ## Disable KMS -Use the following command to disable KMS on existing cluster. +Before disabling KMS, you can use the following Azure CLI command to verify if KMS is enabled. ++```azurecli-interactive +az aks list --query "[].{Name:name, KmsEnabled:securityProfile.azureKeyVaultKms.enabled, KeyId:securityProfile.azureKeyVaultKms.keyId}" -o table +``` ++If the results confirm KMS is enabled, run the following command to disable KMS on the cluster. ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms |
api-management | Add Correlation Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/add-correlation-id.md | -This article shows an Azure API management policy sample that demonstrates how to add a header containing a correlation id to the inbound request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to add a header containing a correlation id to the inbound request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
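The policy XML itself isn't reproduced in this change log. As a rough illustration of the pattern, a minimal sketch might look like the following; the `Correlation-Id` header name and the use of a new GUID are assumptions, not necessarily what the published sample uses:

```xml
<inbound>
    <base />
    <!-- Add a correlation id header, keeping any value the caller already supplied -->
    <set-header name="Correlation-Id" exists-action="skip">
        <value>@(Guid.NewGuid().ToString())</value>
    </set-header>
</inbound>
```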
api-management | Authorize Request Based On Jwt Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/authorize-request-based-on-jwt-claims.md | Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
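For context, authorization based on JWT claims is typically done with the `validate-jwt` policy in the inbound section. A minimal sketch, where the OpenID configuration URL and the `roles`/`ApiReader` claim values are placeholders rather than the sample's actual values:

```xml
<inbound>
    <base />
    <!-- Reject calls whose bearer token doesn't carry the expected claim -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized">
        <openid-config url="https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration" />
        <required-claims>
            <claim name="roles" match="any">
                <value>ApiReader</value>
            </claim>
        </required-claims>
    </validate-jwt>
</inbound>
```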
api-management | Authorize Request Using External Authorizer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/authorize-request-using-external-authorizer.md | -This article shows an Azure API management policy sample that demonstrates how to secure API access by using an external authorizer encapsulating custom authentication/authorization logic. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to secure API access by using an external authorizer encapsulating custom authentication/authorization logic. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Access restrictions policies](../api-management-access-restriction-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
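The external-authorizer pattern generally uses `send-request` to call the authorization service and `choose` to block the request when it doesn't approve. A minimal sketch, with `https://authz.example.com/check` as a placeholder endpoint:

```xml
<inbound>
    <base />
    <!-- Ask an external service to authorize the call, forwarding the caller's token -->
    <send-request mode="new" response-variable-name="authzResponse" timeout="10" ignore-error="false">
        <set-url>https://authz.example.com/check</set-url>
        <set-method>GET</set-method>
        <set-header name="Authorization" exists-action="override">
            <value>@(context.Request.Headers.GetValueOrDefault("Authorization", ""))</value>
        </set-header>
    </send-request>
    <choose>
        <!-- Block the request unless the authorizer returned 200 -->
        <when condition="@(((IResponse)context.Variables["authzResponse"]).StatusCode != 200)">
            <return-response>
                <set-status code="403" reason="Forbidden" />
            </return-response>
        </when>
    </choose>
</inbound>
```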
api-management | Cache Response | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/cache-response.md | -This article shows an Azure API management policy sample that demonstrates how to add capabilities to a backend service. For example, accept a name of the place instead of latitude and longitude in a weather forecast API. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to add capabilities to a backend service. For example, accept the name of a place instead of latitude and longitude in a weather forecast API. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Filter Ip Addresses When Using Appgw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/filter-ip-addresses-when-using-appgw.md | -This article shows an Azure API management policy sample that demonstrates how filter on the request IP address when the API Management instance is accessed through an Application Gateway or other intermediary. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to filter on the request IP address when the API Management instance is accessed through an Application Gateway or other intermediary. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Access restrictions policies](../api-management-access-restriction-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
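Because the address API Management sees in this scenario is the Application Gateway's, filtering has to look at the `X-Forwarded-For` header instead of `context.Request.IpAddress`. A minimal sketch, using the documentation range 203.0.113.0/24 as a placeholder allow-list and ignoring the multi-hop case for brevity:

```xml
<inbound>
    <base />
    <choose>
        <!-- Reject callers whose original address (first X-Forwarded-For entry) isn't in the allowed range -->
        <when condition="@(!context.Request.Headers.GetValueOrDefault("X-Forwarded-For", "").StartsWith("203.0.113."))">
            <return-response>
                <set-status code="403" reason="Forbidden" />
            </return-response>
        </when>
    </choose>
</inbound>
```

The published sample may parse the header more carefully, for example by splitting the comma-separated list and validating full CIDR ranges.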
api-management | Filter Response Content | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/filter-response-content.md | -This article shows an Azure API management policy sample that demonstrates how to filter data elements from the response payload based on the product associated with the request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to filter data elements from the response payload based on the product associated with the request. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **outbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Generate Shared Access Signature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/generate-shared-access-signature.md | -This article shows an Azure API management policy sample that demonstrates how to generate [Shared Access Signature](../../storage/common/storage-sas-overview.md) using expressions and forward the request to Azure storage with rewrite-uri policy. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to generate a [Shared Access Signature](../../storage/common/storage-sas-overview.md) using expressions and forward the request to Azure Storage with the rewrite-uri policy. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Get X Csrf Token From Sap Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/get-x-csrf-token-from-sap-gateway.md | -This article shows an Azure API management policy sample that demonstrates how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to implement X-CSRF pattern used by many APIs. This example is specific to SAP Gateway. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Log Errors To Stackify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/log-errors-to-stackify.md | -This article shows an Azure API management policy sample that demonstrates how to add an error logging policy to send errors to Stackify for logging. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to add an error logging policy to send errors to Stackify for logging. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **on-error** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Route Requests Based On Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/route-requests-based-on-size.md | -This article shows an Azure API management policy sample that demonstrates how to route requests based on the size of their bodies. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to route requests based on the size of their bodies. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Send Request Context Info To Backend Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/send-request-context-info-to-backend-service.md | -This article shows an Azure API management policy sample that demonstrates how to send request context information to the backend service. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to send request context information to the backend service. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Set Cache Duration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/set-cache-duration.md | -This article shows an Azure API management policy sample that demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to set response cache duration using maxAge value in Cache-Control header sent by the backend. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Policy Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Set Header To Enable Backend To Construct Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/set-header-to-enable-backend-to-construct-urls.md | -This article shows an Azure API management policy sample that demonstrates how to add a Forwarded header in the inbound request to allow the backend API to construct proper URLs. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +This article shows an Azure API management policy sample that demonstrates how to add a Forwarded header in the inbound request to allow the backend API to construct proper URLs. To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). ## Code Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
api-management | Use Oauth2 For Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policies/use-oauth2-for-authorization.md | This article shows an Azure API management policy sample that demonstrates how t * For a more detailed example policy that not only acquires an access token, but also caches and renews it upon expiration, see [this blog](https://techcommunity.microsoft.com/t5/azure-paas-blog/api-management-policy-for-access-token-acquisition-caching-and/ba-p/2191623). * API Management [authorizations](../authorizations-overview.md) can also be used to simplify the process of managing authorization tokens to OAuth 2.0 backend services. -To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](../policy-reference.md). +To set or edit a policy code, follow the steps described in [Set or edit a policy](../set-edit-policies.md). To see other examples, see [policy samples](/azure/api-management/policies). The following script uses named values that appear in {{property_name}}. To learn about named values and how to use them in API Management policies, see [this](../api-management-howto-properties.md) topic. Paste the code into the **inbound** block. ## Next steps -Learn more about APIM policies: +Learn more about API Management policies: + [Transformation policies](../api-management-transformation-policies.md)-+ [Policy samples](../policy-reference.md) ++ [Policy samples](/azure/api-management/policies) |
app-service | Configure Authentication Customize Sign In Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md | The token format varies slightly according to the provider. See the following ta | `microsoftaccount` | `{"access_token":"<access_token>"}` or `{"authentication_token": "<token>"`| `authentication_token` is preferred over `access_token`. The `expires_in` property is optional. <br/> When requesting the token from Live services, always request the `wl.basic` scope. | | `google` | `{"id_token":"<id_token>"}` | The `authorization_code` property is optional. Providing an `authorization_code` value will add an access token and a refresh token to the token store. When specified, `authorization_code` can also optionally be accompanied by a `redirect_uri` property. | | `facebook`| `{"access_token":"<user_access_token>"}` | Use a valid [user access token](https://developers.facebook.com/docs/facebook-login/access-tokens) from Facebook. |-| `twitter` | `{"access_token":"<access_token>", "access_token_secret":"<acces_token_secret>"}` | | +| `twitter` | `{"access_token":"<access_token>", "access_token_secret":"<access_token_secret>"}` | | | | | | If the provider token is validated successfully, the API returns with an `authenticationToken` in the response body, which is your session token. |
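For context on the provider token exchange described in the row above, here is a minimal JavaScript sketch of client-directed sign-in. It assumes the built-in `/.auth/login/<provider>` endpoint and the `X-ZUMO-AUTH` header convention; treat both as assumptions to confirm against the App Service authentication article, while the Google payload shape comes from the table quoted above.

```javascript
// Hypothetical client-directed sign-in against App Service authentication.
// The /.auth/login/<provider> path and X-ZUMO-AUTH header are assumptions to
// verify against the article; the Google payload shape comes from the table above.
async function signInWithGoogle(appBaseUrl, idToken) {
  const response = await fetch(`${appBaseUrl}/.auth/login/google`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id_token: idToken })
  });
  if (!response.ok) {
    throw new Error(`Sign-in failed with status ${response.status}`);
  }
  const body = await response.json();
  // body.authenticationToken is the session token returned on success.
  return body.authenticationToken;
}

// Example call against an authenticated API, passing the session token:
// fetch(`${appBaseUrl}/api/orders`, { headers: { "X-ZUMO-AUTH": sessionToken } });
```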
app-service | Configure Language Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md | You can also configure a custom start file with the following extensions: To add a custom start file, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive-az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<filname-with-extension>" +az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<filename-with-extension>" ``` ### Run custom command |
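As a companion to the corrected `az webapp config set --startup-file` command above, the following is a hypothetical example of what a JavaScript start file might contain. The file name `startup.js` and the assumption that `./app` exports an Express-style app are illustrative only.

```javascript
// startup.js - hypothetical custom start file that the command above could point to:
//   az webapp config set ... --startup-file "startup.js"
// Assumes ./app exports an Express-style app; adjust to your project layout.
const app = require("./app");

const port = process.env.PORT || 8080; // App Service provides PORT at runtime
app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```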
app-service | Configure Ssl Certificate In Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md | using (X509Store certStore = new X509Store(StoreName.My, StoreLocation.CurrentUs // Use certificate Console.WriteLine(cert.FriendlyName); - // Consider to call Dispose() on the certificate after it's being used, avaliable in .NET 4.6 and later + // Consider to call Dispose() on the certificate after it's being used, available in .NET 4.6 and later } ``` |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | By default, App Service certificates have a one-year validity period. Before and > > Unlike an App Service managed certificate, domain re-verification for App Service certificates *isn't* automated. Failure to verify domain ownership results in failed renewals. For more information about how to verify your App Service certificate, review [Confirm domain ownership](#confirm-domain-ownership). >-> The renewal process requires that the well-known [service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). These permissions are set up for you when you import an App Service certificate through the Azure portal. Make sure that you don't remove these permisisons from your key vault. +> The renewal process requires that the well-known [service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). These permissions are set up for you when you import an App Service certificate through the Azure portal. Make sure that you don't remove these permissions from your key vault. 1. To change the automatic renewal setting for your App Service certificate at any time, on the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. |
app-service | How To Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md | Use the following command to check if your virtual network has any locks. az lock list --resource-group $VNET_RG --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks ``` -Delete any exisiting locks using the following command. +Delete any existing locks using the following command. ```azurecli az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks |
app-service | Overview Local Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md | As part of the step that copies the storage content, any folder that is named re To flush the local cache logs, stop and restart the app. This action clears the old cache. ### Why does App Service starts showing previously deployed files after a restart when Local Cache is enabled?-In case App Service starts showing previously deployed files on a restart, check for the precense of the App Setting - '[WEBSITE_DISABLE_SCM_SEPARATION=true](https://github.com/projectkudu/kudu/wiki/Configurable-settings#use-the-same-process-for-the-user-site-and-the-scm-site)'. After adding this setting any deployments via KUDU start writing to the local VM instead of the persistent storage. Best practices mentioned above in this article should be leveraged, wherein the deployments should always be done to the staging slot which does not have Local Cache enabled. +In case App Service starts showing previously deployed files on a restart, check for the presence of the App Setting - '[WEBSITE_DISABLE_SCM_SEPARATION=true](https://github.com/projectkudu/kudu/wiki/Configurable-settings#use-the-same-process-for-the-user-site-and-the-scm-site)'. After adding this setting any deployments via KUDU start writing to the local VM instead of the persistent storage. Best practices mentioned above in this article should be leveraged, wherein the deployments should always be done to the staging slot which does not have Local Cache enabled. ## More resources |
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | The following environment variables are related to [WebJobs](webjobs-create.md). // NOTE: This is set on all sites, irrespective of whether it is a Functions site, because the EnvSettings module depends // upon it to decide when to inject the app-settings.| | `WEBSITE_PLACEHOLDER_PING_PATH` | This env var can be used to set a special warmup ping path on placeholder template sites. |-| ` WEBSITE_PLACEHOLDER_DISABLE_AUTOSPECIALIZATION` | This env var can be used to disabe specialization from being enabled automatically for a given placeholder template site. | +| ` WEBSITE_PLACEHOLDER_DISABLE_AUTOSPECIALIZATION` | This env var can be used to disable specialization from being enabled automatically for a given placeholder template site. | | `WEBSITE_FUNCTIONS_STARTUPCONTEXT_CACHE` | This env var is set only during specialization of a placeholder, to indicate to the Functions Runtime that // some function-app related data needed at startup, like secrets, are available in a file at the path specified // by this env var. | |
app-service | Scenario Secure App Authentication App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md | -## Connect to backend services as app --User authentication can begin with authenticating the user to your app service as described in the previous section. ---Once the app service has the authenticated identity, your system needs to **connect to backend services as the app**: --* Use [managed identity](tutorial-connect-overview.md#connect-to-azure-services-with-managed-identity). If managed identity isn't available, then use [Key Vault](tutorial-connect-overview.md#connect-to-key-vault-with-managed-identity). --* The user identity doesn't need to flow further. Any additional security to reach backend services is handled with the app service's identity. - [!INCLUDE [start](./includes/tutorial-set-up-app-service-authentication/after.md)] > [!div class="nextstepaction"] |
app-service | Troubleshoot Domain Ssl Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md | The App Service certificate was renewed, but the app that uses the App Service c #### Cause 1: Missing access policy permissions on the key vault -The Key Vault used to store the App Service Certificate is missing access policy permissions on the key vault for Microsoft.Azure.Websites and Microsoft.Azure.CertificateRegistation. The service principals and their required permissions for Key Vault access are: +The Key Vault used to store the App Service Certificate is missing access policy permissions on the key vault for Microsoft.Azure.Websites and Microsoft.Azure.CertificateRegistration. The service principals and their required permissions for Key Vault access are: </br></br> |Service Principal|Secret Permissions|Certificate Permissions| |
app-service | Troubleshoot Dotnet Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-dotnet-visual-studio.md | Remote debugging only works with continuous WebJobs. Scheduled and on-demand Web 2. In the ContosoAdsWebJob project, open *Functions.cs*. -3. [Set a breakpoint](/visualstudio/debugger/) on the first statement in the `GnerateThumbnail` method. +3. [Set a breakpoint](/visualstudio/debugger/) on the first statement in the `GenerateThumbnail` method.  |
app-service | Tutorial Auth Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md | -# Requires non-internal subscription - internal subscriptons doesn't provide permission to correctly configure AAD apps +# Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps # Tutorial: Authenticate and authorize users end-to-end in Azure App Service if (bearerToken) { ## 8. Browse to the apps -1. Use the frontend web site in a browser. The URL is in the formate of `https://<front-end-app-name>.azurewebsites.net/`. +1. Use the frontend web site in a browser. The URL is in the format of `https://<front-end-app-name>.azurewebsites.net/`. 1. The browser requests your authentication to the web app. Complete the authentication. :::image type="content" source="./media/tutorial-auth-aad/browser-screenshot-authentication-permission-requested-pop-up.png" alt-text="Screenshot of browser authentication pop-up requesting permissions."::: |
app-service | Tutorial Connect App App Graph Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md | -# Requires non-internal subscription - internal subscriptons doesn't provide permission to correctly configure AAD apps +# Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps # Tutorial: Flow authentication from App Service through back-end API to Microsoft Graph |
app-service | Tutorial Java Quarkus Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md | You can access Quarkus app locally by typing the `w` character into the console, If you see exceptions in the output, double-check that the configuration values for `%dev` are correct. > [!TIP]-> You can enable continuous testing by typing `r` into the terminal. This will continously run tests as you develop the application. You can also use Quarkus' *Live Coding* to see changes to your Java or `pom.xml` immediately. Simlply edit code and reload the browser. +> You can enable continuous testing by typing `r` into the terminal. This will continuously run tests as you develop the application. You can also use Quarkus' *Live Coding* to see changes to your Java or `pom.xml` immediately. Simlply edit code and reload the browser. When you're done testing locally, shut down the application with `CTRL-C` or type `q` in the terminal. |
app-service | Tutorial Multi Container App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-multi-container-app.md | When the app setting has been created, Cloud Shell shows information similar to ### Modify configuration file -In the Cloud Shell, opne the file `docker-compose-wordpress.yml` in a text editor. +In the Cloud Shell, open the file `docker-compose-wordpress.yml` in a text editor. The `volumes` option maps the file system to a directory within the container. `${WEBAPP_STORAGE_HOME}` is an environment variable in App Service that is mapped to persistent storage for your app. You'll use this environment variable in the volumes option so that the WordPress files are installed into persistent storage instead of the container. Make the following modifications to the file: |
app-service | Tutorial Php Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md | Pricing for the create resources is as follows: #### How do I connect to the MySQL database that's secured behind the virtual network with other tools? -- For basic access from a commmand-line tool, you can run `mysql` from the app's SSH terminal.+- For basic access from a command-line tool, you can run `mysql` from the app's SSH terminal. - To connect from a desktop tool like MySQL Workbench, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network. - You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network. |
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | ms.devlang: python Last updated 02/28/2023 -zone_pivot_groups: deploy-python-web-app-postgressql +zone_pivot_groups: deploy-python-web-app-postgresql # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure Pricing for the created resources is as follows: #### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools? -- For basic access from a commmand-line tool, you can run `psql` from the app's SSH terminal.+- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal. - To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network. - You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network. |
app-service | Webjobs Sdk How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md | The attribute can be declared at the parameter, method, or class level. The sett ### Timeout attribute -The [`Timeout`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/TimeoutAttribute.cs) attribute causes a function to be canceled if it doesn't finish within a specified amount of time. In the following example, the function would run for one day without the Timeout attribute. Timeout causes the function to be canceled after 15 seconds. When the Timeout attribute's "throwOnError" parameter is set to "true", the function invocation is terminated by having an exception thrown by the webjobs SDK when the timeout inverval is exceeded. The default value of "throwOnError" is "false". When the Timeout attribute is used, the default behavior is to cancel the function invocation by setting the cancellation token while allowing the invocation to run indefinitely until the function code returns or throws an exception. +The [`Timeout`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/TimeoutAttribute.cs) attribute causes a function to be canceled if it doesn't finish within a specified amount of time. In the following example, the function would run for one day without the Timeout attribute. Timeout causes the function to be canceled after 15 seconds. When the Timeout attribute's "throwOnError" parameter is set to "true", the function invocation is terminated by having an exception thrown by the webjobs SDK when the timeout interval is exceeded. The default value of "throwOnError" is "false". When the Timeout attribute is used, the default behavior is to cancel the function invocation by setting the cancellation token while allowing the invocation to run indefinitely until the function code returns or throws an exception. ```cs [Timeout("00:00:15")] |
applied-ai-services | Concept Custom Neural | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md | Values in training cases should be diverse and representative. For example, if a * The model doesn't recognize values split across page boundaries. * Custom neural models are only trained in English. Model performance is lower for documents in other languages. * If a dataset labeled for custom template models is used to train a custom neural model, the unsupported field types are ignored.-* Custom neural models are limited to 10 build operations per month. Open a support request if you need the limit increased. +* Custom neural models are limited to 20 build operations per month. Open a support request if you need the limit increased. For more information, see [Form Recognizer service quotas and limits](service-limits.md) ## Training a model |
automation | Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md | Title: Query Azure Automation Update Management logs description: This article tells how to query the logs for Update Management in your Log Analytics workspace. Previously updated : 12/13/2022 Last updated : 06/28/2022 A record with a type of `UpdateRunProgress` is created that provides update depl | SucceededOnRetry | Value indicating if the update execution failed on the first attempt and the current operation is a retry attempt. | | ErrorResult | Windows Update error code generated if an update fails to install. | | UpdateRunName| Name of the update schedule.| -| InstallationStatus | The possible installation states of an update on the client computer,<br> `NotStarted` - job not triggered yet.<br> `Failed` - job started but failed with an exception.<br> `InProgress` - job in progress.<br> `MaintenanceWindowExceeded` - if execution was remaining but maintenance window interval reached.<br> `Succeeded` - job succeeded.<br> `InstallFailed` - update failed to install successfully.<br> `NotIncluded` - the corresponding update's classification doesn't match with customer's entries in input classification list.<br> `Excluded` - user enters a KBID in excluded list. While patching, if KBID in excluded list matches with the system detected update KB ID, it is marked as excluded. | +| InstallationStatus | The possible installation states of an update on the client computer,<br> `NotStarted` - job not triggered yet.<br> `Failed` - job started but failed with an exception.<br> `InProgress` - job in progress.<br> `MaintenanceWindowExceeded` - if execution was remaining but maintenance window interval reached.<br> `Succeeded` - job succeeded.<br> `Install Failed` - update failed to install successfully.<br> `NotIncluded` - the corresponding update's classification doesn't match with customer's entries in input classification list.<br> `Excluded` - user enters a KBID in excluded list. While patching, if KBID in excluded list matches with the system detected update KB ID, it is marked as excluded. | | Computer | Fully-qualified domain name of reporting machine. | | Title | The title of the update. | | Product | The products for which the update is applicable. | |
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | The following image is taken from the sample package, and shows the exterior lay ### Unit layer -Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone doesn't create a unit because there isn’t an entity that represents the unit. +Units are navigable spaces in the building, such as offices, hallways, stairs, and elevators. A closed entity type such as Polygon, closed Polyline, Circle, or closed Ellipse is required to represent each unit. So, walls and doors alone don't create a unit because there isn’t an entity that represents the unit. The following image is taken from the [sample drawing package] and shows the unit label layer and unit layer in red. All other layers are turned off to help with visualization. Also, one unit is selected to help show that each unit is a closed Polyline. Defining text properties enables you to associate text entities that fall inside :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/dwg-layers.png" alt-text="Screenshot showing the 'create a new manifest' screen of the onboarding tool."::: > [!IMPORTANT]-> Wayfinding support for `Drawing Package 2.0` will be available soon. The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors: +> The following feature class should be defined (not case sensitive) in order to use [wayfinding]. `Wall` will be treated as an obstruction for a given path request. `Stair` and `Elevator` will be treated as level connectors to navigate across floors: > > 1. Wall > 2. Stair |
azure-maps | How To Use Indoor Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md | -When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing the styles of the different elements of your indoor maps using the [Style Rest API](/rest/api/maps/v20220901preview/style), or the [visual style editor](https://azure.github.io/Azure-Maps-Style-Editor/). +When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing the styles of the different elements of your indoor maps using the [Style Rest API], or the [visual style editor]. ## Prerequisites -- [Azure Maps account](quick-demo-map-app.md#create-an-azure-maps-account)-- [Azure Maps Creator resource](how-to-manage-creator.md)-- [Subscription key](quick-demo-map-app.md#get-the-subscription-key-for-your-account).-- [Map configuration][mapConfiguration] alias or ID. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps][tutorial] tutorial helpful.+- [Azure Maps account] +- [Azure Maps Creator resource] +- [Subscription key] +- A map configuration alias or ID. For more information, see [map configuration API]. -You'll need the map configuration `alias` (or `mapConfigurationId`) to render indoor maps with custom styles via the Azure Maps Indoor Maps module. +> [!TIP] +> If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful. ++The map configuration `alias` (or `mapConfigurationId`) is required to render indoor maps with custom styles via the Azure Maps Indoor Maps module. ## Embed the Indoor Maps module You can install and embed the *Azure Maps Indoor* module in one of two ways. -To use the globally hosted Azure Content Delivery Network version of the *Azure Maps Indoor* module, reference the following JavaScript and Style Sheet references in the `<head>` element of the HTML file: +To use the globally hosted Azure Content Delivery Network version of the *Azure Maps Indoor* module, reference the following `script` and `stylesheet` references in the `<head>` element of the HTML file: ```html <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/> <script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> ``` - Or, you can download the *Azure Maps Indoor* module. The *Azure Maps Indoor* module contains a client library for accessing Azure Maps services. Follow the steps below to install and load the *Indoor* module into your web application. + Or, you can download the *Azure Maps Indoor* module. The *Azure Maps Indoor* module contains a client library for accessing Azure Maps services. The following steps demonstrate how to install and load the *Indoor* module into your web application. - 1. Install the latest [azure-maps-indoor package](https://www.npmjs.com/package/azure-maps-indoor). + 1. Install the latest [azure-maps-indoor package]. ```powershell >npm install azure-maps-indoor Set the map domain with a prefix matching the location of your Creator resource, `atlas.setDomain('us.atlas.microsoft.com');` -For more information, see [Azure Maps service geographic scope][geos]. +For more information, see [Azure Maps service geographic scope]. 
Next, instantiate a *Map object* with the map configuration object set to the `alias` or `mapConfigurationId` property of your map configuration, then set your `styleAPIVersion` to `2022-09-01-preview`. -The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The code below shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set: +The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The following code shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set: ```javascript const subscriptionKey = "<Your Azure Maps Subscription Key>"; const map = new atlas.Map("map-id", { ## Instantiate the Indoor Manager -To load the indoor map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object*. If you wish to support [dynamic map styling](indoor-map-dynamic-styling.md), you must pass the `statesetId`. The `statesetId` variable name is case-sensitive. Your code should like the JavaScript below. +To load the indoor map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object*. If you wish to support [dynamic map styling], you must pass the `statesetId`. The `statesetId` variable name is case-sensitive. Your code should look like the following JavaScript code snippet: ```javascriptf const statesetId = "<statesetId>"; const indoorManager = new atlas.indoor.IndoorManager(map, { }); ``` -To enable polling of state data you provide, you must provide the `statesetId` and call `indoorManager.setDynamicStyling(true)`. Polling state data lets you dynamically update the state of dynamic properties or *states*. For example, a feature such as room can have a dynamic property (*state*) called `occupancy`. Your application may wish to poll for any *state* changes to reflect the change inside the visual map. The code below shows you how to enable state polling: +To enable polling of state data you provide, you must provide the `statesetId` and call `indoorManager.setDynamicStyling(true)`. Polling state data lets you dynamically update the state of dynamic properties or *states*. For example, a feature such as room can have a dynamic property (*state*) called `occupancy`. Your application may wish to poll for any *state* changes to reflect the change inside the visual map. The following code shows you how to enable state polling: ```javascript const statesetId = "<statesetId>"; map.events.add("facilitychanged", indoorManager, (eventData) => { }); ``` -The `eventData` variable holds information about the level or facility that invoked the `levelchanged` or `facilitychanged` event, respectively. When a level changes, the `eventData` object will contain the `facilityId`, the new `levelNumber`, and other metadata. When a facility changes, the `eventData` object will contain the new `facilityId`, the new `levelNumber`, and other metadata. +The `eventData` variable holds information about the level or facility that invoked the `levelchanged` or `facilitychanged` event, respectively. When a level changes, the `eventData` object contains the `facilityId`, the new `levelNumber`, and other metadata. When a facility changes, the `eventData` object contains the new `facilityId`, the new `levelNumber`, and other metadata. 
## Example: custom styling: consume map configuration in WebSDK (preview) -When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing your indoor styles. For more information, see [Create custom styles for indoor maps](how-to-create-custom-styles.md). Creator also offers a [visual style editor][visual style editor]. +When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing your indoor styles. For more information, see [Create custom styles for indoor maps]. Creator also offers a [visual style editor]. -1. Follow the [Create custom styles for indoor maps](how-to-create-custom-styles.md) how-to article to create your custom styles. Make a note of the map configuration alias after saving your changes. +1. Follow the [Create custom styles for indoor maps] how-to article to create your custom styles. Make a note of the map configuration alias after saving your changes. -2. Use the [Azure Content Delivery Network](#embed-the-indoor-maps-module) option to install the *Azure Maps Indoor* module. +2. Use the [Azure Content Delivery Network] option to install the *Azure Maps Indoor* module. 3. Create a new HTML file When you create an indoor map using Azure Maps Creator, default styles are appli 6. Initialize a *Map object*. The *Map object* supports the following options: - `Subscription key` is your Azure Maps subscription key. - `center` defines a latitude and longitude for your indoor map center location. Provide a value for `center` if you don't want to provide a value for `bounds`. Format should appear as `center`: [-122.13315, 47.63637].- - `bounds` is the smallest rectangular shape that encloses the tileset map data. Set a value for `bounds` if you don't want to set a value for `center`. You can find your map bounds by calling the [Tileset List API](/rest/api/maps/v2/tileset/list). The Tileset List API returns the `bbox`, which you can parse and assign to `bounds`. Format should appear as `bounds`: [# west, # south, # east, # north]. + - `bounds` is the smallest rectangular shape that encloses the tileset map data. Set a value for `bounds` if you don't want to set a value for `center`. You can find your map bounds by calling the [Tileset List API]. The Tileset List API returns the `bbox`, which you can parse and assign to `bounds`. Format should appear as `bounds`: [# west, # south, # east, # north]. - `mapConfiguration` the ID or alias of the map configuration that defines the custom styles you want to display on the map, use the map configuration ID or alias from step 1.- - `style` allows you to set the initial style from your map configuration that will be displayed, if unset, the style matching map configuration's default configuration will be used. + - `style` allows you to set the initial style from your map configuration that is displayed. If not set, the style matching map configuration's default configuration is used. - `zoom` allows you to specify the min and max zoom levels for your map. - `styleAPIVersion`: pass **'2022-09-01-preview'** (which is required while Custom Styling is in public preview) When you create an indoor map using Azure Maps Creator, default styles are appli 8. Add *Map object* event listeners. > [!TIP]-> The map configuration is referenced using the `mapConfigurationId` or `alias` . Each time you edit or change a map configuration, its ID changes but its alias remains the same. 
It is recommended to reference the map configuration by its alias in your applications. For more information, See [map configuration](creator-indoor-maps.md#map-configuration) in the concepts article. +> The map configuration is referenced using the `mapConfigurationId` or `alias` . Each time you edit or change a map configuration, its ID changes but its alias remains the same. It is recommended to reference the map configuration by its alias in your applications. For more information, See [map configuration] in the concepts article. -Your file should now look similar to the HTML below. +Your file should now look similar to the following HTML: ```html <!DOCTYPE html> Your file should now look similar to the HTML below. </html> ``` -To see your indoor map, load it into a web browser. It should appear like the image below. If you click on the stairwell feature, the *level picker* will appear in the upper right-hand corner. +To see your indoor map, load it into a web browser. It should appear like the following image. If you select the stairwell feature, the *level picker* appears in the upper right-hand corner.  -[See live demo](https://samples.azuremaps.com/?sample=creator-indoor-maps) +For a live demo of an indoor map with available source code, see [Creator Indoor Maps] in the [Azure Maps Samples]. ## Next steps Learn more about how to add more data to your map: > [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps) -[mapConfiguration]: /rest/api/maps/v20220901preview/map-configuration -[tutorial]: tutorial-creator-indoor-maps.md -[geos]: geographic-scope.md +[Azure Content Delivery Network]: #embed-the-indoor-maps-module +[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account +[Azure Maps Creator resource]: how-to-manage-creator.md +[Azure Maps service geographic scope]: geographic-scope.md +[azure-maps-indoor package]: https://www.npmjs.com/package/azure-maps-indoor +[Create custom styles for indoor maps]: how-to-create-custom-styles.md +[Creator Indoor Maps]: https://samples.azuremaps.com/?sample=creator-indoor-maps +[dynamic map styling]: indoor-map-dynamic-styling.md +[map configuration API]: /rest/api/maps/v20220901preview/map-configuration +[map configuration]: creator-indoor-maps.md#map-configuration +[Style Rest API]: /rest/api/maps/v20220901preview/style +[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account +[Tileset List API]: /rest/api/maps/v2/tileset/list +[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor/ |
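The Indoor Maps row above introduces the map configuration alias, `styleAPIVersion`, the Indoor Manager, and the `levelchanged`/`facilitychanged` events in separate snippets. The following sketch wires those pieces together in one place; values in angle brackets are placeholders, and the geography prefix passed to `setDomain` should match the location of your Creator resource.

```javascript
// Sketch: render an indoor map with custom styles, dynamic styling, and level/facility events.
// Values in angle brackets are placeholders for your own Azure Maps Creator resources.
atlas.setDomain("us.atlas.microsoft.com"); // match the geography of your Creator resource

const map = new atlas.Map("map-id", {
  authOptions: {
    authType: "subscriptionKey",
    subscriptionKey: "<Your Azure Maps Subscription Key>"
  },
  mapConfiguration: "<map-configuration-alias-or-id>",
  styleAPIVersion: "2022-09-01-preview"
});

map.events.add("ready", () => {
  const indoorManager = new atlas.indoor.IndoorManager(map, {
    statesetId: "<statesetId>" // case-sensitive; required for dynamic styling
  });
  indoorManager.setDynamicStyling(true); // poll for state changes such as occupancy

  map.events.add("levelchanged", indoorManager, (eventData) => {
    console.log("Level changed:", eventData.levelNumber);
  });
  map.events.add("facilitychanged", indoorManager, (eventData) => {
    console.log("Facility changed:", eventData.facilityId);
  });
});
```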
azure-maps | How To Use Ios Map Control Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ios-map-control-library.md | Title: Get started with iOS map control | Microsoft Azure Maps + Title: Get started with iOS map control + description: Become familiar with the Azure Maps iOS SDK. See how to install the SDK and create an interactive map. The Azure Maps iOS SDK is a vector map library for iOS. This article guides you ## Prerequisites -Be sure to complete the steps in the [Quickstart: Create an iOS app](quick-ios-app.md) article. +Be sure to complete the steps in the [Quickstart: Create an iOS app] article. ## Localizing the map -The Azure Maps iOS SDK provides three ways of setting the language and regional view of the map. The following code demonstrates the different ways of setting the *language* to French ("fr-FR") and the *regional view* to "Auto". +The Azure Maps iOS SDK provides three ways of setting the language and regional view of the map. The following code demonstrates the different ways of setting the *language* to French (fr-FR) and the *regional view* to `Auto`. -1. Pass the language and regional view information into the `AzureMaps` class using the static `language` and `view` properties. This sets the default language and regional view properties in your app. +1. Set the default language and regional view properties in your app by passing the language and regional view information into the `AzureMaps` class using the static `language` and `view` properties. ```swift // Alternatively use Azure Active Directory authenticate. The Azure Maps iOS SDK provides three ways of setting the language and regional ]) ``` -1. The final way of programmatically setting the language and regional view properties uses the maps `setStyle` method. Do this any time you need to change the language and regional view of the map. +1. The final way of programmatically setting the language and regional view properties uses the maps `setStyle` method. Use the maps `setStyle` method anytime you need to change the language and regional view of the map. ```swift mapControl.getMapAsync { map in The Azure Maps iOS SDK provides three ways of setting the language and regional } ``` -Here is an example of an Azure Maps application with the language set to "fr-FR" and regional view set to "Auto". +Here's an example of an Azure Maps application with the language set to `fr-FR` and regional view set to `Auto`. :::image type="content" source="media/ios-sdk/how-to-use-ios-map-control-library/fr-borderless.png" alt-text="A map image showing labels in French."::: -For a complete list of supported languages and regional views, see [Localization support in Azure Maps](supported-languages.md). +For a complete list of supported languages and regional views, see [Localization support in Azure Maps]. ## Navigating the map This section details the various ways to navigate when in an Azure Maps program. The Azure Maps iOS SDK supports using the Azure Government cloud. 
You specify using the Azure Maps government cloud domain by adding the following line of code where the Azure Maps authentication details are specified: -``` +```swift AzureMaps.domain = "atlas.azure.us" ``` Be sure to use Azure Maps authentication details from the Azure Government cloud ## Additional information -See the following articles for additional code examples: +See the following articles for more code examples: * [Quickstart: Create an iOS app](quick-ios-app.md) * [Change map styles in iOS maps](set-map-style-ios-sdk.md) * [Add a symbol layer](add-symbol-layer-ios.md) * [Add a line layer](add-line-layer-map-ios.md) * [Add a polygon layer](add-polygon-layer-map-ios.md)++[Quickstart: Create an iOS app]: quick-ios-app.md +[Localization support in Azure Maps]: supported-languages.md |
azure-maps | How To Use Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md | Title: How to use the Azure Maps web map control description: Learn how to add and localize maps to web and mobile applications by using the Map Control client-side JavaScript library in Azure Maps. -- Previously updated : 11/29/2021++ Last updated : 06/29/2023 -This article uses the Azure Maps Web SDK, however the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects](open-source-projects.md#third-part-map-control-plugins). +This article uses the Azure Maps Web SDK, however the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects]. ## Prerequisites To use the Map Control in a web page, you must have one of the following prerequ * An [Azure Maps account] * A [subscription key]-* Obtain your Azure Active Directory (AAD) credentials with [authentication options] +* Obtain your Azure Active Directory (Azure AD) credentials with [authentication options] ## Create a new map in a web page You can embed a map in a web page by using the Map Control client-side JavaScrip 2. Load in the Azure Maps Web SDK. You can choose one of two options: - * Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and stylesheet in the `<head>` element of the HTML file: + * Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file: ```html <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> ``` - * Load the Azure Maps Web SDK source code locally using the [azure-maps-control](https://www.npmjs.com/package/azure-maps-control) npm package and host it with your app. This package also includes TypeScript definitions. + * Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions. > **npm install azure-maps-control** - Then add references to the Azure Maps stylesheet to the `<head>` element of the file: + Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> You can embed a map in a web page by using the Map Control client-side JavaScrip </body> ``` -5. Now, we'll initialize the map control. In order to authenticate the control, you'll either need to own an Azure Maps subscription key or use Azure Active Directory (AAD) credentials with [authentication options](/javascript/api/azure-maps-control/atlas.authenticationoptions). +5. Next, initialize the map control. In order to authenticate the control, use an Azure Maps subscription key or Azure AD credentials with [authentication options]. If you're using a subscription key for authentication, copy and paste the following script element inside the `<head>` element, and below the first `<script>` element. Replace `<Your Azure Maps Key>` with your Azure Maps subscription key. 
You can embed a map in a web page by using the Map Control client-side JavaScrip </script> ``` - If you're using Azure Active Directory (AAD) for authentication, copy and paste the following script element inside the `<head>` element, and below the first `<script>` element. + If you're using Azure AD for authentication, copy and paste the following script element inside the `<head>` element, and below the first `<script>` element. ```HTML <script type="text/javascript"> You can embed a map in a web page by using the Map Control client-side JavaScrip </script> ``` - For more information about authentication with Azure Maps, see the [Authentication with Azure Maps](azure-maps-authentication.md) document. For a list of samples showing how to integrate Azure Active Directory (AAD) with Azure Maps, see [Azure Maps & Azure Active Directory Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) in GitHub. + For more information about authentication with Azure Maps, see the [Authentication with Azure Maps] document. For a list of samples showing how to integrate Azure AD with Azure Maps, see [Azure Maps & Azure Active Directory Samples] in GitHub. >[!TIP] >In this example, we've passed in the `id` of the map `<div>`. Another way to do this is to pass in the `HTMLElement` object by passing`document.getElementById('myMap')` as the first parameter. You can embed a map in a web page by using the Map Control client-side JavaScrip <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> ``` -7. Putting it all together, your HTML file should look something like the following markup: +7. Your HTML file should now look something like the following code snippet: ```HTML <!DOCTYPE html> You can embed a map in a web page by using the Map Control client-side JavaScrip </html> ``` -8. Open the file in your web browser and view the rendered map. It should look like the image below: +8. Open the file in your web browser and view the rendered map. It should look like the following image:  ## Localizing the map -Azure Maps provides two different ways of setting the language and regional view for the rendered map. The first option is to add this information to the global `atlas` namespace, which will result in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto": +Azure Maps provides two different ways of setting the language and regional view for the rendered map. The first option is to add this information to the global `atlas` namespace, which results in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to "Auto": ```javascript atlas.setLanguage('fr-FR'); map = new atlas.Map('myMap', { > [!NOTE] > It is possible to load multiple map instances on the same page with different language and region settings. Additionally, these settings can be updated after the map loads using the `setStyle` function of the map. -Here is an example of Azure Maps with the language set to "fr-FR" and the regional view set to "Auto". +Here's an example of Azure Maps with the language set to "fr-FR" and the regional view set to `Auto`.  -For a list of supported languages and regional views, see [Localization support in Azure Maps](supported-languages.md). +For a list of supported languages and regional views, see [Localization support in Azure Maps]. 
## Azure Government cloud support -The Azure Maps Web SDK supports the Azure Government cloud. All JavaScript and CSS URLs used to access the Azure Maps Web SDK remain the same. The following tasks will need to be done to connect to the Azure Government cloud version of the Azure Maps platform. +The Azure Maps Web SDK supports the Azure Government cloud. All JavaScript and CSS URLs used to access the Azure Maps Web SDK remain the same. The following tasks need to be done to connect to the Azure Government cloud version of the Azure Maps platform. When using the interactive map control, add the following line of code before creating an instance of the `Map` class. atlas.setDomain('atlas.azure.us'); Be sure to use Azure Maps authentication details from the Azure Government cloud platform when authenticating the map and services. -When using the services module, the domain for the services needs to be set when creating an instance of an API URL endpoint. For example, the following code creates an instance of the `SearchURL` class and points the domain to the Azure Government cloud. +The domain for the services needs to be set when creating an instance of an API URL endpoint, when using the services module. For example, the following code creates an instance of the `SearchURL` class and points the domain to the Azure Government cloud. ```javascript var searchURL = new atlas.service.SearchURL(pipeline, 'atlas.azure.us'); If directly accessing the Azure Maps REST services, change the URL domain to `at If developing using a JavaScript framework, one of the following open-source projects may be useful: -* [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps. -* [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component. -* [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control. -* [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application. +* [ng-azure-maps] - Angular 10 wrapper around Azure maps. +* [AzureMapsControl.Components] - An Azure Maps Blazor component. +* [Azure Maps React Component] - A react wrapper for the Azure Maps control. +* [Vue Azure Maps] - An Azure Maps component for Vue application. 
## Next steps Learn best practices and see samples: > [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps) -For a list of samples showing how to integrate Azure Active Directory (AAD) with Azure Maps, see: +For a list of samples showing how to integrate Azure AD with Azure Maps, see: > [!div class="nextstepaction"] > [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) +[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions +[Authentication with Azure Maps]: azure-maps-authentication.md +[Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account+[Azure Maps community - Open-source projects]: open-source-projects.md#third-part-map-control-plugins +[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps +[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components +[azure-maps-control]: https://www.npmjs.com/package/azure-maps-control +[Localization support in Azure Maps]: supported-languages.md +[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account-[authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions +[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps |
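To summarize the localization and Azure Government guidance in the Map Control row above, here is a small sketch that sets global defaults and then creates a map instance. The center coordinates and zoom level are illustrative, and the Government cloud line should only be enabled when your account actually lives in that cloud.

```javascript
// Sketch: global defaults for language and regional view, plus optional Azure Government domain.
atlas.setLanguage("fr-FR");
atlas.setView("Auto");
// atlas.setDomain("atlas.azure.us"); // only for accounts on the Azure Government cloud

const map = new atlas.Map("myMap", {
  center: [-122.33, 47.6], // illustrative coordinates
  zoom: 12,
  authOptions: {
    authType: "subscriptionKey",
    subscriptionKey: "<Your Azure Maps Key>"
  }
});

// The same settings can also be applied per instance at construction time:
// new atlas.Map("myMap", { language: "fr-FR", view: "Auto", authOptions: { /* ... */ } });
```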
azure-maps | Map Get Shape Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md | function getDrawnShapes() { } ``` -The [Get drawn shapes from drawing manager] code sample allows you to draw a shape on a map and then get the code used to create those drawings by using the drawing managers `drawingManager.getSource()` function. +The [Get drawn shapes from drawing manager] code sample allows you to draw a shape on a map and then get the code used to create those drawings by using the drawing managers `drawingManager.getSource()` function. For the source code for this sample, see [Get drawn shapes from drawing manager sample code]. :::image type="content" source="./media/map-get-shape-data/get-data-from-drawn-shape.png" alt-text="A screenshot of a map with a circle drawn around Seattle. Next to the map is the code used to create the circle."::: Learn more about the classes and methods used in this article: > [!div class="nextstepaction"] > [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar) -[Get drawn shapes from drawing manager]: https://samples.azuremaps.com/drawing-tools-module/get-drawn-shapes-from-drawing-manager +[Get drawn shapes from drawing manager]: https://samples.azuremaps.com/drawing-tools-module/get-drawn-shapes-from-drawing-manager +[Get drawn shapes from drawing manager sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Get%20drawn%20shapes%20from%20drawing%20manager/Get%20drawn%20shapes%20from%20drawing%20manager.html |
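Building on the `drawingManager.getSource()` call referenced in the row above, the following sketch shows one way drawn shapes might be read back as GeoJSON. The `toJson()` helper and the drawing manager setup in the comment are assumptions drawn from the Web SDK's drawing tools module, so verify them against the linked sample code.

```javascript
// Sketch: return everything the user has drawn as a GeoJSON FeatureCollection.
// Assumes a drawing manager created earlier, for example:
//   const drawingManager = new atlas.drawing.DrawingManager(map, {
//     toolbar: new atlas.control.DrawingToolbar()
//   });
function getDrawnShapes() {
  const source = drawingManager.getSource(); // DataSource backing the drawn shapes
  const geojson = source.toJson();           // assumed DataSource helper; confirm in the SDK reference
  console.log(JSON.stringify(geojson, null, 2));
  return geojson;
}
```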
azure-maps | Map Show Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md | map.setTraffic({ }); ``` -The [Traffic Overlay] sample demonstrates how to display the traffic overlay on a map. +The [Traffic Overlay] sample demonstrates how to display the traffic overlay on a map. For the source code for this sample, see [Traffic Overlay source code]. :::image type="content" source="./media/map-show-traffic/traffic-overlay.png"alt-text="A screenshot of map with the traffic overlay, showing current traffic."::: The [Traffic Overlay] sample demonstrates how to display the traffic overlay on ## Traffic overlay options -The [Traffic Overlay Options] tool lets you switch between the different traffic overlay settings to see how the rendering changes. +The [Traffic Overlay Options] tool lets you switch between the different traffic overlay settings to see how the rendering changes. For the source code for this sample, see [Traffic Overlay Options source code]. :::image type="content" source="./media/map-show-traffic/traffic-overlay-options.png"alt-text="A screenshot of map showing the traffic overlay options."::: map.controls.add(new atlas.control.TrafficControl(), { position: 'top-right' }); map.controls.add(new atlas.control.TrafficLegendControl(), { position: 'bottom-left' }); ``` -The [Add traffic controls] sample is a fully functional map that shows how to display traffic data on a map. +The [Traffic controls] sample is a fully functional map that shows how to display traffic data on a map. For the source code for this sample, see [Traffic controls source code]. :::image type="content" source="./media/map-show-traffic/add-traffic-controls.png"alt-text="A screenshot of map with the traffic display button, showing current traffic."::: Enhance your user experiences: > [Code sample page](https://aka.ms/AzureMapsSamples) [Traffic Overlay]: https://samples.azuremaps.com/traffic/traffic-overlay-[Add traffic controls]: https://samples.azuremaps.com/traffic/traffic-controls -[Traffic Overlay Options]: https://samples.azuremaps.com/traffic/traffic-overlay-options +[Traffic controls]: https://samples.azuremaps.com/traffic/traffic-controls +[Traffic Overlay Options]: https://samples.azuremaps.com/traffic/traffic-overlay-options +[Traffic Overlay source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20Overlay/Traffic%20Overlay.html +[Traffic controls source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20controls/Traffic%20controls.html +[Traffic Overlay Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20Overlay%20Options/Traffic%20Overlay%20Options.html |
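The traffic snippets quoted above can be combined into one small example. This is a sketch assuming an existing `map` instance; the traffic option values are illustrative:

```javascript
// Minimal sketch, assuming "map" is an existing atlas.Map instance.
map.events.add('ready', function () {
    // Display traffic flow and incidents on the map.
    map.setTraffic({
        incidents: true,
        flow: 'relative'
    });

    // Add the traffic toggle and legend controls.
    map.controls.add(new atlas.control.TrafficControl(), { position: 'top-right' });
    map.controls.add(new atlas.control.TrafficLegendControl(), { position: 'bottom-left' });
});
```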
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | Title: Release notes - Map Control description: Release notes for the Azure Maps Web SDK. -+ Last updated 3/15/2023 This document contains information about new features and other changes to the M ## v3 (preview) +### [3.0.0-preview.9] (June 27, 2023) ++#### New features (3.0.0-preview.9) ++- WebGL2 is used by default. ++- Elevation APIs: `atlas.sources.ElevationTileSource`, `map.enableElevation(elevationSource, options)`, `map.disableElevation()` ++- ability to customize maxPitch / minPitch in `CameraOptions` ++#### Bug fixes (3.0.0-preview.9) ++- fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called ++#### Installation (3.0.0-preview.9) ++The preview is available on [npm][3.0.0-preview.9] and CDN. ++- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.9][3.0.0-preview.9] ++- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file: ++ ```html + <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.9/atlas.min.css" rel="stylesheet" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0.0-preview.9/atlas.min.js"></script> + ``` + ### [3.0.0-preview.8] (June 2, 2023) #### Bug fixes (3.0.0-preview.8) This update is the first preview of the upcoming 3.0.0 release. The underlying [ ## v2 (latest) +### [2.3.1] (June 27, 2023) ++#### Bug fixes (2.3.1) ++- fix `ImageSpriteManager` icon images may get removed during style change ++#### Other changes (2.3.1) ++- security: insecure-randomness fix in UUID generation. + ### [2.3.0] (June 2, 2023) #### New features (2.3.0) Stay up to date on Azure Maps: > [!div class="nextstepaction"] > [Azure Maps Blog] +[3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8 [3.0.0-preview.7]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.7 [3.0.0-preview.6]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.6 Stay up to date on Azure Maps: [3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1+[2.3.1]: https://www.npmjs.com/package/azure-maps-control/v/2.3.1 [2.3.0]: https://www.npmjs.com/package/azure-maps-control/v/2.3.0 [2.2.7]: https://www.npmjs.com/package/azure-maps-control/v/2.2.7 [2.2.6]: https://www.npmjs.com/package/azure-maps-control/v/2.2.6 |
azure-maps | Set Drawing Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md | drawingManager = new atlas.drawing.DrawingManager(map,{ The previous examples demonstrated how to customize drawing options while instantiating the Drawing Manager. You can also set the Drawing Manager options by using the `drawingManager.setOptions()` function. -The [Drawing manager options] can be used to test out customization of all options for the drawing manager using the `setOptions` function. +The [Drawing manager options] can be used to test out customization of all options for the drawing manager using the `setOptions` function. For the source code for this sample, see [Drawing manager options source code]. :::image type="content" source="./media/set-drawing-options/drawing-manager-options.png"alt-text="A screenshot of a map of Seattle with a panel on the left showing the drawing manager options that can be selected to see the effects they make to the map."::: Learn more about the classes and methods used in this article: > [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar) [Drawing manager options]: https://samples.azuremaps.com/drawing-tools-module/drawing-manager-options+[Drawing manager options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20manager%20options/Drawing%20manager%20options.html |
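For illustration, a minimal sketch of the `drawingManager.setOptions()` call mentioned above; it assumes `drawingManager` was created as shown earlier in that article, and the option values are examples rather than defaults:

```javascript
// Minimal sketch, assuming "drawingManager" already exists.
drawingManager.setOptions({
    mode: 'draw-polygon',        // switch the active drawing mode
    interactionType: 'freehand'  // draw by dragging rather than clicking
});
```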
azure-maps | Spatial Io Add Ogc Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md | The following sections outline the web map service features that are supported b The `url` can be the base URL for the service or a full URL with the query for getting the capabilities of the service. Depending on the details provided, the WFS client may try several standard URL formats to determine how to initially access the service. -The [OGC map layer] sample shows how to overlay an OGC map layer on the map. +The [OGC map layer] sample shows how to overlay an OGC map layer on the map. For the source code for this sample, see [OGC map layer source code]. :::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-map-layer.png"alt-text="A screenshot that shows the snap grid on map."::: The [OGC map layer] sample shows how to overlay an OGC map layer on the map. -> ## OGC map layer options -The [OGC map layer options] sample demonstrates the different OGC map layer options. +The [OGC map layer options] sample demonstrates the different OGC map layer options. For the source code for this sample, see [OGC map layer options source code]. :::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-map-layer-options.png"alt-text="A screenshot that shows a map along with the OGC map layer options."::: The [OGC map layer options] sample demonstrates the different OGC map layer opti ## OGC Web Map Service explorer -The [OGC Web Map Service explorer] sample overlays imagery from the Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers. You may select which layers in the service are rendered on the map. You may also view the associated legends for these layers. +The [OGC Web Map Service explorer] sample overlays imagery from the Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers. You may select which layers in the service are rendered on the map. You may also view the associated legends for these layers. For the source code for this sample, see [OGC Web Map Service explorer source code]. :::image type="content" source="./media/spatial-io-add-ogc-map-layer/ogc-web-map-service-explorer.png"alt-text="A screenshot that shows a map with a WMTS layer that comes from the world geology survey. On the left of the map is a drop-down list showing the OGC services which can be selected."::: See the following articles, which contain code samples you could add to your map [OGC map layer]: https://samples.azuremaps.com/spatial-io-module/ogc-map-layer-example [OGC map layer options]: https://samples.azuremaps.com/spatial-io-module/ogc-map-layer-options [OGC Web Map Service explorer]: https://samples.azuremaps.com/spatial-io-module/ogc-web-map-service-explorer++[OGC map layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/OGC%20map%20layer%20example/OGC%20map%20layer%20example.html +[OGC map layer options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/OGC%20map%20layer%20options/OGC%20map%20layer%20options.html +[OGC Web Map Service explorer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/OGC%20Web%20Map%20Service%20explorer/OGC%20Web%20Map%20Service%20explorer.html |
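A rough sketch of overlaying an OGC map layer as the samples above describe, assuming the spatial IO module is loaded and `map` exists; the WMS endpoint URL is a placeholder, not one from the article:

```javascript
// Minimal sketch, assuming the spatial IO module is loaded and "map" exists.
// The endpoint URL is a placeholder.
var layer = new atlas.layer.OgcMapLayer({
    url: 'https://example.com/wms'  // base URL or full GetCapabilities URL
});

map.layers.add(layer);
```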
azure-maps | Spatial Io Add Simple Data Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md | The real power of the simple data layer comes when: - Features in the data set have several style properties individually set on them; or - You're not sure what the data set exactly contains. -For example when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The [Simple data layer options] sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides. +For example when parsing XML data feeds, you may not know the exact styles and geometry types of the features. The [Simple data layer options] sample shows the power of the simple data layer by rendering the features of a KML file. It also demonstrates various options that the simple data layer class provides. For the source code for this sample, see [Simple data layer options source code]. :::image type="content" source="./media/spatial-io-add-simple-data-layer/simple-data-layer-options.png"alt-text="A screenshot of map with a panel on the left showing the different simple data layer options."::: See the following articles for more code samples to add to your maps: > [Supported data format details](spatial-io-supported-data-format-details.md) [Simple data layer options]: https://samples.azuremaps.com/spatial-io-module/simple-data-layer-options+[Simple data layer options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20IO%20Module/Simple%20data%20layer%20options/Simple%20data%20layer%20options.html |
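To illustrate the pattern described above, here is a minimal sketch that parses a KML string and renders whatever it contains with a simple data layer. It assumes the spatial IO module is loaded, `map` exists, and `kmlText` was fetched elsewhere:

```javascript
// Minimal sketch, assuming the spatial IO module is loaded, "map" exists,
// and "kmlText" holds KML markup fetched elsewhere.
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

// The simple data layer renders features regardless of geometry type or styling.
map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

// Parse the KML feed and add its features to the data source.
atlas.io.read(kmlText).then(function (result) {
    if (result) {
        datasource.add(result);
    }
});
```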
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Title: Azure Monitor OpenTelemetry configuration for .NET, Java, Node.js, and Python applications + Title: Configure Azure Monitor OpenTelemetry for .NET, Java, Node.js, and Python applications description: This article provides configuration guidance for .NET, Java, Node.js, and Python applications. Last updated 06/23/2023 ms.devlang: csharp, javascript, typescript, python -# Azure Monitor OpenTelemetry configuration +# Configure Azure Monitor OpenTelemetry This article covers configuration settings for the Azure Monitor OpenTelemetry distro. |
azure-monitor | Metrics Supported | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md | -Date list was last updated: 06/04/2023. +Date list was last updated: 06/27/2023. Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface). This latest update adds a new column and reorders the metrics to be alphabetical |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| ||||||||-|DeviceAttestationCount |Yes |Device Attestation Requests |Count |Count |Count of all the requests sent by an Azure Sphere device for authentication and attestation. |DeviceId, CatalogId, StatusCodeClass | -|DeviceErrorCount |Yes |Device Errors |Count |Count |Count of all the errors encountered by an Azure Sphere device. |DeviceId, CatalogId, ErrorCategory, ErrorClass, ErrorType | +|DeviceEventsCount |Yes |Device Events |Count |Count |Count of all the events generated by an Azure Sphere device. |DeviceId, EventCategory, EventClass, EventType | +|DeviceRequestsCount |Yes |Device Requests |Count |Count |Count of all the requests sent by an Azure Sphere device. |DeviceId, OperationName, ResultType | ## Microsoft.Batch/batchaccounts <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |SpeechModelHostingHours |Yes |Speech Model Hosting Hours |Count |Total |Number of speech model hosting hours |ApiName, FeatureName, UsageChannel, Region | |SpeechSessionDuration |Yes |Speech Session Duration (Deprecated) |Seconds |Total |Total duration of speech session in seconds. |ApiName, OperationName, Region | |SuccessfulCalls |Yes |Successful Calls |Count |Total |Number of successful calls. |ApiName, OperationName, Region, RatelimitKey |-|SuccessRate |No |Availability |Percent |Average |Availability percentage with the following calculation: (Total Calls - Server Errors)/Total Calls. Server Errors include any HTTP responses >=500. |ApiName, OperationName, Region, RatelimitKey | +|SuccessRate |No |AvailabilityRate |Percent |Average |Availability percentage with the following calculation: (Total Calls - Server Errors)/Total Calls. Server Errors include any HTTP responses >=500. |ApiName, OperationName, Region, RatelimitKey | |SynthesizedCharacters |Yes |Synthesized Characters |Count |Total |Number of Characters. |ApiName, FeatureName, UsageChannel, Region | |TextCharactersTranslated |Yes |Text Characters Translated |Count |Total |Number of characters in incoming text translation request. |ApiName, FeatureName, UsageChannel, Region | |TextCustomCharactersTranslated |Yes |Text Custom Characters Translated |Count |Total |Number of characters in incoming custom text translation request. |ApiName, FeatureName, UsageChannel, Region | This latest update adds a new column and reorders the metrics to be alphabetical |||||||| |aborted_connections |Yes |Aborted Connections |Count |Total |Aborted Connections |No Dimensions | |active_connections |Yes |Active Connections |Count |Maximum |Active Connections |No Dimensions |+|available_memory_bytes |Yes |Available Memory Bytes |Bytes |Average |Amount of physical memory, in bytes. 
|No Dimensions | |backup_storage_used |Yes |Backup Storage Used |Bytes |Maximum |Backup Storage Used |No Dimensions | |Com_alter_table |Yes |Com Alter Table |Count |Total |The number of times ALTER TABLE statement has been executed. |No Dimensions | |Com_create_db |Yes |Com Create DB |Count |Total |The number of times CREATE DB statement has been executed. |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |Innodb_buffer_pool_pages_free |Yes |InnoDB Buffer Pool Pages Free |Count |Total |The number of free pages in the InnoDB buffer pool. |No Dimensions | |Innodb_buffer_pool_read_requests |Yes |InnoDB Buffer Pool Read Requests |Count |Total |The number of logical read requests. |No Dimensions | |Innodb_buffer_pool_reads |Yes |InnoDB Buffer Pool Reads |Count |Total |The number of logical reads that InnoDB could not satisfy from the buffer pool, and had to read directly from disk. |No Dimensions |+|Innodb_data_writes |Yes |Innodb Data Writes |Count |Total |The total number of data writes. |No Dimensions | +|Innodb_row_lock_time |Yes |Innodb Row Lock Time |Milliseconds |Average |The total time spent in acquiring row locks for InnoDB tables, in milliseconds. |No Dimensions | |io_consumption_percent |Yes |Storage IO Percent |Percent |Maximum |Storage I/O consumption percent |No Dimensions | |memory_percent |Yes |Host Memory Percent |Percent |Maximum |Host Memory Percent |No Dimensions | |network_bytes_egress |Yes |Host Network Out |Bytes |Total |Host Network egress in bytes |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |serverlog_storage_percent |Yes |Serverlog Storage Percent |Percent |Maximum |Serverlog Storage Percent |No Dimensions | |serverlog_storage_usage |Yes |Serverlog Storage Used |Bytes |Maximum |Serverlog Storage Used |No Dimensions | |Slow_queries |Yes |Slow Queries |Count |Total |The number of queries that have taken more than long_query_time seconds. |No Dimensions |-|storage_io_count |Yes |IO Count |Count |Total |The number of I/O consumed. |No Dimensions | +|storage_io_count |No |Storage IO Count |Count |Total |The number of storage I/O consumed. |No Dimensions | |storage_limit |Yes |Storage Limit |Bytes |Maximum |Storage Limit |No Dimensions | |storage_percent |Yes |Storage Percent |Percent |Maximum |Storage Percent |No Dimensions |-|storage_throttle_count |Yes |Storage Throttle Count |Count |Maximum |Storage IO requests throttled in the selected time range. |No Dimensions | +|storage_throttle_count |Yes |Storage Throttle Count (deprecated) |Count |Maximum |Storage IO requests throttled in the selected time range. Deprecated, please check Storage IO Percent for throttling. |No Dimensions | |storage_used |Yes |Storage Used |Bytes |Maximum |Storage Used |No Dimensions |+|Threads_running |Yes |Threads Running |Count |Total |The number of threads that are not sleeping. |No Dimensions | |total_connections |Yes |Total Connections |Count |Total |Total Connections |No Dimensions | ## Microsoft.DBforMySQL/servers This latest update adds a new column and reorders the metrics to be alphabetical |TotalLatency |Yes |Total Latency |Milliseconds |Average |The response latency of the service. |Protocol | |TotalRequests |Yes |Total Requests |Count |Sum |The total number of requests received by the service. 
|Protocol | -## Microsoft.HealthcareApis/workspaces/analyticsconnectors -<!-- Data source : arm--> --|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| -|||||||| -|AnalyticsConnectorHealthStatus |Yes |Analytics Connector Health Status |Count |Sum |The health status of analytics connector |Operation, Component | -|AnalyticsConnectorResourceLatency |Yes |Analytics Connector Process Latency |Milliseconds |Average |The response latency of the service. |No Dimensions | -|AnalyticsConnectorSuccessfulDataSize |Yes |Analytics Connector Successful Data Size |Count |Sum |The size of data successfully processed by the analytics connector |No Dimensions | -|AnalyticsConnectorSuccessfulResourceCount |Yes |Analytics Connector Successful Resource Count |Count |Sum |The amount of data successfully processed by the analytics connector |No Dimensions | -|AnalyticsConnectorTotalError |Yes |Analytics Connector Total Error Count |Count |Sum |The total number of errors logged by the analytics connector |ErrorType, Operation | - ## Microsoft.HealthcareApis/workspaces/fhirservices <!-- Data source : arm--> This latest update adds a new column and reorders the metrics to be alphabetical |Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| |||||||| |Availability |Yes |Availability |Percent |Average |Availability of the APIs |ApiCategory, ApiName |-|CreatorUsage |No |Creator Usage |Bytes |Average |Azure Maps Creator usage statistics |ServiceName | |Usage |No |Usage |Count |Count |Count of API calls |ApiCategory, ApiName, ResultType, ResponseCode | ## Microsoft.Media/mediaservices This latest update adds a new column and reorders the metrics to be alphabetical |||||||| |DataIngested |No |Data Ingested |Bytes |Total |The volume of data ingested by the pipeline (bytes). |No Dimensions | |MalformedData |Yes |Malformed Data |Count |Total |The number of files unable to be processed by the pipeline. |No Dimensions |+|MalformedRecords |No |Malformed Records |Count |Total |The number of records unable to be processed by the pipeline. |No Dimensions | |ProcessedFileCount |Yes |Processed File Count |Count |Total |The number of files processed by the data connector. |No Dimensions | |Running |Yes |Running |Unspecified |Count |Values greater than 0 indicate that the pipeline is ready to process data. 
|No Dimensions | +## Microsoft.NetworkCloud/bareMetalMachines +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|HostBootTimeSeconds |No |Host Boot Seconds |Seconds |Average |Unix time of last boot |Host | +|HostDiskReadCompleted |No |Host Disk Reads Completed |Count |Average |Disk reads completed by node |Device, Host | +|HostDiskReadSeconds |No |Host Disk Read Seconds |Seconds |Average |Disk read time by node |Device, Host | +|HostDiskWriteCompleted |No |Total Number of Writes Completed |Count |Average |Disk writes completed by node |Device, Host | +|HostDiskWriteSeconds |No |Host Disk Write In Seconds |Seconds |Average |Disk write time by node |Device, Host | +|HostDmiInfo |No |Host DMI Info |Unspecified |Count |Host Desktop Management Interface (DMI) environment information |BiosDate, BiosRelease, BiosVendor, BiosVersion, BoardAssetTag, BoardName, BoardVendor, BoardVersion, ChassisAssetTag, ChassisVendor, ChassisVersion, Host, ProductFamily, ProductName, ProductSku, ProductUuid, ProductVersion, SystemVendor | +|HostEntropyAvailableBits |No |Host Entropy Available Bits (Preview) |Count |Average |Available bits in node entropy |Host | +|HostFilesystemAvailBytes |No |Host Filesystem Available Bytes |Count |Average |Available filesystem size by node |Device, FSType, Host, Mountpoint | +|HostFilesystemDeviceError |No |Host Filesystem Device Errors |Count |Average |Indicates if there was a problem getting information for the filesystem |Device, FSType, Host, Mountpoint | +|HostFilesystemFiles |No |Host Filesystem Files |Count |Average |Total number of permitted inodes |Device, FSType, Host, Mountpoint | +|HostFilesystemFilesFree |No |Total Number of Free inodes |Count |Average |Total number of free inodes |Device, FSType, Host, Mountpoint | +|HostFilesystemReadOnly |No |Host Filesystem Read Only |Unspecified |Count |Indicates if the filesystem is readonly |Device, FSType, Host, Mountpoint | +|HostFilesystemSizeBytes |No |Host Filesystem Size In Bytes |Count |Average |Filesystem size by node |Device, FSType, Host, Mountpoint | +|HostHwmonTempCelsius |No |Host Hardware Monitor Temp |Count |Average |Hardware monitor for temperature (celsius) |Chip, Host, Sensor | +|HostHwmonTempMax |No |Host Hardware Monitor Temp Max |Count |Average |Hardware monitor for maximum temperature (celsius) |Chip, Host, Sensor | +|HostLoad1 |No |Average Load In 1 Minute |Count |Average |1 minute load average |Host | +|HostLoad15 |No |Average Load In 15 Minutes |Count |Average |15 minute load average |Host | +|HostLoad5 |No |Average load in 5 minutes |Count |Average |5 minute load average |Host | +|HostMemAvailBytes |No |Host Memory Available Bytes |Count |Average |Available memory in bytes by node |Host | +|HostMemHWCorruptedBytes |No |Total Amount of Memory In Corrupted Pages |Count |Average |Corrupted bytes in hardware by node |Host | +|HostMemTotalBytes |No |Host Memory Total Bytes |Bytes |Average |Total bytes of memory by node |Host | +|HostSpecificCPUUtilization |No |Host Specific CPU Utilization |Seconds |Average |A counter metric that counts the number of seconds the CPU has been running in a particular mode |Cpu, Host, Mode | +|IdracPowerCapacityWatts |No |IDRAC Power Capacity Watts |Unspecified |Average |Power Capacity |Host, PSU | +|IdracPowerInputWatts |No |IDRAC Power Input Watts |Unspecified |Average |Power Input |Host, PSU | +|IdracPowerOn |No |IDRAC Power On |Unspecified |Count |IDRAC Power On Status 
|Host | +|IdracPowerOutputWatts |No |IDRAC Power Output Watts |Unspecified |Average |Power Output |Host, PSU | +|IdracSensorsTemperature |No |IDRAC Sensors Temperature |Unspecified |Average |IDRAC sensor temperature |Host, Name, Units | +|NcNodeNetworkReceiveErrsTotal |No |Network Device Receive Errors |Count |Average |Total network device errors received |Hostname, Interface Name | +|NcNodeNetworkTransmitErrsTotal |No |Network Device Transmit Errors |Count |Average |Total network device errors transmitted |Hostname, Interface Name | +|NcTotalCpusPerNuma |No |Total CPUs Available to Nexus per NUMA |Count |Average |Total number of CPUs available to Nexus per NUMA |Hostname, NUMA Node | +|NcTotalWorkloadCpusAllocatedPerNuma |No |CPUs per NUMA Allocated for Nexus Kubernetes |Count |Average |Total number of CPUs per NUMA allocated for Nexus Kubernetes and Tenant Workloads |Hostname, NUMA Node | +|NcTotalWorkloadCpusAvailablePerNuma |No |CPUs per NUMA Available for Nexus Kubernetes |Count |Average |Total number of CPUs per NUMA available to Nexus Kubernetes and Tenant Workloads |Hostname, NUMA Node | +|NodeBondingActive |No |Node Bonding Active |Count |Average |Number of active interfaces per bonding interface |Master | +|NodeMemHugePagesFree |No |Node Memory Huge Pages Free |Bytes |Average |NUMA hugepages free by node |Host, Node | +|NodeMemHugePagesTotal |No |Node Memory Huge Pages Total |Bytes |Average |NUMA huge pages total by node |Host, Node | +|NodeMemNumaFree |No |Node Memory NUMA (Free Memory) |Bytes |Average |NUMA memory free |Name, Host | +|NodeMemNumaShem |No |Node Memory NUMA (Shared Memory) |Bytes |Average |NUMA shared memory |Host, Node | +|NodeMemNumaUsed |No |Node Memory NUMA (Used Memory) |Bytes |Average |NUMA memory used |Host, Node | +|NodeNetworkCarrierChanges |No |Node Network Carrier Changes |Count |Average |Node network carrier changes |Device, Host | +|NodeNetworkMtuBytes |No |Node Network Maximum Transmission Unit Bytes |Bytes |Average |Node network Maximum Transmission Unit (mtu_bytes) value of /sys/class/net/<iface> |Device, Host | +|NodeNetworkReceiveMulticastTotal |No |Node Network Received Multicast Total |Bytes |Average |Network device statistic receive_multicast |Device, Host | +|NodeNetworkReceivePackets |No |Node Network Received Packets |Count |Average |Network device statistic receive_packets |Device, Host | +|NodeNetworkSpeedBytes |No |Node Network Speed Bytes |Bytes |Average |speed_bytes value of /sys/class/net/<iface> |Device, Host | +|NodeNetworkTransmitPackets |No |Node Network Transmited Packets |Count |Average |Network device statistic transmit_packets |Device, Host | +|NodeNetworkUp |No |Node Network Up |Count |Count |Value is 1 if operstate is 'up', 0 otherwise. |Device, Host | +|NodeNvmeInfo |No |Node NVMe Info |Count |Count |Non-numeric data from /sys/class/nvme/<device>, value is always 1. 
Provides firmware, model, state and serial for a device |Device, State | +|NodeOsInfo |No |Node OS Info |Count |Count |Node OS information |Host, Name, Version | +|NodeTimexMaxErrorSeconds |No |Node Timex Max Error Seconds |Seconds |Average |Maximum time error between the local system and reference clock |Host | +|NodeTimexOffsetSeconds |No |Node Timex Offset Seconds |Seconds |Average |Time offset in between the local system and reference clock |Host | +|NodeTimexSyncStatus |No |Node Timex Sync Status |Count |Average |Is clock synchronized to a reliable server (1 = yes, 0 = no) |Host | +|NodeVmOomKill |No |Node VM Out Of Memory Kill |Count |Average |Information in /proc/vmstat pertaining to the field oom_kill |Host | +|NodeVmstatPswpIn |No |Node VM PSWP In |Count |Average |Information in /proc/vmstat pertaining to the field pswpin |Host | +|NodeVmstatPswpout |No |Node VM PSWP Out |Count |Average |Information in /proc/vmstat pertaining to the field pswpout |Host | ++## Microsoft.NetworkCloud/clusters +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|ApiserverAuditRequestsRejectedTotal |No |API Server Audit Requests Rejected Total |Count |Average |Counter of API server requests rejected due to an error in the audit logging backend |Component, Pod Name | +|ApiserverClientCertificateExpirationSecondsSum |No |API Server Client Certificate Expiration Seconds Sum |Seconds |Average |Sum of API server client certificate expiration (seconds) |Component, Pod Name | +|ApiserverStorageDataKeyGenerationFailuresTotal |No |API Server Storage Data Key Generation Failures Total |Count |Average |Total number of operations that failed Data Encryption Key (DEK) generation |Component, Pod Name | +|ApiserverTlsHandshakeErrorsTotal |No |API Server TLS Handshake Errors Total |Count |Average |Number of requests dropped with 'TLS handshake' error |Component, Pod Name | +|ContainerFsIoTimeSecondsTotal |No |Container FS I/O Time Seconds Total |Seconds |Average |Time taken for container Input/Output (I/O) operations |Device, Host | +|ContainerMemoryFailcnt |No |Container Memory Fail Count |Count |Average |Number of times a container's memory usage limit is hit |Container, Host, Namespace, Pod | +|ContainerMemoryUsageBytes |No |Container Memory Usage Bytes |Bytes |Average |Current memory usage, including all memory regardless of when it was accessed |Container, Host, Namespace, Pod | +|ContainerNetworkReceiveErrorsTotal |No |Container Network Receive Errors Total |Count |Average |Number of errors encountered while receiving bytes over the network |Interface, Namespace, Pod | +|ContainerNetworkTransmitErrorsTotal |No |Container Network Transmit Errors Total |Count |Average |Count of errors that happened while transmitting |Interface, Namespace, Pod | +|ContainerScrapeError |No |Container Scrape Error |Unspecified |Average |Indicates whether there was an error while getting container metrics |Host | +|ContainerTasksState |No |Container Tasks State |Count |Average |Number of tasks or processes in a given state (sleeping, running, stopped, uninterruptible, or waiting) in a container |Container, Host, Namespace, Pod, State | +|ControllerRuntimeReconcileErrorsTotal |No |Controller Reconcile Errors Total |Count |Average |Total number of reconciliation errors per controller |Controller, Namespace, Pod Name | +|ControllerRuntimeReconcileTotal |No |Controller Reconciliations Total |Count |Average |Total number of reconciliations 
per controller |Controller, Namespace, Pod Name | +|CorednsDnsRequestsTotal |No |CoreDNS Requests Total |Count |Average |Total number of DNS requests |Family, Pod Name, Proto, Server, Type | +|CorednsDnsResponsesTotal |No |CoreDNS Responses Total |Count |Average |Total number of DNS responses |Pod Name, Server, Rcode | +|CorednsForwardHealthcheckBrokenTotal |No |CoreDNS Forward Healthcheck Broken Total |Count |Average |Total number of times all upstreams are unhealthy |Pod Name, Namespace | +|CorednsForwardMaxConcurrentRejectsTotal |No |CoreDNS Forward Max Concurrent Rejects Total |Count |Average |Total number of rejected queries because concurrent queries were at the maximum limit |Pod Name, Namespace | +|CorednsHealthRequestFailuresTotal |No |CoreDNS Health Request Failures Total |Count |Average |The number of times the self health check failed |Pod Name | +|CorednsPanicsTotal |No |CoreDNS Panics Total |Count |Average |Total number of panics |Pod Name | +|CorednsReloadFailedTotal |No |CoreDNS Reload Failed Total |Count |Average |Total number of failed reload attempts |Pod Name, Namespace | +|EtcdDiskBackendCommitDurationSecondsSum |No |Etcd Disk Backend Commit Duration Seconds Sum |Seconds |Total |The latency distribution of commits called by the backend |Component, Pod Name, Tier | +|EtcdDiskWalFsyncDurationSecondsSum |No |Etcd Disk WAL Fsync Duration Seconds Sum |Seconds |Total |The sum of latency distributions of 'fsync' called by the write-ahead log (WAL) |Component, Pod Name, Tier | +|EtcdServerHealthFailures |No |Etcd Server Health Failures |Count |Average |Total server health failures |Pod Name | +|EtcdServerIsLeader |No |Etcd Server Is Leader |Unspecified |Count |Whether or not this member is a leader; 1 if is, 0 otherwise |Component, Pod Name, Tier | +|EtcdServerIsLearner |No |Etcd Server Is Learner |Unspecified |Count |Whether or not this member is a learner; 1 if is, 0 otherwise |Component, Pod Name, Tier | +|EtcdServerLeaderChangesSeenTotal |No |Etcd Server Leader Changes Seen Total |Count |Average |The number of leader changes seen |Component, Pod Name, Tier | +|EtcdServerProposalsAppliedTotal |No |Etcd Server Proposals Applied Total |Count |Average |The total number of consensus proposals applied |Component, Pod Name, Tier | +|EtcdServerProposalsCommittedTotal |No |Etcd Server Proposals Committed Total |Count |Average |The total number of consensus proposals committed |Component, Pod Name, Tier | +|EtcdServerProposalsFailedTotal |No |Etcd Server Proposals Failed Total |Count |Average |The total number of failed proposals |Component, Pod Name, Tier | +|EtcdServerSlowApplyTotal |No |Etcd Server Slow Apply Total |Count |Average |The total number of slow apply requests |Pod Name, Tier | +|FelixActiveLocalEndpoints |No |Felix Active Local Endpoints |Count |Average |Number of active endpoints on this host |Host | +|FelixClusterNumHostEndpoints |No |Felix Cluster Num Host Endpoints |Count |Average |Total number of host endpoints cluster-wide |Host | +|FelixClusterNumHosts |No |Felix Cluster Number of Hosts |Count |Average |Total number of Calico hosts in the cluster |Host | +|FelixClusterNumWorkloadEndpoints |No |Felix Cluster Number of Workload Endpoints |Count |Average |Total number of workload endpoints cluster-wide |Host | +|FelixIntDataplaneFailures |No |Felix Interface Dataplane Failures |Count |Average |Number of times dataplane updates failed and will be retried |Host | +|FelixIpsetErrors |No |Felix Ipset Errors |Count |Average |Number of 'ipset' command failures |Host | 
+|FelixIpsetsCalico |No |Felix Ipsets Calico |Count |Average |Number of active Calico IP sets |Host | +|FelixIptablesRestoreErrors |No |Felix IP Tables Restore Errors |Count |Average |Number of 'iptables-restore' errors |Host | +|FelixIptablesSaveErrors |No |Felix IP Tables Save Errors |Count |Average |Number of 'iptables-save' errors |Host | +|FelixResyncsStarted |No |Felix Resyncs Started |Count |Average |Number of times Felix has started resyncing with the datastore |Host | +|FelixResyncState |No |Felix Resync State |Unspecified |Average |Current datastore state |Host | +|KubeDaemonsetStatusCurrentNumberScheduled |No |Daemonsets Current Number Scheduled |Count |Average |Number of daemonsets currently scheduled |Daemonset, Namespace | +|KubeDaemonsetStatusDesiredNumberScheduled |No |Daemonsets Desired Number Scheduled |Count |Average |Number of daemonsets desired scheduled |Daemonset, Namespace | +|KubeDeploymentStatusReplicasAvailable |No |Deployment Replicas Available |Count |Average |Number of deployment replicas available |Deployment, Namespace | +|KubeDeploymentStatusReplicasReady |No |Deployment Replicas Ready |Count |Average |Number of deployment replicas ready |Deployment, Namespace | +|KubeJobStatusActive |No |Jobs Active |Count |Average |Number of jobs active |Job, Namespace | +|KubeJobStatusFailed |No |Jobs Failed |Count |Average |Number and reason of jobs failed |Job, Namespace, Reason | +|KubeJobStatusSucceeded |No |Jobs Succeeded |Count |Average |Number of jobs succeeded |Job, Namespace | +|KubeletRunningContainers |No |Kubelet Running Containers |Count |Average |Number of containers currently running |Container State, Host | +|KubeletRunningPods |No |Kubelet Running Pods |Count |Average |Number of pods running on the node |Host | +|KubeletRuntimeOperationsErrorsTotal |No |Kubelet Runtime Operations Errors Total |Count |Average |Cumulative number of runtime operation errors by operation type |Host, Operation Type | +|KubeletStartedPodsErrorsTotal |No |Kubelet Started Pods Errors Total |Count |Average |Cumulative number of errors when starting pods |Host | +|KubeletVolumeStatsAvailableBytes |No |Volume Available Bytes |Bytes |Average |Number of available bytes in the volume |Host, Namespace, Persistent Volume Claim | +|KubeletVolumeStatsCapacityBytes |No |Volume Capacity Bytes |Bytes |Average |Capacity (in bytes) of the volume |Host, Namespace, Persistent Volume Claim | +|KubeletVolumeStatsUsedBytes |No |Volume Used Bytes |Bytes |Average |Number of used bytes in the volume |Host, Namespace, Persistent Volume Claim | +|KubeNodeStatusAllocatable |No |Node Resources Allocatable |Count |Average |Node resources allocatable for pods |Node, Resource, Unit | +|KubeNodeStatusCapacity |No |Node Resources Capacity |Count |Average |Total amount of node resources available |Node, Resource, Unit | +|KubeNodeStatusCondition |No |Node Status Condition |Count |Average |The condition of a node |Condition, Node, Status | +|KubePodContainerResourceLimits |No |Container Resources Limits |Count |Average |The container's resources limits |Container, Namespace, Node, Pod, Resource, Unit | +|KubePodContainerResourceRequests |No |Container Resources Requests |Count |Average |The container's resources requested |Container, Namespace, Node, Pod, Resource, Unit | +|KubePodContainerStateStarted |No |Container State Started |Count |Average |Unix timestamp start time of a container |Container, Namespace, Pod | +|KubePodContainerStatusLastTerminatedReason |No |Container Status Last Terminated Reason |Count 
|Average |The reason of a container's last terminated status |Container, Namespace, Pod, Reason | +|KubePodContainerStatusReady |No |Container Status Ready |Count |Average |Describes whether the container's readiness check succeeded |Container, Namespace, Pod | +|KubePodContainerStatusRestartsTotal |No |Container Restarts |Count |Average |The number of container restarts |Container, Namespace, Pod | +|KubePodContainerStatusRunning |No |Container Status Running |Count |Average |The number of containers with a status of 'running' |Container, Namespace, Pod | +|KubePodContainerStatusTerminated |No |Container Status Terminated |Count |Average |The number of containers with a status of 'terminated' |Container, Namespace, Pod | +|KubePodContainerStatusTerminatedReason |No |Container Status Terminated Reason |Count |Average |The number and reason of containers with a status of 'terminated' |Container, Namespace, Pod, Reason | +|KubePodContainerStatusWaiting |No |Container Status Waiting |Count |Average |The number of containers with a status of 'waiting' |Container, Namespace, Pod | +|KubePodContainerStatusWaitingReason |No |Container Status Waiting Reason |Count |Average |The number and reason of containers with a status of 'waiting' |Container, Namespace, Pod, Reason | +|KubePodDeletionTimestamp |No |Pod Deletion Timestamp |Count |Average |The timestamp of the pod's deletion |Namespace, Pod | +|KubePodInitContainerStatusReady |No |Pod Init Container Ready |Count |Average |The number of ready pod init containers |Namespace, Container, Pod | +|KubePodInitContainerStatusRestartsTotal |No |Pod Init Container Restarts |Count |Average |The number of pod init containers restarts |Namespace, Container, Pod | +|KubePodInitContainerStatusRunning |No |Pod Init Container Running |Count |Average |The number of running pod init containers |Namespace, Container, Pod | +|KubePodInitContainerStatusTerminated |No |Pod Init Container Terminated |Count |Average |The number of terminated pod init containers |Namespace, Container, Pod | +|KubePodInitContainerStatusTerminatedReason |No |Pod Init Container Terminated Reason |Count |Average |The number of pod init containers with terminated reason |Namespace, Container, Pod, Reason | +|KubePodInitContainerStatusWaiting |No |Pod Init Container Waiting |Count |Average |The number of pod init containers waiting |Namespace, Container, Pod | +|KubePodInitContainerStatusWaitingReason |No |Pod Init Container Waiting Reason |Count |Average |The reason the pod init container is waiting |Namespace, Container, Pod, Reason | +|KubePodStatusPhase |No |Pod Status Phase |Count |Average |The pod status phase |Namespace, Pod, Phase | +|KubePodStatusReady |No |Pod Ready State |Count |Average |Signifies if the pod is in ready state |Namespace, Pod | +|KubePodStatusReason |No |Pod Status Reason |Count |Average |NodeAffinity |Namespace, Pod, Reason | +|KubeStatefulsetReplicas |No |Statefulset Desired Replicas Number |Count |Average |The desired number of statefulset replicas |Namespace, Statefulset | +|KubeStatefulsetStatusReplicas |No |Statefulset Replicas Number |Count |Average |The number of replicas per statefulset |Namespace, Statefulset | +|KubevirtInfo |No |Kubevirt Info |Unspecified |Average |Kubevirt version information |Kube Version | +|KubevirtVirtControllerLeading |No |Kubevirt Virt Controller Leading |Unspecified |Average |Indication for an operating virt-controller |Pod Name | +|KubevirtVirtControllerReady |No |Kubevirt Virt Controller Ready |Unspecified |Average |Indication 
for a virt-controller that is ready to take the lead |Pod Name | +|KubevirtVirtOperatorReady |No |Kubevirt Virt Operator Ready |Unspecified |Average |Indication for a virt operator being ready |Pod Name | +|KubevirtVmiMemoryActualBalloonBytes |No |Kubevirt VMI Memory Actual BalloonBytes |Bytes |Average |Current balloon size (in bytes) |Name, Node | +|KubevirtVmiMemoryAvailableBytes |No |Kubevirt VMI Memory Available Bytes |Bytes |Average |Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages |Name, Node | +|KubevirtVmiMemoryDomainBytesTotal |No |Kubevirt VMI Memory Domain Bytes Total |Bytes |Average |The amount of memory (in bytes) allocated to the domain. The memory value in domain XML file |Node | +|KubevirtVmiMemorySwapInTrafficBytesTotal |No |Kubevirt VMI Memory Swap In Traffic Bytes Total |Bytes |Average |The total amount of data read from swap space of the guest (in bytes) |Name, Node | +|KubevirtVmiMemorySwapOutTrafficBytesTotal |No |Kubevirt VMI Memory Swap Out Traffic Bytes Total |Bytes |Average |The total amount of memory written out to swap space of the guest (in bytes) |Name, Node | +|KubevirtVmiMemoryUnusedBytes |No |Kubevirt VMI Memory Unused Bytes |Bytes |Average |The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free |Name, Node | +|KubevirtVmiNetworkReceivePacketsTotal |No |Kubevirt VMI Network Receive Packets Total |Bytes |Average |Total network traffic received packets |Interface, Name, Node | +|KubevirtVmiNetworkTransmitPacketsDroppedTotal |No |Kubevirt VMI Network Transmit Packets Dropped Total |Bytes |Average |The total number of transmit packets dropped on virtual NIC (vNIC) interfaces |Interface, Name, Node | +|KubevirtVmiNetworkTransmitPacketsTotal |No |Kubevirt VMI Network Transmit Packets Total |Bytes |Average |Total network traffic transmitted packets |Interface, Name, Node | +|KubevirtVmiOutdatedCount |No |Kubevirt VMI Outdated Count |Count |Average |Indication for the total number of VirtualMachineInstance (VMI) workloads that are not running within the most up-to-date version of the virt-launcher environment |Name | +|KubevirtVmiPhaseCount |No |Kubevirt VMI Phase Count |Count |Average |Sum of VirtualMachineInstances (VMIs) per phase and node |Node, Phase, Workload | +|KubevirtVmiStorageIopsReadTotal |No |Kubevirt VMI Storage IOPS Read Total |Count |Average |Total number of Input/Output (I/O) read operations |Drive, Name, Node | +|KubevirtVmiStorageIopsWriteTotal |No |Kubevirt VMI Storage IOPS Write Total |Count |Average |Total number of Input/Output (I/O) write operations |Drive, Name, Node | +|KubevirtVmiStorageReadTimesMsTotal |No |Kubevirt VMI Storage Read Times Total |Milliseconds |Average |Total time in milliseconds (ms) spent on read operations |Drive, Name, Node | +|KubevirtVmiStorageWriteTimesMsTotal |No |Kubevirt VMI Storage Write Times Total |Milliseconds |Average |Total time in milliseconds (ms) spent on write operations |Drive, Name, Node | +|NcVmiCpuAffinity |No |CPU Pinning Map |Count |Average |Pinning map of virtual CPUs (vCPUs) to CPUs |CPU, NUMA Node, VMI Namespace, VMI Node, VMI Name | +|TyphaConnectionsAccepted |No |Typha Connections Accepted |Count |Average |Total number of connections accepted over time |Pod Name | +|TyphaConnectionsDropped |No |Typha Connections Dropped |Count |Average |Total number of connections dropped due to 
rebalancing |Pod Name | +|TyphaPingLatencyCount |No |Typha Ping Latency |Count |Average |Round-trip ping/pong latency to client. Typha's protocol includes a regular ping/pong keepalive to verify that the connection is still up |Pod Name | ++## Microsoft.NetworkCloud/storageAppliances +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|PurefaAlertsTotal |No |Nexus Storage Alerts Total |Count |Average |Number of alert events |Severity | +|PurefaArrayPerformanceAvgBlockBytes |No |Nexus Storage Array Avg Block Bytes |Bytes |Average |Average block size |Dimension | +|PurefaArrayPerformanceBandwidthBytes |No |Nexus Storage Array Bandwidth Bytes |Bytes |Average |Array throughput in bytes per second |Dimension | +|PurefaArrayPerformanceIOPS |No |Nexus Storage Array IOPS |Count |Average |Storage array IOPS |Dimension | +|PurefaArrayPerformanceLatencyUsec |No |Nexus Storage Array Latency (Microseconds) |MilliSeconds |Average |Storage array latency in microseconds |Dimension | +|PurefaArrayPerformanceQdepth |No |Nexus Storage Array Queue Depth |Bytes |Average |Storage array queue depth |No Dimensions | +|PurefaArraySpaceCapacityBytes |No |Nexus Storage Array Capacity Bytes |Bytes |Average |Storage array overall space capacity |No Dimensions | +|PurefaArraySpaceDatareductionRatio |No |Nexus Storage Array Space Datareduction Ratio |Percent |Average |Storage array overall data reduction |No Dimensions | +|PurefaArraySpaceProvisionedBytes |No |Nexus Storage Array Space Provisioned Bytes |Bytes |Average |Storage array overall provisioned space |No Dimensions | +|PurefaArraySpaceUsedBytes |No |Nexus Storage Array Space Used Bytes |Bytes |Average |Storage Array overall used space |Dimension | +|PurefaHardwareComponentHealth |No |Nexus Storage Hardware Component Health |Count |Average |Storage array hardware component health status |Component, Controller, Index | +|PurefaHardwarePowerVolts |No |Nexus Storage Hardware Power Volts |Unspecified |Average |Storage array hardware power supply voltage |Power Supply | +|PurefaHardwareTemperatureCelsius |No |Nexus Storage Hardware Temperature Celsius |Unspecified |Average |Storage array hardware temperature sensors |Controller, Sensor | +|PurefaHostPerformanceBandwidthBytes |No |Nexus Storage Host Bandwidth Bytes |Bytes |Average |Storage array host bandwidth in bytes per second |Dimension, Host | +|PurefaHostPerformanceIOPS |No |Nexus Storage Host IOPS |Count |Average |Storage array host IOPS |Dimension, Host | +|PurefaHostPerformanceLatencyUsec |No |Nexus Storage Host Latency (Microseconds) |MilliSeconds |Average |Storage array host latency in microseconds |Dimension, Host | +|PurefaHostSpaceBytes |No |Nexus Storage Host Space Bytes |Bytes |Average |Storage array host space in bytes |Dimension, Host | +|PurefaHostSpaceDatareductionRatio |No |Nexus Storage Host Space Datareduction Ratio |Percent |Average |Storage array host volumes data reduction ratio |Host | +|PurefaHostSpaceSizeBytes |No |Nexus Storage Host Space Size Bytes |Bytes |Average |Storage array host volumes size |Host | +|PurefaInfo |No |Nexus Storage Info |Unspecified |Average |Storage array system information |Array Name | +|PurefaVolumePerformanceIOPS |No |Nexus Storage Volume Performance IOPS |Count |Average |Storage array volume IOPS |Dimension, Volume | +|PurefaVolumePerformanceLatencyUsec |No |Nexus Storage Volume Performance Latency (Microseconds) |MilliSeconds |Average |Storage array volume 
latency in microseconds |Dimension, Volume | +|PurefaVolumePerformanceThroughputBytes |No |Nexus Storage Volume Performance Throughput Bytes |Bytes |Average |Storage array volume throughput |Dimension, Volume | +|PurefaVolumeSpaceBytes |No |Nexus Storage Volume Space Bytes |Bytes |Average |Storage array volume space in bytes |Dimension, Volume | +|PurefaVolumeSpaceDatareductionRatio |No |Nexus Storage Volume Space Datareduction Ratio |Percent |Average |Storage array overall data reduction |Volume | +|PurefaVolumeSpaceSizeBytes |No |Nexus Storage Volume Space Size Bytes |Bytes |Average |Storage array volumes size |Volume | + ## Microsoft.NetworkFunction/azureTrafficCollectors <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |SubmissionsOutstanding |No |Outstanding Submissions |Count |Average |The average number of outstanding submissions that are queued for processing. |Region | |SubmissionsSucceeded |No |Successful Submissions / Hr |Count |Maximum |The number of successful submissions / Hr. |Region | +## Microsoft.SecurityDetonation/SecurityDetonationChambers +<!-- Data source : arm--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|% Processor Time |Yes |% CPU |Percent |Average |Percent CPU utilization |No Dimensions | + ## Microsoft.ServiceBus/Namespaces <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |UserErrors |No |User Errors. |Count |Total |User Errors for Microsoft.ServiceBus. |EntityName, OperationResult | |WSXNS |No |Memory Usage (Deprecated) |Percent |Maximum |Service bus premium namespace memory usage metric. This metric is deprecated. Please use the Memory Usage (NamespaceMemoryUsage) metric instead. 
|Replica | +## Microsoft.ServiceNetworking/trafficControllers +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|BackendConnectionTimeouts |Yes |Backend Connection Timeouts |Count |Total |Count of requests that timed out waiting for a response from the backend target (includes all retry requests initiated from Traffic Controller to the backend target) |Microsoft.regionName, BackendService | +|BackendHealthyTargets |Yes |Backend Healthy Targets |Count |Average |Count of healthy backend targets |Microsoft.regionName, BackendService | +|BackendHTTPResponseStatus |Yes |Backend HTTP Response Status |Count |Total |HTTP response status returned by the backend target to Traffic Controller |Microsoft.regionName, BackendService, HttpResponseCode | +|ClientConnectionIdleTimeouts |Yes |Total Connection Idle Timeouts |Count |Total |Count of connections closed, between client and Traffic Controller frontend, due to exceeding idle timeout |Microsoft.regionName, Frontend | +|ConnectionTimeouts |Yes |Connection Timeouts |Count |Total |Count of connections closed due to timeout between clients and Traffic Controller |Microsoft.regionName, Frontend | +|HTTPResponseStatus |Yes |HTTP Response Status |Count |Total |HTTP response status returned by Traffic Controller |Microsoft.regionName, Frontend, HttpResponseCode | +|TotalRequests |Yes |Total Requests |Count |Total |Count of requests Traffic Controller has served |Microsoft.regionName, Frontend | + ## Microsoft.SignalRService/SignalR <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |dwu_consumption_percent |Yes |DWU percentage |Percent |Maximum |DWU percentage. Applies only to data warehouses. |No Dimensions | |dwu_limit |Yes |DWU limit |Count |Maximum |DWU limit. Applies only to data warehouses. |No Dimensions | |dwu_used |Yes |DWU used |Count |Maximum |DWU used. Applies only to data warehouses. |No Dimensions |+|free_amount_consumed |Yes |Free amount consumed |Count |Maximum |Free amount of vCore seconds consumed this month. Applies only to free database offer. |No Dimensions | +|free_amount_remaining |Yes |Free amount remaining |Count |Minimum |Free amount of vCore seconds remaining this month. Applies only to free database offer. |No Dimensions | |full_backup_size_bytes |Yes |Full backup storage size |Bytes |Maximum |Cumulative full backup storage size. Applies to vCore-based databases. Not applicable to Hyperscale databases. |No Dimensions | |ledger_digest_upload_failed |Yes |Failed Ledger Digest Uploads |Count |Count |Ledger digests that failed to be uploaded. |No Dimensions | |ledger_digest_upload_success |Yes |Successful Ledger Digest Uploads |Count |Count |Ledger digests that were successfully uploaded. |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |workers_percent |Yes |Workers percentage |Percent |Average |Workers percentage |No Dimensions | |xtp_storage_percent |Yes |In-Memory OLTP storage percent |Percent |Average |In-Memory OLTP storage percent. 
Not applicable to hyperscale |No Dimensions | +## Microsoft.Sql/servers/jobAgents +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|elastic_jobs_failed |Yes |Elastic Jobs Executions Failed |Count |Total |Number of job executions that failed while trying to execute on target |No Dimensions | +|elastic_jobs_successful |Yes |Elastic Jobs Executions Successful |Count |Total |Number of job executions that were able to successfully execute on target |No Dimensions | +|elastic_jobs_timeout |Yes |Elastic Jobs Executions Timed Out |Count |Total |Number of job executions that expired before execution was able to finish on target. |No Dimensions | + ## Microsoft.Storage/storageAccounts <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |FileReads |Yes |File Reads |BytesPerSecond |Average |Number of bytes per second read from a file. |SourceFile, Rank, FileType | |FileUpdates |Yes |File Updates |CountPerSecond |Average |Number of directory updates and metadata operations per second. |SourceFile, Rank, FileType | |FileWrites |Yes |File Writes |BytesPerSecond |Average |Number of bytes per second written to a file. |SourceFile, Rank, FileType |+|StorageTargetAccessErrors |Yes |Storage Target Access Errors Received |Count |Total |The rate of access error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_ACCES). |StorageTarget | |StorageTargetAsyncWriteThroughput |Yes |StorageTarget Asynchronous Write Throughput |BytesPerSecond |Average |The rate the Cache asynchronously writes data to a particular StorageTarget. These are opportunistic writes that do not cause clients to block. |StorageTarget | |StorageTargetBlocksRecycled |Yes |Storage Target Blocks Recycled |Count |Average |Total number of 16k cache blocks recycled (freed) per Storage Target. |StorageTarget |+|StorageTargetFileTooLargeErrors |Yes |Storage Target File Too Large Errors Received |Count |Total |The rate of file too large error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_FBIG). |StorageTarget | |StorageTargetFillThroughput |Yes |StorageTarget Fill Throughput |BytesPerSecond |Average |The rate the Cache reads data from the StorageTarget to handle a cache miss. |StorageTarget |+|StorageTargetFlushFailureErrors |Yes |Storage Target Total Flush Failures |Count |Total |The rate of file flush request failures reported by the writeback state machine for a specific StorageTarget. |StorageTarget | |StorageTargetFreeReadSpace |Yes |Storage Target Free Read Space |Bytes |Average |Read space available for caching files associated with a storage target. |StorageTarget | |StorageTargetFreeWriteSpace |Yes |Storage Target Free Write Space |Bytes |Average |Write space available for changed files associated with a storage target. |StorageTarget | |StorageTargetHealth |Yes |Storage Target Health |Count |Average |Boolean results of connectivity test between the Cache and Storage Targets. |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |StorageTargetLatency |Yes |StorageTarget Latency |MilliSeconds |Average |The average round trip latency of all the file operations the Cache sends to a partricular StorageTarget. 
|StorageTarget | |StorageTargetMetadataReadIOPS |Yes |StorageTarget Metadata Read IOPS |CountPerSecond |Average |The rate of file operations that do not modify persistent state, and excluding the read operation, that the Cache sends to a particular StorageTarget. |StorageTarget | |StorageTargetMetadataWriteIOPS |Yes |StorageTarget Metadata Write IOPS |CountPerSecond |Average |The rate of file operations that do modify persistent state and excluding the write operation, that the Cache sends to a particular StorageTarget. |StorageTarget |+|StorageTargetNoSpaceErrors |Yes |Storage Target No Space Errors Received |Count |Total |The rate of no space available error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_NOSPC). |StorageTarget | +|StorageTargetPermissionErrors |Yes |Storage Target Permission Errors Received |Count |Total |The rate of permission error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_PERM). |StorageTarget | +|StorageTargetQuotaLimitErrors |Yes |Storage Target Quota Limit Errors Received |Count |Total |The rate of quota limit error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_DQUOT). |StorageTarget | |StorageTargetReadAheadThroughput |Yes |StorageTarget Read Ahead Throughput |BytesPerSecond |Average |The rate the Cache opportunisticly reads data from the StorageTarget. |StorageTarget | |StorageTargetReadIOPS |Yes |StorageTarget Read IOPS |CountPerSecond |Average |The rate of file read operations the Cache sends to a particular StorageTarget. |StorageTarget |+|StorageTargetReadOnlyErrors |Yes |Storage Target Read-Only Filesystem Errors Received |Count |Total |The rate of read-only filesystem error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_ROFS). |StorageTarget | |StorageTargetRecycleRate |Yes |Storage Target Recycle Rate |BytesPerSecond |Average |Cache space recycle rate associated with a storage target in the HPC Cache. This is the rate at which existing data is cleared from the cache to make room for new data. |StorageTarget |+|StorageTargetRequestTooSmallErrors |Yes |Storage Target Request Too Small Errors Received |Count |Total |The rate of request too small error responses received by the cache from a specific StorageTarget. For more details, see https://www.rfc-editor.org/rfc/rfc1813#section-2.6 (NFS3ERR_TOOSMALL). |StorageTarget | +|StorageTargetRetryableFlushErrors |Yes |Storage Target Retryable Flush Request Errors |Count |Total |The rate of retryable file flush errors reported by the writeback state machine for a specific StorageTarget. |StorageTarget | |StorageTargetSpaceAllocation |Yes |Storage Target Space Allocation |Bytes |Average |Total space (read and write) allocated for a storage target. |StorageTarget | |StorageTargetSyncWriteThroughput |Yes |StorageTarget Synchronous Write Throughput |BytesPerSecond |Average |The rate the Cache synchronously writes data to a particular StorageTarget. These are writes that do cause clients to block. |StorageTarget |+|StorageTargetTotalCacheOps |Yes |Storage Target Total Cache Ops |Count |Total |The rate of operations the cache is servicing for the namespace represented by a specific StorageTarget. 
|StorageTarget | |StorageTargetTotalReadSpace |Yes |Storage Target Total Read Space |Bytes |Average |Total read space allocated for caching files associated with a storage target. |StorageTarget | |StorageTargetTotalReadThroughput |Yes |StorageTarget Total Read Throughput |BytesPerSecond |Average |The total rate that the Cache reads data from a particular StorageTarget. |StorageTarget | |StorageTargetTotalWriteSpace |Yes |Storage Target Total Write Space |Bytes |Average |Total write space allocated for changed files associated with a storage target. |StorageTarget | |StorageTargetTotalWriteThroughput |Yes |StorageTarget Total Write Throughput |BytesPerSecond |Average |The total rate that the Cache writes data to a particular StorageTarget. |StorageTarget |+|StorageTargetUnrecoverableFlushErrors |Yes |Storage Target Uncoverable Flush Request Errors |Count |Total |The rate of unrecoverable file flush errors reported by the writeback state machine for a specific StorageTarget. |StorageTarget | +|StorageTargetUpdateFoundAsyncCacheOps |Yes |Storage Target Update Found Asynchronous Verification Cache Ops |Count |Total |The rate of file updates discovered by asynchronous verification operations sent by the cache to a specific StorageTarget. |StorageTarget | +|StorageTargetUpdateFoundSyncCacheOps |Yes |Storage Target Update Found Synchronous Verification Cache Ops |Count |Total |The rate of file updates discovered by synchronous verification operations sent by the cache to a specific StorageTarget. |StorageTarget | |StorageTargetUsedReadSpace |Yes |Storage Target Used Read Space |Bytes |Average |Read space used by cached files associated with a storage target. |StorageTarget | |StorageTargetUsedWriteSpace |Yes |Storage Target Used Write Space |Bytes |Average |Write space used by changed files associated with a storage target. |StorageTarget |+|StorageTargetVerificationAsyncCacheOps |Yes |Storage Target Asynchronous Verification Cache Ops |Count |Total |The rate of asynchronous verification operations sent by the cache to a specific StorageTarget. |StorageTarget | +|StorageTargetVerificationSyncCacheOps |Yes |Storage Target Synchronous Verification Cache Ops |Count |Total |The rate of synchronous verification operations sent by the cache to a specific StorageTarget. |StorageTarget | |StorageTargetWriteIOPS |Yes |StorageTarget Write IOPS |Count |Average |The rate of the file write operations the Cache sends to a particular StorageTarget. |StorageTarget | |TotalBlocksRecycled |Yes |Total Blocks Recycled |Count |Average |Total number of 16k cache blocks recycled (freed) for the HPC Cache. |No Dimensions | |TotalFreeReadSpace |Yes |Free Read Space |Bytes |Average |Total space available for caching read files. |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |BigDataPoolApplicationsActive |No |Active Apache Spark applications |Count |Maximum |Total Active Apache Spark Pool Applications |JobState | |BigDataPoolApplicationsEnded |No |Ended Apache Spark applications |Count |Total |Count of Apache Spark pool applications ended |JobType, JobResult | +## Microsoft.Synapse/workspaces/kustoPools +<!-- Data source : naam--> ++|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions| +|||||||| +|BatchBlobCount |Yes |Batch Blob Count |Count |Average |Number of data sources in an aggregated batch for ingestion. 
|Database | +|BatchDuration |Yes |Batch Duration |Seconds |Average |The duration of the aggregation phase in the ingestion flow. |Database | +|BatchesProcessed |Yes |Batches Processed |Count |Total |Number of batches aggregated for ingestion. Batching Type: whether the batch reached batching time, data size or number of files limit set by batching policy |Database, SealReason | +|BatchSize |Yes |Batch Size |Bytes |Average |Uncompressed expected data size in an aggregated batch for ingestion. |Database | +|BlobsDropped |Yes |Blobs Dropped |Count |Total |Number of blobs permanently rejected by a component. |Database, ComponentType, ComponentName | +|BlobsProcessed |Yes |Blobs Processed |Count |Total |Number of blobs processed by a component. |Database, ComponentType, ComponentName | +|BlobsReceived |Yes |Blobs Received |Count |Total |Number of blobs received from input stream by a component. |Database, ComponentType, ComponentName | +|CacheUtilization |Yes |Cache utilization (deprecated) |Percent |Average |Utilization level in the cluster scope. The metric is deprecated and presented for backward compatibility only, you should use the 'Cache utilization factor' metric instead. |No Dimensions | +|CacheUtilizationFactor |Yes |Cache utilization factor |Percent |Average |Percentage of utilized disk space dedicated for hot cache in the cluster. 100% means that the disk space assigned to hot data is optimally utilized. No action is needed in terms of the cache size. More than 100% means that the cluster's disk space is not large enough to accommodate the hot data, as defined by your caching policies. To ensure that sufficient space is available for all the hot data, the amount of hot data needs to be reduced or the cluster needs to be scaled out. Enabling auto scale is recommended. |No Dimensions | +|ContinuousExportMaxLatenessMinutes |Yes |Continuous Export Max Lateness |Count |Maximum |The lateness (in minutes) reported by the continuous export jobs in the cluster |No Dimensions | +|ContinuousExportNumOfRecordsExported |Yes |Continuous export - num of exported records |Count |Total |Number of records exported, fired for every storage artifact written during the export operation |ContinuousExportName, Database | +|ContinuousExportPendingCount |Yes |Continuous Export Pending Count |Count |Maximum |The number of pending continuous export jobs ready for execution |No Dimensions | +|ContinuousExportResult |Yes |Continuous Export Result |Count |Count |Indicates whether Continuous Export succeeded or failed |ContinuousExportName, Result, Database | +|CPU |Yes |CPU |Percent |Average |CPU utilization level |No Dimensions | +|DiscoveryLatency |Yes |Discovery Latency |Seconds |Average |Reported by data connections (if exist). Time in seconds from when a message is enqueued or event is created until it is discovered by data connection. This time is not included in the Azure Data Explorer total ingestion duration. |ComponentType, ComponentName | +|EventsDropped |Yes |Events Dropped |Count |Total |Number of events dropped permanently by data connection. An Ingestion result metric with a failure reason will be sent. 
|ComponentType, ComponentName | +|EventsProcessed |Yes |Events Processed |Count |Total |Number of events processed by the cluster |ComponentType, ComponentName | +|EventsProcessedForEventHubs |Yes |Events Processed (for Event/IoT Hubs) |Count |Total |Number of events processed by the cluster when ingesting from Event/IoT Hub |EventStatus | +|EventsReceived |Yes |Events Received |Count |Total |Number of events received by data connection. |ComponentType, ComponentName | +|ExportUtilization |Yes |Export Utilization |Percent |Maximum |Export utilization |No Dimensions | +|FollowerLatency |Yes |FollowerLatency |MilliSeconds |Average |The follower databases synchronize changes in the leader databases. Because of the synchronization, there's a data lag of a few seconds to a few minutes in data availability.This metric measures the length of the time lag. The time lag depends on the overall size of the leader database metadata.This is a cluster level metrics: the followers catch metadata of all databases that are followed. This metric represents the latency of the process. |State, RoleInstance | +|IngestionLatencyInSeconds |Yes |Ingestion Latency |Seconds |Average |Latency of data ingested, from the time the data was received in the cluster until it's ready for query. The ingestion latency period depends on the ingestion scenario. |No Dimensions | +|IngestionResult |Yes |Ingestion result |Count |Total |Total number of sources that either failed or succeeded to be ingested. Splitting the metric by status, you can get detailed information about the status of the ingestion operations. |IngestionResultDetails, FailureKind | +|IngestionUtilization |Yes |Ingestion utilization |Percent |Average |Ratio of used ingestion slots in the cluster |No Dimensions | +|IngestionVolumeInMB |Yes |Ingestion Volume |Bytes |Total |Overall volume of ingested data to the cluster |Database | +|InstanceCount |Yes |Instance Count |Count |Average |Total instance count |No Dimensions | +|KeepAlive |Yes |Keep alive |Count |Average |Sanity check indicates the cluster responds to queries |No Dimensions | +|MaterializedViewAgeMinutes |Yes |Materialized View Age |Count |Average |The materialized view age in minutes |Database, MaterializedViewName | +|MaterializedViewAgeSeconds |Yes |Materialized View Age |Seconds |Average |The materialized view age in seconds |Database, MaterializedViewName | +|MaterializedViewDataLoss |Yes |Materialized View Data Loss |Count |Maximum |Indicates potential data loss in materialized view |Database, MaterializedViewName, Kind | +|MaterializedViewExtentsRebuild |Yes |Materialized View Extents Rebuild |Count |Average |Number of extents rebuild |Database, MaterializedViewName | +|MaterializedViewHealth |Yes |Materialized View Health |Count |Average |The health of the materialized view (1 for healthy, 0 for non-healthy) |Database, MaterializedViewName | +|MaterializedViewRecordsInDelta |Yes |Materialized View Records In Delta |Count |Average |The number of records in the non-materialized part of the view |Database, MaterializedViewName | +|MaterializedViewResult |Yes |Materialized View Result |Count |Average |The result of the materialization process |Database, MaterializedViewName, Result | +|QueryDuration |Yes |Query duration |MilliSeconds |Average |Queries duration in seconds |QueryStatus | +|QueryResult |No |Query Result |Count |Count |Total number of queries. |QueryStatus | +|QueueLength |Yes |Queue Length |Count |Average |Number of pending messages in a component's queue. 
|ComponentType | +|QueueOldestMessage |Yes |Queue Oldest Message |Count |Average |Time in seconds from when the oldest message in queue was inserted. |ComponentType | +|ReceivedDataSizeBytes |Yes |Received Data Size Bytes |Bytes |Average |Size of data received by data connection. This is the size of the data stream, or of raw data size if provided. |ComponentType, ComponentName | +|StageLatency |Yes |Stage Latency |Seconds |Average |Cumulative time from when a message is discovered until it is received by the reporting component for processing (discovery time is set when message is enqueued for ingestion queue, or when discovered by data connection). |Database, ComponentType | +|StreamingIngestDataRate |Yes |Streaming Ingest Data Rate |Bytes |Average |Streaming ingest data rate |No Dimensions | +|StreamingIngestDuration |Yes |Streaming Ingest Duration |MilliSeconds |Average |Streaming ingest duration in milliseconds |No Dimensions | +|StreamingIngestResults |Yes |Streaming Ingest Result |Count |Count |Streaming ingest result |Result | +|TotalNumberOfConcurrentQueries |Yes |Total number of concurrent queries |Count |Maximum |Total number of concurrent queries |No Dimensions | +|TotalNumberOfExtents |Yes |Total number of extents |Count |Average |Total number of data extents |No Dimensions | +|TotalNumberOfThrottledCommands |Yes |Total number of throttled commands |Count |Total |Total number of throttled commands |CommandType | +|TotalNumberOfThrottledQueries |Yes |Total number of throttled queries |Count |Maximum |Total number of throttled queries |No Dimensions | +|WeakConsistencyLatency |Yes |Weak consistency latency |Seconds |Average |The max latency between the previous metadata sync and the next one (in DB/node scope) |Database, RoleInstance | + ## Microsoft.Synapse/workspaces/scopePools <!-- Data source : naam--> This latest update adds a new column and reorders the metrics to be alphabetical |DirectoriesCreatedCount |Yes |Directories Created Count |Count |Total |This provides a running view of how many directories have been created as part of a migration. |No Dimensions | |FileMigrationCount |Yes |Files Migration Count |Count |Total |This provides a running total of how many files have been migrated. |No Dimensions | |InitialScanDataMigratedInBytes |Yes |Initial Scan Data Migrated in Bytes |Bytes |Total |This provides the view of the total bytes which have been transferred in a new migrator as a result of the initial scan of the On-Premises file system. Any data which is added to the migration after the initial scan migration, is NOT included in this metric. |No Dimensions |-|LiveDataMigratedInBytes |Yes |Live Data Migrated in Bytes |Count |Total |Provides a running total of LiveData which has been changed due to Client activity, since the migration started. |No Dimensions | +|LiveDataMigratedInBytes |Yes |Live Data Migrated in Bytes |Bytes |Total |Provides a running total of LiveData which has been changed due to Client activity, since the migration started. |No Dimensions | |MigratorCPULoad |Yes |Migrator CPU Load |Percent |Average |CPU consumption by the migrator process. |No Dimensions | |NumberOfExcludedPaths |Yes |Number of Excluded Paths |Count |Total |Provides a running count of the paths which have been excluded from the migration due to Exclusion Rules. |No Dimensions | |NumberOfFailedPaths |Yes |Number of Failed Paths |Count |Total |A count of which paths have failed to migrate. 
|No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical |DirectoriesCreatedCount |Yes |Directories Created Count |Count |Total |This provides a running view of how many directories have been created as part of a migration. |No Dimensions | |FileMigrationCount |Yes |Files Migration Count |Count |Total |This provides a running total of how many files have been migrated. |No Dimensions | |InitialScanDataMigratedInBytes |Yes |Initial Scan Data Migrated in Bytes |Bytes |Total |This provides the view of the total bytes which have been transferred in a new migrator as a result of the initial scan of the On-Premises file system. Any data which is added to the migration after the initial scan migration, is NOT included in this metric. |No Dimensions |-|LiveDataMigratedInBytes |Yes |Live Data Migrated in Bytes |Count |Total |Provides a running total of LiveData which has been changed due to Client activity, since the migration started. |No Dimensions | +|LiveDataMigratedInBytes |Yes |Live Data Migrated in Bytes |Bytes |Total |Provides a running total of LiveData which has been changed due to Client activity, since the migration started. |No Dimensions | |NumberOfExcludedPaths |Yes |Number of Excluded Paths |Count |Total |Provides a running count of the paths which have been excluded from the migration due to Exclusion Rules. |No Dimensions | |NumberOfFailedPaths |Yes |Number of Failed Paths |Count |Total |A count of which paths have failed to migrate. |No Dimensions | |TotalBytesTransferred |Yes |Total Bytes Transferred |Bytes |Total |This metric covers how many bytes have been transferred (does not reflect how many have successfully migrated, only how much has been transferred). |No Dimensions | This latest update adds a new column and reorders the metrics to be alphabetical - [Read about metrics in Azure Monitor](../data-platform.md) - [Create alerts on metrics](../alerts/alerts-overview.md) - [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)---<!--Gen Date: Sun Jun 04 2023 10:14:09 GMT+0300 (Israel Daylight Time)--> +++<!--Gen Date: Wed Jun 28 2023 02:42:02 GMT+0800 (China Standard Time)--> |
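The platform metrics added above can be queried like any other Azure Monitor metric. Below is a minimal Azure CLI sketch, assuming a placeholder Traffic Controller resource ID and using the `TotalRequests` metric from the Microsoft.ServiceNetworking/trafficControllers table.

```bash
# Placeholder resource ID; substitute your own Traffic Controller resource.
RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ServiceNetworking/trafficControllers/<name>"

# Pull the per-minute total of the TotalRequests metric listed in the table above.
az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric "TotalRequests" \
  --aggregation Total \
  --interval PT1M
```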
azure-monitor | Resource Logs Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md | Title: Supported categories for Azure Monitor resource logs description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 06/04/2023 Last updated : 06/27/2023 If you think something is missing, you can open a GitHub comment at the bottom o |||| |Network Security Group Rule Flow Event |Network Security Group Rule Flow Event |No | +## Microsoft.Cloudtest/hostedpools +<!-- Data source : naam--> ++|Category|Category Display Name|Costs To Export| +|||| +|ProvisioningScriptLogs |Provisioning Script Logs |Yes | + ## Microsoft.CodeSigning/codesigningaccounts <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o |Category|Category Display Name|Costs To Export| |||| |AgentHealthStatus |AgentHealthStatus |No |-|AutoscaleEvaluationPooled |Do not use - internal testing |Yes | +|AutoscaleEvaluationPooled |Autoscale logs for pooled host pools - private preview |Yes | |Checkpoint |Checkpoint |No | |Connection |Connection |No | |ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes | If you think something is missing, you can open a GitHub comment at the bottom o |Category|Category Display Name|Costs To Export| |||| |DataplaneAuditEvent |Dataplane audit logs |Yes |-|ResourceLifecycle |Resource lifecycle |Yes | +|ResourceOperation |Resource Operations |Yes | ## Microsoft.Devices/IotHubs <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o |AuditLogs |Audit logs |No | |DiagnosticLogs |Diagnostic logs |Yes | -## Microsoft.HealthcareApis/workspaces/analyticsconnectors -<!-- Data source : arm--> --|Category|Category Display Name|Costs To Export| -|||| -|DiagnosticLogs |Diagnostic logs for Analytics Connector |Yes | - ## Microsoft.HealthcareApis/workspaces/dicomservices <!-- Data source : arm--> If you think something is missing, you can open a GitHub comment at the bottom o |RuntimeAuditLogs |Runtime Audit Logs |Yes | |VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |No | +## Microsoft.ServiceNetworking/trafficControllers +<!-- Data source : naam--> ++|Category|Category Display Name|Costs To Export| +|||| +|TrafficControllerAccessLog |Traffic Controller Access Log |Yes | + ## Microsoft.SignalRService/SignalR <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o |StorageRead |StorageRead |Yes | |StorageWrite |StorageWrite |Yes | +## Microsoft.StorageCache/amlFilesystems +<!-- Data source : naam--> ++|Category|Category Display Name|Costs To Export| +|||| +|AmlfsAuditEvent |Azure Managed Lustre audit event |Yes | + ## Microsoft.StorageCache/caches <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o |Category|Category Display Name|Costs To Export| ||||-|Command |Synapse Data Explorer Command |Yes | -|FailedIngestion |Synapse Data Explorer Failed Ingestion |Yes | -|IngestionBatching |Synapse Data Explorer Ingestion Batching |Yes | -|Query |Synapse Data Explorer Query |Yes | -|SucceededIngestion |Synapse Data Explorer Succeeded Ingestion |Yes | -|TableDetails |Synapse Data Explorer Table Details |Yes | -|TableUsageStatistics |Synapse Data Explorer Table Usage Statistics |Yes | +|Command |Command |Yes | +|FailedIngestion |Failed ingestion |Yes | 
+|IngestionBatching |Ingestion batching |Yes | +|Journal |Journal |Yes | +|Query |Query |Yes | +|SucceededIngestion |Succeeded ingestion |Yes | +|TableDetails |Table details |Yes | +|TableUsageStatistics |Table usage statistics |Yes | ## Microsoft.Synapse/workspaces/scopePools <!-- Data source : naam--> If you think something is missing, you can open a GitHub comment at the bottom o * [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace) -<!--Gen Date: Sun Jun 04 2023 10:14:09 GMT+0300 (Israel Daylight Time)--> +<!--Gen Date: Wed Jun 28 2023 02:42:02 GMT+0800 (China Standard Time)--> |
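To start collecting one of the newly listed categories, create a diagnostic setting on the resource. Below is a minimal Azure CLI sketch, assuming placeholder resource and Log Analytics workspace IDs and the `TrafficControllerAccessLog` category from the table above.

```bash
# Placeholders: substitute your own Traffic Controller resource ID and Log Analytics workspace ID.
az monitor diagnostic-settings create \
  --name "send-access-logs" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ServiceNetworking/trafficControllers/<name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "TrafficControllerAccessLog", "enabled": true}]'
```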
azure-netapp-files | Azacsnap Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md | Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page. +## Jun-2023 ++### AzAcSnap 8b (Build: 1AD3679) ++AzAcSnap 8b is being released with the following fixes and improvements: ++- Fixes and Improvements: + - General improvement to `azacsnap` command exit codes. + - `azacsnap` should return an exit code of 0 (zero) when it has run as expected, otherwise it should return an exit code of non-zero. For example, running `azacsnap` will return non-zero as it has not done anything and will show usage information whereas `azacsnap -h` will return exit-code of zero as it's expected to return usage information. + - Any failure in `--runbefore` exits before any backup activity and returns the `--runbefore` exit code. + - Any failure in `--runafter` returns the `--runafter` exit code. + - Backup (`-c backup`) changes: + - Change in the Db2 workflow to move the protected-paths query outside the WRITE SUSPEND, Storage Snapshot, WRITE RESUME workflow to improve resilience. (Preview) + - Fix for missing snapshot name (`azSnapshotName`) in `--runafter` command environment. ++Download the [AzAcSnap 8b](https://aka.ms/azacsnap-8b) installer. + ## May-2023 ### AzAcSnap 8a (Build: 1AC55A6) AzAcSnap 8 is being released with the following fixes and improvements: - Fixes and Improvements: - Restore (`-c restore`) changes:- - New ability to use `-c restore` to revertvolume for Azure NetApp Files. + - New ability to use `-c restore` to `--restore revertvolume` for Azure NetApp Files. - Backup (`-c backup`) changes: - Fix for incorrect error output when using `-c backup` and the database has 'backint' configured. - Remove lower-case conversion for anfBackup rename-only option using `-c backup` so the snapshot name maintains case of Volume name.- - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA cannot be put into backup-mode, AzAcSnap will immediately exit with an error. + - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA cannot be put into backup-mode, AzAcSnap immediately exits with an error. - Details (`-c details`) changes: - Fix for listing snapshot details with `-c details` when using Azure Large Instance storage. - Logging enhancements:- - Extra logging output to syslog (e.g., /var/log/messages) on failure. - - New "mainlog" (azacsnap.log) to provide a more parse-able high-level log of commands run with success or failure result. + - Extra logging output to syslog (for example, `/var/log/messages`) on failure. + - New "mainlog" (`azacsnap.log`) to provide a more parse-able high-level log of commands run with success or failure result. - New global settings file (`.azacsnaprc`) to control behavior of azacsnap, including location of "mainlog" file. Download the [AzAcSnap 8](https://aka.ms/azacsnap-8) installer. Download the [AzAcSnap 7](https://aka.ms/azacsnap-7) installer. > [!IMPORTANT] > AzAcSnap 6 brings a new release model for AzAcSnap and includes fully supported GA features and Preview features in a single release. - + Since AzAcSnap v5.0 was released as GA in April 2021, there have been eight releases of AzAcSnap across two branches. 
Our goal with the new release model is to align with how Azure components are released. This change allows moving features from Preview to GA (without having to move an entire branch), and introducing new Preview features (without having to create a new branch). From AzAcSnap 6, we have a single branch with fully supported GA features and Preview features (which are subject to Microsoft's Preview Ts&Cs). It's important to note customers can't accidentally use Preview features, and must enable them with the `--preview` command line option. Therefore the next release will be AzAcSnap 7, which could include: patches (if necessary) for GA features, current Preview features moving to GA, or new Preview features. AzAcSnap 6 is being released with the following fixes and improvements: AzAcSnap v5.0 Preview (Build: 20210318.30771) has been released with the followi - [Get started with Azure Application Consistent Snapshot tool](azacsnap-get-started.md) - [Download the latest release of the installer](https://aka.ms/azacsnapinstaller)++ |
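The exit-code behavior described above makes `azacsnap` straightforward to wrap in scheduled jobs. Below is a minimal sketch of such a wrapper; it assumes a shell environment and deliberately omits the site-specific backup options (config file, prefix, retention), which are passed through as-is.

```bash
#!/bin/bash
# Run a snapshot backup; site-specific azacsnap options are forwarded unchanged.
azacsnap -c backup "$@"
rc=$?

if [ "$rc" -ne 0 ]; then
  # A non-zero code may also be an exit code propagated from a failing --runbefore or --runafter hook.
  echo "azacsnap -c backup failed with exit code $rc" >&2
  exit "$rc"
fi
echo "azacsnap -c backup completed successfully"
```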
azure-resource-manager | Common Deployment Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/common-deployment-errors.md | If your error code isn't listed, submit a GitHub issue. On the right side of the | NoRegisteredProviderFound | Check resource provider registration status. | [Resolve registration](error-register-resource-provider.md) | | NotFound | You might be attempting to deploy a dependent resource in parallel with a parent resource. Check if you need to add a dependency. | [Resolve dependencies](error-not-found.md) | | OperationNotAllowed | The deployment is attempting an operation that exceeds the quota for the subscription, resource group, or region. If possible, revise your deployment to stay within the quotas. Otherwise, consider requesting a change to your quotas. | [Resolve quotas](error-resource-quota.md) |+| OperationNotAllowedOnVMImageAsVMsBeingProvisioned | You might be attempting to delete an image that is currently being used to provision VMs. You cannot delete an image that is being used by any virtual machine during the deployment process. Retry the image delete operation after the deployment of the VM is complete. | | | ParentResourceNotFound | Make sure a parent resource exists before creating the child resources. | [Resolve parent resource](error-parent-resource.md) | | PasswordTooLong | You might have selected a password with too many characters, or converted your password value to a secure string before passing it as a parameter. If the template includes a **secure string** parameter, you don't need to convert the value to a secure string. Provide the password value as text. | | | PrivateIPAddressInReservedRange | The specified IP address includes an address range required by Azure. Change IP address to avoid reserved range. | [Private IP addresses](../../virtual-network/ip-services/private-ip-addresses.md) |
cognitive-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription-create.md | Here are some property options that you can use to configure a transcription whe |`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.| |`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| +|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. 
The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and greater than or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1.| |`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices, you also need to use the `diarization` property (only with Speech to text REST API version 3.1).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.| |`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.| |`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.| |
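For the two-speaker case described above, setting `diarizationEnabled` to `true` is enough. The following is a minimal sketch of a Transcriptions_Create request; the region, key, and content URL are placeholders, and the full property schema (including the `diarization` object for three or more speakers) is in the linked Transcriptions_Create reference.

```bash
# Placeholders: <region>, $SPEECH_KEY, and the contentUrls value are illustrative.
curl -X POST "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions" \
  -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "My diarized transcription",
        "locale": "en-US",
        "contentUrls": ["https://<storage-account>.blob.core.windows.net/audio/sample.wav"],
        "properties": { "diarizationEnabled": true }
      }'
```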
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | You can also check the pronunciation assessment result in JSON. The word-level, ### [Display](#tab/display) -The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes. +The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. The error types in the pronunciation assessment are represented using different colors. Yellow indicates mispronunciations, gray indicates omissions, and red indicates insertions. This visual distinction makes it easier to identify and analyze specific errors. It provides a clear overview of the error types and frequencies in the spoken audio, helping you focus on areas that need improvement. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes. ### [JSON](#tab/json) |
cognitive-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/use-your-data.md | One of the key features of Azure OpenAI on your data is its ability to retrieve Azure OpenAI on your data uses an [Azure Cognitive Search](/azure/search/search-what-is-azure-search) index to determine what data to retrieve based on user inputs and provided conversation history. We recommend using Azure OpenAI Studio to create your index from a blob storage or local files. See the [quickstart article](../use-your-data-quickstart.md?pivots=programming-language-studio) for more information. -You can optionally use an existing Azure Cognitive Search index as a data source. If you use an existing service, you'll get better quality if your data is broken down into smaller chunks so that the model can use only the most relevant portions when composing a response. You can also use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) to create an index you can use with Azure OpenAI, with your documents broken down into manageable chunks. +## Ingesting your data into Azure Cognitive Search ++For documents and datasets with long text, you should use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts) to ingest the data into Azure Cognitive Search. The script chunks the data so that the responses you get from the service are more accurate. This script also supports scanned PDF files and images and ingests the data using [Form Recognizer](../../../applied-ai-services/form-recognizer/overview.md). + ## Data formats and file types |
communication-services | Chat Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/chat-metrics.md | The following operations are available on Chat API request metrics: | AddChatThreadParticipants | Adds thread members to a thread. If members already exist, no change occurs. | | RemoveChatThreadParticipant | Remove a member from a thread. | - If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response. ## Next steps |
communication-services | Rooms Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/rooms-metrics.md | + + Title: Rooms metrics definitions for Azure Communication Service ++description: This document covers definitions of rooms metrics available in the Azure portal. ++++ Last updated : 06/26/2023+++++# Rooms metrics overview ++Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../../azure-monitor\essentials\metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that rooms requests emit. ++## Where to find metrics ++Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics tab under your Communication Services resource. You can also create permanent dashboards using the workbooks tab under your Communication Services resource. ++## Metric definitions ++All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`. ++More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-charts.md#aggregation). ++- **Operation** - All operations or routes that can be called on the Azure Communication Services Chat gateway. +- **Status Code** - The status code response sent after the request. +- **StatusSubClass** - The status code series sent after the response. ++### Rooms API requests ++The following operations are available on Rooms API request metrics: ++| Operation / Route | Description | +| -- | - | +| CreateRoom | Creates a Room. | +| DeleteRoom | Deletes a Room. | +| GetRoom | Gets a Room by Room ID. | +| PatchRoom | Updates a Room by Room ID. | +| ListRooms | Lists all the Rooms for an ACS Resource. | +| AddParticipants | Adds participants to a Room.| +| RemoveParticipants | Removes participants from a Room. | +| GetParticipants | Gets list of participants for a Room. | +| UpdateParticipants | Updates list of participants for a Room. | +++## Next steps ++- Learn more about [Data Platform Metrics](../../../azure-monitor/essentials/data-platform-metrics.md) |
communication-services | Sms Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/sms-metrics.md | + + Title: SMS metrics definitions for Azure Communication Service ++description: This document covers definitions of SMS metrics available in the Azure portal. ++++ Last updated : 06/26/2023+++++# SMS metrics overview ++Azure Communication Services currently provides metrics for all ACS primitives. [Azure Metrics Explorer](../../../azure-monitor\essentials\metrics-getting-started.md) can be used to plot your own charts, investigate abnormalities in your metric values, and understand your API traffic by using the metrics data that Chat and SMS requests emit. ++## Where to find metrics ++Primitives in Azure Communication Services emit metrics for API requests. These metrics can be found in the Metrics tab under your Communication Services resource. You can also create permanent dashboards using the workbooks tab under your Communication Services resource. ++## Metric definitions ++All API request metrics contain three dimensions that you can use to filter your metrics data. These dimensions can be aggregated together using the `Count` aggregation type and support all standard Azure Aggregation time series including `Sum`, `Average`, `Min`, and `Max`. ++More information on supported aggregation types and time series aggregations can be found [Advanced features of Azure Metrics Explorer](../../../azure-monitor/essentials/metrics-charts.md#aggregation). ++- **Operation** - All operations or routes that can be called on the Azure Communication Services Chat gateway. +- **Status Code** - The status code response sent after the request. +- **StatusSubClass** - The status code series sent after the response. +- +### SMS API requests ++The following operations are available on SMS API request metrics: ++| Operation / Route | Description | +| -- | - | +| SMSMessageSent | Sends an SMS message. | +| SMSDeliveryReportsReceived | Gets SMS Delivery Reports | +| SMSMessagesReceived | Gets SMS messages. | ++ |
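Because these request metrics carry the Operation, StatusCode, and StatusSubClass dimensions, you can split or filter them when querying. Below is a minimal Azure CLI sketch; the Communication Services resource ID and the metric name are placeholders, to be replaced with the values shown for your resource in the portal.

```bash
# Placeholders: substitute your Communication Services resource ID and the request metric name from the portal.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Communication/communicationServices/<name>" \
  --metric "<metric-name>" \
  --aggregation Count \
  --filter "Operation eq '*'"   # returns a separate time series per Operation value
```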
communication-services | Email Domain And Sender Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/email/email-domain-and-sender-authentication.md | An email domain is a unique name that appears after the @ sign-in email addresse Email Communication Services allows you to configure email with two types of domains: **Azure Managed Domains** and **Custom Domains**. ### Azure Managed Domains-Getting Azure manged Domains is one click setup. You can add a free Azure Subdomain to your email communication resource and you'll able to send emails using mail from domains like donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net. Your Azure Managed domain will be pre-configured with required sender authentication support. +Getting Azure managed Domains is one click setup. You can add a free Azure Subdomain to your email communication resource and you'll able to send emails using mail from domains like donotreply@xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.azurecomm.net. Your Azure Managed domain will be pre-configured with required sender authentication support. ### Custom Domains In this option you're adding a domain that you already own. You have to add your domain and verify the ownership to send email and then configure for required authentication support. In this option you're adding a domain that you already own. You have to add you Email authentication (also known as email validation) is a group of standards that tries to stop spoofing (email messages from forged senders). Our email pipeline uses these standards to verify the emails that are sent. Trust in email begins with Authentication and Azure communication Services Email helps senders to properly configure the following email authentication protocols to set proper authentication for the emails. **SPF (Sender Policy Framework)**-SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf. Azure Commuication Services allows you to configure the required SPF record that needs to be added to your DNS to verify your custom domains. +SPF [RFC 7208](https://tools.ietf.org/html/rfc7208) is a mechanism that allows domain owners to publish and maintain, via a standard DNS TXT record, a list of systems authorized to send email on their behalf. Azure Communication Services allows you to configure the required SPF record that needs to be added to your DNS to verify your custom domains. **DKIM (Domain Keys Identified Mail)**-DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient. Azure Commuication Services allows you to configure the required DKIM records that need to be added to your DNS to verify your custom domains. +DKIM [RFC 6376](https://tools.ietf.org/html/rfc6376) allows an organization to claim responsibility for transmitting a message in a way that can be validated by the recipient. Azure Communication Services allows you to configure the required DKIM records that need to be added to your DNS to verify your custom domains. Please follow the steps [to setup sender authentication for your domain.](../../quickstarts/email/add-custom-verified-domains.md) |
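Once the portal shows the SPF and DKIM values for your custom domain, adding them is a matter of creating the corresponding DNS records at your DNS host. Below is a minimal sketch for a domain hosted in an Azure DNS zone; the zone name and the TXT value are placeholders that must match what Email Communication Services displays for your domain.

```bash
# Placeholders: the zone name and TXT value must come from the sender authentication page for your domain.
az network dns record-set txt add-record \
  --resource-group "<resource-group>" \
  --zone-name "contoso.com" \
  --record-set-name "@" \
  --value "<SPF value copied from the portal>"
```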
communication-services | Certified Session Border Controllers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/certified-session-border-controllers.md | Microsoft works with each vendor to: - Run daily tests with all certified devices in production and preproduction environments. Validating the devices in preproduction environments guarantees that new versions of Azure Communication Services code in the cloud work with certified SBCs. - Establish a joint support process with the SBC vendors. Media bypass is not yet supported by Azure Communication Services. The table that follows lists devices certified for Azure Communication Services direct routing. |
communication-services | Direct Routing Infrastructure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md | |
communication-services | Direct Routing Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md | |
communication-services | Inbound Calling Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/inbound-calling-capabilities.md | -Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel. You can use phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers supplied by [direct routing](./telephony-concept.md#azure-direct-routing). +Inbound PSTN calling is currently supported in GA for Dynamics Omnichannel and Call Automation SDK. You can use phone numbers [provided by Microsoft](./telephony-concept.md#voice-calling-pstn) and phone numbers supplied by [direct routing](./telephony-concept.md#azure-direct-routing). **Inbound calling with Omnichannel for Customer Service** Supported in General Availability, to set up inbound calling in Omnichannel for **Inbound calling with Azure Communication Services Call Automation SDK** -Call Automation enables you to build custom calling workflows within your applications to optimize business processes and boost customer satisfaction. The library supports managing incoming calls to the phone numbers acquired from Communication Services and incoming calls via Direct Routing. You can also use Call Automation to place outbound calls from the phone numbers owned by your resource, among other capabilities. +Call Automation enables you to build custom calling workflows within your applications to optimize business processes and boost customer satisfaction. The library supports managing incoming calls to the phone numbers acquired from Communication Services and incoming calls via direct routing. You can also use Call Automation to place outbound calls from the phone numbers owned by your resource, among other capabilities. Learn more about [Call Automation](../voice-video-calling/call-automation.md), supported in General Availability. **Inbound calling with Power Virtual Agents** -*Coming soon* +*Coming soon* |
communication-services | Known Limitations Acs Telephony | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md | description: Known limitations of direct routing in Azure Communication Services - Previously updated : 05/11/2023 Last updated : 06/22/2023 This article provides information about limitations and known issues related to ## Azure Communication Services direct routing known limitations - Anonymous calling isn't supported.- - will be fixed in GA release - Maximum number of configured Session Border Controllers (SBC) is 250 per communication resource. - When you change direct routing configuration (add SBC, change Voice Route, etc.), wait approximately five minutes for changes to take effect. - If you move SBC FQDN to another Communication resource, wait approximately an hour, or restart SBC to force configuration change. This article provides information about limitations and known issues related to - Location-based routing isn't supported. - No quality dashboard is available for customers. - Enhanced 911 isn't supported.-- PSTN numbers missing from Call Summary logs. ## Next steps This article provides information about limitations and known issues related to - [Phone number types in Azure Communication Services](./plan-solution.md) - [Plan for Azure direct routing](./direct-routing-infrastructure.md) - [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)+- [Managing calls with Call Automation](../call-automation/call-automation.md). - [Pricing](../pricing.md) ### Quickstarts |
communication-services | Telephony Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/telephony-concept.md | For cloud calling, outbound calls are billed at per-minute rates depending on th ### Azure direct routing - With this option, you can connect legacy on-premises telephony and your carrier of choice to Azure Communication Services. It provides PSTN calling capabilities to your Communication Services application even if Voice Calling (PSTN) is not available in your country/region. |
connectors | Connectors Create Api Azureblobstorage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md | The Azure Blob Storage connector has different versions, based on [logic app typ - For logic app workflows running in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE-labeled version uses the [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) instead. -- By default, Azure Blob Storage managed connector actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Blob actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The [**Get blob content** action](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking.+- Azure Blob Storage *managed* connector actions can read or write files that are *50 MB or smaller*. To handle files larger than 50 MB but up to 1024 MB, Azure Blob Storage actions support [message chunking](../logic-apps/logic-apps-handle-large-messages.md). The Blob Storage action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content) implicitly uses chunking. -- Azure Blob Storage triggers don't support chunking. When a trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern:+- While Azure Blob Storage *managed* and *built-in* triggers don't support chunking, the *built-in* triggers can handle files that are 50 MB or more. However, when a *managed* trigger requests file content, the trigger selects only files that are 50 MB or smaller. To get files larger than 50 MB, follow this pattern: - 1. Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)). + 1. Use a Blob trigger that returns file properties, such as [**When a blob is added or modified (properties only)**](/connectors/azureblobconnector/#when-a-blob-is-added-or-modified-(properties-only)). - 1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking. + 1. Follow the trigger with the Azure Blob Storage managed connector action named [**Get blob content**](/connectors/azureblobconnector/#get-blob-content), which reads the complete file and implicitly uses chunking. ## Prerequisites |
container-apps | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md | -Billing in Azure Container apps is based on your [plan type](plans.md). +Billing in Azure Container Apps is based on your [plan type](plans.md). | Plan type | Description | |--|--|-| [Consumption](#consumption-plan) | Serverless environment where you're only billed for the resources your apps use when they're running. | -| [Consumption + Dedicated workload profiles plan structure](#consumption-dedicated) | A fully managed environment that supports both Consumption-based apps and Dedicated workload profiles that offer customized compute options for your apps. You're billed for each node in each [workload profile](workload-profiles-overview.md). +| [Consumption plan](#consumption-plan) | Serverless compute option where you're only billed for the resources your apps use as they're running. | +| [Dedicated plan](#consumption-dedicated) | Customized compute options where you're billed for instances allocated to each [workload profile](workload-profiles-overview.md). | -Charges apply to resources allocated to each running replica. | +- Your plan selection determines billing calculations. +- Different applications in an environment can use different plans. ++For more information, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). ## Consumption plan -Azure Container Apps consumption plan billing consists of two types of charges: +Billing for apps running in the Consumption plan consists of two types of charges: - **[Resource consumption](#resource-consumption-charges)**: The amount of resources allocated to your container app on a per-second basis, billed in vCPU-seconds and GiB-seconds. - **[HTTP requests](#request-charges)**: The number of HTTP requests your container app receives. The following resources are free during each calendar month, per subscription: This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). + > [!NOTE] > If you use Container Apps with [your own virtual network](networking.md#managed-resources) or your apps utilize other Azure resources, additional charges may apply. When a revision is scaled above the [minimum replica count](scale-app.md), all o ### Request charges -In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. Only requests that come from outside a Container Apps environment are billable. +In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. Only requests that come from outside a Container Apps environment are billable. - The first 2 million requests in each subscription per calendar month are free.-- [Health probe](./health-probes.md) requests are not billable.+- [Health probe](./health-probes.md) requests aren't billable. <a id="consumption-dedicated"></a> -## Consumption + Dedicated workload profiles plan structure (preview) --Azure Container Apps Consumption + Dedicated plan structure consists of two plans withing a single environment, each with their own billing model. +## Dedicated plan -The billing for apps running in the Consumption plan within the Consumption + Dedicated plan structure is the same as the Consumption plan. 
+You're billed based on workload profile instances, not by individual applications. -The billing for apps running in the Dedicated plan within the Consumption + Dedicated plan structure is as follows: +Billing for apps running in the Dedicated plan is based on workload profile instances, not by individual applications. The charges are as follows: -- **Dedicated workload profiles**: You're billed on a per-second basis for vCPU-seconds and GiB-seconds resources in all the workload profile instances in use. As profiles scale out, extra costs apply for the extra instances; as profiles scale in, billing is reduced.+| Fixed management costs | Variable costs | +||| +| If you have one or more dedicated workload profiles in your environment, you're charged a Dedicated plan management fee. You aren't billed any plan management charges unless you use a Dedicated workload profile in your environment. | You're billed on a per-second basis for vCPU-seconds and GiB-seconds resources in all the workload profile instances in use. As profiles scale out, extra costs apply for the extra instances; as profiles scale in, billing is reduced. | -- **Dedicated plan management**: You're billed a fixed cost for the Dedicated management plan when using Dedicated workload profiles. This cost is the same regardless of how many Dedicated workload profiles in use.+Make sure to optimize the applications you deploy to a dedicated workload profile. Evaluate the needs of your applications so that they can use the most amount of resources available to the profile. -For instance, you are not billed any charges for Dedicated unless you use a Dedicated workload profile in your environment. - ## General terms -For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). +- For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). -For best results, maximize the use of your allocated resources by calculating the needs of your container apps. Often you can run multiple apps on a single instance of a workload profile. |
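To make the Consumption plan charges above concrete, the following sketch estimates the resource consumption charge for a single replica. The rates, allocation sizes, and the assumption that the replica is active for the whole month are illustrative only; free monthly grants and HTTP request charges are deliberately ignored, so check the Azure Container Apps pricing page for real numbers.

```bash
# Illustrative estimate only -- the rates below are placeholders, not published prices.
VCPU_RATE=0.000024        # assumed USD per vCPU-second
MEM_RATE=0.000003         # assumed USD per GiB-second
VCPU=0.5                  # vCPU allocated to one replica
MEM_GIB=1.0               # memory (GiB) allocated to one replica
ACTIVE_SECONDS=$((30 * 24 * 3600))   # replica active for a 30-day month

# Resource consumption charge = seconds * (vCPU * vCPU rate + GiB * GiB rate).
# Free monthly grants and request charges are left out for simplicity.
COST=$(echo "$ACTIVE_SECONDS * ($VCPU * $VCPU_RATE + $MEM_GIB * $MEM_RATE)" | bc -l)
printf 'Estimated resource consumption charge for one replica: $%.2f\n' "$COST"
```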
container-apps | Blue Green Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/blue-green-deployment.md | The following example shows how the traffic section is configured. The revision { "traffic": [ {- "revisionName": "<APP_NAME>--0b699ef", + "revisionName": "<APP_NAME>--fb699ef", "weight": 100, "label": "blue" }, The following example shows how the `traffic` section is configured after this s "label": "blue" }, {- "revisionName": "<APP_NAME>--0b699ef", + "revisionName": "<APP_NAME>--fb699ef", "weight": 100, "label": "green" } |
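If you drive the traffic split from the command line rather than editing the JSON directly, a hedged sketch of the cut-over from the `blue` label to the `green` label might look like the following. The app and resource group names are placeholders, and it assumes the Azure CLI `containerapp` extension with label-weight support is installed.

```bash
# Hypothetical names; requires the Azure CLI containerapp extension.
APP_NAME="my-app"
RESOURCE_GROUP="my-rg"

# Send all production traffic to the revision labeled "green" and drain "blue".
az containerapp ingress traffic set \
  --name "$APP_NAME" \
  --resource-group "$RESOURCE_GROUP" \
  --label-weight blue=0 green=100
```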
data-factory | Copy Activity Performance Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-troubleshooting.md | Activity execution time varies when the dataset is based on different Integratio - To copy a large Excel file (>100 MB) into another store, you can use the Data Flow Excel source, which supports streaming read and performs better. ++### The OOM Issue of reading large JSON/Excel/XML files ++- **Symptoms**: When you read large JSON/Excel/XML files, you encounter an out of memory (OOM) issue during the activity execution. ++- **Cause**: ++ - **For large XML files**: + The OOM issue of reading large XML files is by design. The cause is that the whole XML file must be read into memory as it is a single object, then the schema is inferred, and the data is retrieved. + - **For large Excel files**: + The OOM issue of reading large Excel files is by design. The cause is that the SDK (POI/NPOI) used must read the whole Excel file into memory, then infer the schema and get data. + - **For large JSON files**: + The OOM issue of reading large JSON files is by design when the JSON file is a single object. ++- **Recommendation**: Apply one of the following options to solve your issue. ++ - **Option-1**: Register an online self-hosted integration runtime with a powerful machine (high CPU/memory) to read data from your large file through your copy activity. + - **Option-2**: Use a memory-optimized, large cluster (for example, 48 cores) to read data from your large file through the mapping data flow activity. + - **Option-3**: Split the large file into smaller ones, then use the copy or mapping data flow activity to read the folder. + - **Option-4**: If you are stuck or encounter the OOM issue while copying the XML/Excel/JSON folder, use the foreach activity + copy/mapping data flow activity in your pipeline to handle each file or subfolder. + - **Option-5**: Others: + - For XML, use the Notebook activity with a memory-optimized cluster to read data from files if each file has the same schema. Currently, Spark has different implementations to handle XML. + - For JSON, use different document forms (for example, **Single document**, **Document per line** and **Array of documents**) in [JSON settings](format-json.md#source-format-options) under mapping data flow source. If the JSON file content is **Document per line**, it consumes very little memory. ++ ## Other references Here are performance monitoring and tuning references for some of the supported data stores: |
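As a rough illustration of Option-3 above, the following sketch splits a large line-delimited JSON file into smaller parts before copying the folder. It assumes GNU `split` and a *Document per line* layout; the file names and chunk size are placeholders.

```bash
# Assumes GNU coreutils `split` and one JSON document per line in the source file.
mkdir -p ./parts
split --lines=100000 --additional-suffix=.json large-input.json ./parts/part-

# Point the copy activity or mapping data flow at ./parts instead of the single large file.
ls ./parts | head
```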
data-manager-for-agri | Concepts Byol And Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-byol-and-credentials.md | Last updated 06/23/2023 -# Store and use your license keys. +# Store and use your own license keys Azure Data Manager for Agriculture supports a range of data ingress connectors to centralize your fragmented accounts. These connections require the customer to populate their credentials in a Bring Your Own License (BYOL) model, so that the data manager may retrieve data on behalf of the customer. Azure Data Manager for Agriculture supports a range of data ingress connectors t ## Prerequisites -To access Azure Key Vault, you need an Azure subscription. If you don't already have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. +To use BYOL, you need an Azure subscription. If you don't already have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ## Overview -In BYOL model, you're responsible for providing your own licenses for satellite imagery and weather connector. In the vault reference model, you store your credentials as secret in a customer managed Azure Key Vault. The URI of the secret must be shared and read permissions granted to Azure Data Manager for Agriculture so that the APIs can work seamlessly. This process is a one-time setup for each connector. Our Data Manager then refers to and reads the secret from the customersΓÇÖ key vault as part of the API call with no exposure of the secret. +In BYOL model, you're responsible for providing your own licenses for satellite and weather data connectors. In this model, you store the secret part of credentials in a customer managed Azure Key Vault. The URI of the secret must be shared with Azure Data Manager for Agriculture instance. Azure Data Manager for Agriculture instance should be given secrets read permissions so that the APIs can work seamlessly. This process is a one-time setup for each connector. Our Data Manager then refers to and reads the secret from the customersΓÇÖ key vault as part of the API call with no exposure of the secret. Flow diagram showing creation and sharing of credentials. :::image type="content" source="./media/concepts-byol-and-credentials/vault-usage-flow.png" alt-text="Screenshot showing credential sharing flow."::: -The steps to use Azure Key Vault in Data Manager for Agriculture are as follows: +Customer can optionally override credentials to be used for a data plane request by providing credentials as part of the data plane API request. -## Step 1: Create Key Vault -Customers can create a key vault or use an existing key vault to share license credentials for satellite (Sentinel Hub) and weather (IBM Weather). Customer [creates Azure Key Vault](/azure/key-vault/general/quick-create-portal) or reuses existing an existing key vault. The following properties are recommended: +## Sequence of steps for setting up connectors ++### Step 1: Create or use existing Key Vault +Customers can create a key vault or use an existing key vault to share license credentials for satellite (Sentinel Hub) and weather (IBM Weather). Customer [creates Azure Key Vault](/azure/key-vault/general/quick-create-portal) or reuses existing an existing key vault. 
++Enable following properties: :::image type="content" source="./media/concepts-byol-and-credentials/create-key-vault.png" alt-text="Screenshot showing key vault properties."::: Data Manager for Agriculture is a Microsoft trusted service and supports private :::image type="content" source="./media/concepts-byol-and-credentials/enable-access-to-keys.png" alt-text="Screenshot showing key vault access."::: -## Step 2: Store secret in Azure Key Vault -For sharing your satellite or weather service credentials, store client secrets in a key vault, for example `ClientSecret` for `SatelliteSentinelHub` and `APIKey` for `WeatherIBM`. Customers are in control of secret name and rotation. +### Step 2: Store secret in Azure Key Vault +For sharing your satellite or weather service credentials, store secret part of credentials in the key vault, for example `ClientSecret` for `SatelliteSentinelHub` and `APIKey` for `WeatherIBM`. Customers are in control of secret name and rotation. Refer to [this guidance](/azure/key-vault/secrets/quick-create-portal#add-a-secret-to-key-vault) to store and retrieve your secret from the vault. :::image type="content" source="./media/concepts-byol-and-credentials/store-your-credential-keys.png" alt-text="Screenshot showing storage of key values."::: -## Step 3: Enable system identity -As a customer you have to enable system identity for your Data Manager for Agriculture instance. There are two options: +### Step 3: Enable system identity +As a customer you have to enable system identity for your Data Manager for Agriculture instance. This identity is used while given secret read permissions for Azure Data Manager for Agriculture instance. ++Follow one of the following methods to enable: -1. Via UI +1. Via Azure portal UI :::image type="content" source="./media/concepts-byol-and-credentials/enable-system-via-ui.png" alt-text="Screenshot showing usage of UI to enable key."::: 2. Via Azure Resource Manager client ```cmd- armclient patch /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AgFoodPlatform/farmBeats/{ADMA_instance_name}?api-version=2023-04-01-preview "{identity: { type: 'systemAssigned' }} + armclient patch /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AgFoodPlatform/farmBeats/{ADMA_instance_name}?api-version=2023-06-01-preview "{identity: { type: 'systemAssigned' }} ``` -## Step 4: Access policy -Add an access policy in key vault for your Data Manager for Agriculture instance. +### Step 4: Access policy +Add an access policy in the key vault for your Data Manager for Agriculture instance. -1. Go to access policies tab in the created key vault. +1. Go to access policies tab in the key vault. :::image type="content" source="./media/concepts-byol-and-credentials/select-access-policies.png" alt-text="Screenshot showing selection of access policy."::: Add an access policy in key vault for your Data Manager for Agriculture instance :::image type="content" source="./media/concepts-byol-and-credentials/access-policy-creation.png" alt-text="Screenshot showing selection create and review tab."::: -## Step 5: Invoke control plane API call -Use the [API call](/rest/api/data-manager-for-agri/controlplane-version2021-09-01-preview/farm-beats-models/create-or-update?tabs=HTTP) to specify credentials. Key vault URI/ key name/ key version can be found after creating secret as shown in the following figure. 
+### Step 5: Invoke control plane API call +Use the [API call](/rest/api/data-manager-for-agri/controlplane-version2023-06-01-preview/data-connectors) to specify connector credentials. Key vault URI/ key name/ key version can be found after creating secret as shown in the following figure. :::image type="content" source="./media/concepts-byol-and-credentials/details-key-vault.png" alt-text="Screenshot showing where key name and key version is available."::: -Flow showing how Azure Data Manager for Agriculture accesses secret. +#### Following values should be used for the connectors while invoking above APIs: ++| Scenario | DataConnectorName | Credentials | +|--|--|--| +| For Satellite SentinelHub connector | SatelliteSentinelHub | OAuthClientCredentials | +| For Weather IBM connector | WeatherIBM | ApiKeyAuthCredentials | ++## Overriding connector details +As part of Data plane APIs, customer can choose to override the connector details that need to be used for that request. ++Customer can refer to API version `2023-06-01-preview` documentation where the Data plane APIs for satellite and weather take the credentials as part of the request body. ++## How Azure Data Manager for Agriculture accesses secret +Following flow shows how Azure Data Manager for Agriculture accesses secret. :::image type="content" source="./media/concepts-byol-and-credentials/key-access-flow.png" alt-text="Screenshot showing how the data manager accesses credentials."::: If you disable and then re-enable system identity, then you have to delete the access policy in key vault and add it again. You can use our data plane APIs and reference license keys in your key vault. Yo ## Next steps -* Test our APIs [here](/rest/api/data-manager-for-agri). +* Test our APIs [here](/rest/api/data-manager-for-agri). |
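The one-time setup described in Steps 1, 2, and 4 can also be scripted. The following sketch uses placeholder names; the object ID of the instance's system-assigned identity is assumed to be known from Step 3, and the secret name should match the connector you're configuring.

```bash
# Hypothetical names and IDs -- replace with your own values.
KV_NAME="my-adma-keyvault"
RG="my-rg"
ADMA_PRINCIPAL_ID="<system-assigned-identity-object-id>"   # from Step 3

# Step 1: create (or reuse) a key vault with purge protection enabled.
az keyvault create --name "$KV_NAME" --resource-group "$RG" --enable-purge-protection true

# Step 2: store the secret part of the connector credentials (SatelliteSentinelHub example).
az keyvault secret set --vault-name "$KV_NAME" --name "ClientSecret" --value "<sentinel-hub-client-secret>"

# Step 4: let the Data Manager for Agriculture identity read secrets from this vault.
az keyvault set-policy --name "$KV_NAME" --object-id "$ADMA_PRINCIPAL_ID" --secret-permissions get list
```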
defender-for-cloud | Alert Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md | Title: Alert validation in Microsoft Defender for Cloud description: Learn how to validate that your security alerts are correctly configured in Microsoft Defender for Cloud Previously updated : 06/20/2023 Last updated : 06/27/2023 If you're using the new preview alerts experience as described in [Manage and re Use sample alerts to: -- evaluate the value and capabilities of your Microsoft Defender plans-- validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications)+- evaluate the value and capabilities of your Microsoft Defender plans. +- validate any configurations you've made for your security alerts (such as SIEM integrations, workflow automation, and email notifications). To create sample alerts: To create sample alerts: ## Simulate alerts on your Azure VMs (Windows) <a name="validate-windows"></a> -After the Log Analytics agent is installed on your machine, follow these steps from the computer where you want to be the attacked resource of the alert: +After the Microsoft Defender for Endpoint agent is installed on your machine, as part of Defender for Servers integration, follow these steps from the machine where you want to be the attacked resource of the alert: -1. Copy an executable (for example **calc.exe**) to the computer's desktop, or other directory of your convenience, and rename it as **ASC_AlertTest_662jfi039N.exe**. -1. Open the command prompt and execute this file with an argument (just a fake argument name), such as: ```ASC_AlertTest_662jfi039N.exe -foo``` -1. Wait 5 to 10 minutes and open Defender for Cloud Alerts. An alert should appear. +1. Open an elevated command-line prompt on the device and run the script: -> [!NOTE] -> When reviewing this test alert for Windows, make sure the field **Arguments Auditing Enabled** is **true**. If it is **false**, then you need to enable command-line arguments auditing. To enable it, use the following command: -> ->```reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\policies\system\Audit" /f /v "ProcessCreationIncludeCmdLine_Enabled"``` + 1. Go to **Start** and type `cmd`. -## Simulate alerts on your Azure VMs (Linux) <a name="validate-linux"></a> + 1. Right-select **Command Prompt** and select **Run as administrator** + + :::image type="content" source="media/alert-validation/command-prompt.png" alt-text="Screenshot showing where to select Run as Administrator." lightbox="media/alert-validation/command-prompt.png"::: -After the Log Analytics agent is installed on your machine, follow these steps from the computer where you want to be the attacked resource of the alert: +1. At the prompt, copy and run the following command: `powershell.exe -NoExit -ExecutionPolicy Bypass -WindowStyle Hidden $ErrorActionPreference = 'silentlycontinue';(New-Object System.Net.WebClient).DownloadFile('http://127.0.0.1/1.exe', 'C:\\test-MDATP-test\\invoice.exe');Start-Process 'C:\\test-MDATP-test\\invoice.exe'` -1. Copy an executable to a convenient location and rename it to `./asc_alerttest_662jfi039n`. For example: +1. The Command Prompt window closes automatically. If successful, a new alert should appear in Defender for Cloud Alerts blade in 10 minutes. - `cp /bin/echo ./asc_alerttest_662jfi039n` +1. The message line in the PowerShell box should appear similar to how it's presented here: -1. 
Open the command prompt and execute this file: + :::image type="content" source="media/alert-validation/powershell-no-exit.png" alt-text="Screenshot showing PowerShell message line." lightbox="media/alert-validation/powershell-no-exit.png"::: - `./asc_alerttest_662jfi039n testing eicar pipe` +Alternately, you can also use the [EICAR](https://www.eicar.org/download/eicar.com.txt) test string to perform this test: Create a text file, paste the EICAR line, and save the file as an executable file to your machine's local drive. -1. Wait 5 to 10 minutes and then open Defender for Cloud Alerts. An alert should appear. +> [!NOTE] +> When reviewing test alerts for Windows, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus?view=o365-worldwide). ++## Simulate alerts on your Azure VMs (Linux) <a name="validate-linux"></a> ++After the Microsoft Defender for Endpoint agent is installed on your machine, as part of Defender for Servers integration, follow these steps from the machine where you want to be the attacked resource of the alert: ++1. Open a Terminal window, copy and run the following command: +[`curl -o ~/Downloads/eicar.com.txt`](https://www.eicar.org/download/eicar.com.txt). +1. The Command Prompt window closes automatically. If successful, a new alert should appear in Defender for Cloud Alerts blade in 10 minutes. ++> [!NOTE] +> When reviewing test alerts for Linux, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus?view=o365-worldwide). ## Simulate alerts on Kubernetes <a name="validate-kubernetes"></a> You can simulate alerts for resources running on [App Service](/azure/app-servic **To simulate an app services EICAR alert:** -1. Find the HTTP endpoint of the website either by going into Azure portal blade for the App Services website or using the custom DNS entry associated with this website. (The default URL endpoint for Azure App Services website has the suffix `https://XXXXXXX.azurewebsites.net`). The website should be an existing website and not one that was created just prior to the alert simulation. -1. Browse to the website URL and add to it the following fixed suffix: `/This_Will_Generate_ASC_Alert`. The URL should look like this: `https://XXXXXXX.azurewebsites.net/This_Will_Generate_ASC_Alert`. It might take some time for the alert to be generated (~1.5 hours). +1. Find the HTTP endpoint of the website either by going into Azure portal blade for the App Services website or using the custom DNS entry associated with this website. (The default URL endpoint for Azure App Services website has the suffix `https://XXXXXXX.azurewebsites.net`). The website should be an existing website and not one that was created prior to the alert simulation. +1. Browse to the website URL and add the following fixed suffix: `/This_Will_Generate_ASC_Alert`. The URL should look like this: `https://XXXXXXX.azurewebsites.net/This_Will_Generate_ASC_Alert`. It might take some time for the alert to be generated (~1.5 hours). ## Validate Azure Key Vault Threat Detection You can simulate alerts for resources running on [App Service](/azure/app-servic 1. 
If you don't have a Key Vault created yet, make sure to [create one](https://learn.microsoft.com/azure/key-vault/general/quick-create-portal). 1. After you finish creating the Key Vault and the secret, go to a VM that has Internet access and [download the TOR Browser](https://www.torproject.org/download/). 1. Install the TOR Browser on your VM.-1. Once you finished the installation, open your regular browser, logon to the Azure portal, and access the Key Vault page. Select the URL highlighted below and copy the address. +1. Once you've finished the installation, open your regular browser, sign in to the Azure portal, and access the Key Vault page. Select the highlighted URL and copy the address. 1. Open TOR and paste this URL (you need to authenticate again to access the Azure portal). 1. After you finish, you can also select the Secrets option in the left pane. 1. In the TOR Browser, sign out from the Azure portal and close the browser. |
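For the alternate EICAR-based test mentioned above, a minimal sketch is shown below: it writes the standard, harmless anti-malware test string to a local file, which real-time protection should flag. The file name and location are arbitrary.

```bash
# Writes the standard EICAR test string (harmless by design) to a local file.
# Defender for Endpoint real-time protection should detect and quarantine it.
echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > ./eicar-test.txt
```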
defender-for-cloud | Express Configuration Azure Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/express-configuration-azure-commands.md | This article lists the Azure Command Line Interface (CLI) commands that can be u - [Set SQL vulnerability assessment server setting](#set-sql-vulnerability-assessment-server-setting) - [Remove SQL vulnerability assessment server setting](#remove-sql-vulnerability-assessment-server-setting) -### Set SQL vulnerability assessment baseline on system database +> [!NOTE] +> For Azure CLI reference for the classic configuration, see [Manage findings in your Azure SQL databases](sql-azure-vulnerability-assessment-manage.md#azure-cli) ++## Set SQL vulnerability assessment baseline on system database **Example 1**: az rest --method Get --uri /subscriptions/00000000-1111-2222-3333-444444444444/r } ``` - ### Get SQL vulnerability assessment scans on user database **Example 1**: az rest --method Put --uri /subscriptions/00000000-1111-2222-3333-444444444444/r "type": "Microsoft.Sql/servers/sqlVulnerabilityAssessments" } ```+ **Example 2**: ```azurecli az rest --method Put --uri /subscriptions/00000000-1111-2222-3333-444444444444/r ```azurecli az rest --method Delete --uri /subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/vulnerabilityaseessmenttestRg/providers/Microsoft.Sql/servers/vulnerabilityaseessmenttest/sqlVulnerabilityAssessments/default?api-version=2022-02-01-preview ```+ ## Next steps [Find and remediate vulnerabilities in your Azure SQL databases](sql-azure-vulnerability-assessment-find.md) |
defender-for-cloud | Multi Factor Authentication Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md | Title: Microsoft Defender for Cloud's security recommendations for MFA description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 06/15/2023 Last updated : 06/28/2023 # Manage multi-factor authentication (MFA) enforcement on your subscriptions If you're using passwords, only to authenticate your users, you're leaving an at There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud. +## MFA and Microsoft Defender for Cloud -## MFA and Microsoft Defender for Cloud --Defender for Cloud places a high value on MFA. The security control that contributes the most to your secure score is **Enable MFA**. +Defender for Cloud places a high value on MFA. The security control that contributes the most to your secure score is **Enable MFA**. The recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions: From the recommendation details page, select a subscription from the **Unhealthy ### View the accounts without MFA enabled using Azure Resource Graph -To see which accounts don't have MFA enabled, use the following Azure Resource Graph query. The query returns all unhealthy resources - accounts - of the recommendation "MFA should be enabled on accounts with owner permissions on your subscription". +To see which accounts don't have MFA enabled, use the following Azure Resource Graph query. The query returns all unhealthy resources - accounts - of the recommendation "Accounts with owner permissions on Azure resources should be MFA enabled". 1. Open **Azure Resource Graph Explorer**. To see which accounts don't have MFA enabled, use the following Azure Resource G ```kusto securityresources | where type == "microsoft.security/assessments"- | where properties.displayName == "MFA should be enabled on accounts with owner permissions on subscriptions" + | where properties.displayName contains "Accounts with owner permissions on Azure resources should be MFA enabled" | where properties.status.code == "Unhealthy" ``` -1. The `additionalData` property reveals the list of account object IDs for accounts that don't have MFA enforced. +1. The `additionalData` property reveals the list of account object IDs for accounts that don't have MFA enforced. > [!NOTE] > The accounts are shown as object IDs rather than account names to protect the privacy of the account holders. To see which accounts don't have MFA enabled, use the following Azure Resource G > Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get). ## Next steps+ To learn more about recommendations that apply to other Azure resource types, see the following article: - [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md) |
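If you prefer the command line to Azure Resource Graph Explorer, a sketch of the same query is shown below. It assumes the Azure CLI `resource-graph` extension is installed; the query text mirrors the one above.

```bash
# Requires the Azure CLI resource-graph extension (az extension add --name resource-graph).
az graph query --output table -q "
securityresources
| where type == 'microsoft.security/assessments'
| where properties.displayName contains 'Accounts with owner permissions on Azure resources should be MFA enabled'
| where properties.status.code == 'Unhealthy'
"
```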
energy-data-services | How To Add More Data Partitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md | Title: How to manage partitions -description: This is a how-to article on managing data partitions using the Microsoft Azure Data Manager for Energy Preview instance UI. +description: This is a how-to article on managing data partitions using the Microsoft Azure Data Manager for Energy instance UI. -In this article, you'll learn how to add data partitions to an existing Azure Data Manager for Energy Preview instance. The concept of "data partitions" is picked from [OSDU™](https://osduforum.org/) where single deployment can contain multiple partitions. +In this article, you'll learn how to add data partitions to an existing Azure Data Manager for Energy instance. The concept of "data partitions" is picked from [OSDU™](https://osduforum.org/) where single deployment can contain multiple partitions. Each partition provides the highest level of data isolation within a single deployment. All access rights are governed at a partition level. Data is separated in a way that allows for the partition's life cycle and deployment to be handled independently. (See [Partition Service](https://community.opengroup.org/osdu/platform/home/-/issues/31) in OSDU™) |
energy-data-services | How To Create Lockbox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-create-lockbox.md | Title: Use Lockbox for Microsoft Azure Data Manager for Energy Preview + Title: Use Lockbox for Microsoft Azure Data Manager for Energy description: Learn how to use Customer Lockbox as an interface to review and approve or reject access requests. -#Customer intent: As a developer, I want to set up Lockbox for Azure Data Manager for Energy Preview. +#Customer intent: As a developer, I want to set up Lockbox for Azure Data Manager for Energy. -# Use Customer Lockbox for Azure Data Manager for Energy Preview +# Use Customer Lockbox for Azure Data Manager for Energy -Azure Data Manager for Energy Preview is the managed service offering for OSDU™. There are instances where Microsoft Support may need to access your data or compute layer during a support request. You can use Customer Lockbox as an interface to review and approve or reject these access requests. +Azure Data Manager for Energy is the managed service offering for OSDU™. There are instances where Microsoft Support may need to access your data or compute layer during a support request. You can use Customer Lockbox as an interface to review and approve or reject these access requests. -This article covers how Customer Lockbox requests are initiated and tracked for Azure Data Manager for Energy Preview. +This article covers how Customer Lockbox requests are initiated and tracked for Azure Data Manager for Energy. -## Lockbox workflow for Azure Data Manager for Energy Preview access +## Lockbox workflow for Azure Data Manager for Energy access -The Azure Data Manager for Energy Preview team at Microsoft typically does not access customer data. The team tries to resolve issues by using standard tools and telemetry. +The Azure Data Manager for Energy team at Microsoft typically does not access customer data. The team tries to resolve issues by using standard tools and telemetry. If the issues cannot be resolved and require Microsoft Support to investigate, the team needs to request elevated access to the limited resources via Just in Time (JIT) portal (internal to Microsoft). The JIT portal validates permission level, provides multi-factor authentication, and includes approval from the Internal Microsoft Approvers. After the request for elevated access is approved via the JIT (just-in-time syst ## Prerequisites for access request Before you begin, make sure:-1. You have created a [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). +1. You have created a [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). 2. You have enabled [Lockbox within the Azure portal](../security/fundamentals/customer-lockbox-overview.md). ## Track, approve request via Lockbox To track and approve a request to access customer data, follow these steps:-1. You raise an issue for Azure Data Manager for Energy Preview using the Azure portal. The support engineer connects to Azure Data Manager for Energy Preview via Support session and tries to troubleshoot the issue by using standard tools and telemetry. Let us say to mitigate the issue, the recommendation is to restart an AKS (Azure Kubernetes Service) cluster. +1. You raise an issue for Azure Data Manager for Energy using the Azure portal. 
The support engineer connects to Azure Data Manager for Energy via Support session and tries to troubleshoot the issue by using standard tools and telemetry. Let us say to mitigate the issue, the recommendation is to restart an AKS (Azure Kubernetes Service) cluster. 2. In this case, the support engineer creates a Lockbox request to access the AKS cluster for the given subscription. 3. When the request is created, usually the notification goes to the subscription owner, but you can also configure a group for notifications. 4. You can see the lockbox request in the Azure portal for your approval. To track and approve a request to access customer data, follow these steps: 6. Once the request is approved, the AKS clusters are accessible in the support session. 7. The support engineer restarts the AKS cluster to resolve the issue and then disables the support session or the session will expire in 4 to 8 hours. -If you have not enabled Lockbox, then your consent is not needed to access the compute or data layer of Azure Data Manager for Energy Preview. +If you have not enabled Lockbox, then your consent is not needed to access the compute or data layer of Azure Data Manager for Energy. ## Next steps <!-- Add a context sentence for the following links --> To learn more about data security and encryption > [!div class="nextstepaction"]-> [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md) +> [Data security and encryption in Azure Data Manager for Energy](how-to-manage-data-security-and-encryption.md) |
energy-data-services | How To Generate Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md | Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy Preview + Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy description: This article describes how to generate a refresh token In this article, you will learn how to generate a refresh token. The following a 2. Get authorization. 3. Get a refresh token. - ## Register your app with Azure AD-To use the Azure Data Manager for Energy Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. +To use the Azure Data Manager for Energy platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app: |
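Once the app registration above is in place and the authorization step is complete, redeeming a refresh token is a plain POST to the Microsoft identity platform v2.0 token endpoint. The sketch below uses placeholders for the tenant, client, and token values, and the `scope` value is an assumption -- use the scope you requested during authorization.

```bash
# Placeholders only; the scope shown is an assumption, not a documented value.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'grant_type=refresh_token' \
  --data-urlencode 'client_id=<client-id>' \
  --data-urlencode 'client_secret=<client-secret>' \
  --data-urlencode 'scope=<client-id>/.default' \
  --data-urlencode 'refresh_token=<refresh-token>'
```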
energy-data-services | How To Integrate Airflow Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md | Title: Integrate airflow logs with Azure Monitor - Microsoft Microsoft Azure Data Manager for Energy Preview + Title: Integrate airflow logs with Azure Monitor - Microsoft Microsoft Azure Data Manager for Energy description: This is a how-to article on how to start collecting Airflow Task logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace. -In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Azure Data Manager for Energy Preview instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures. +In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Azure Data Manager for Energy instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures. ## Prerequisites In this article, you'll learn how to start collecting Airflow Logs for your Micr ## Enabling diagnostic settings to collect logs in a storage account-Every Azure Data Manager for Energy Preview instance comes inbuilt with an Azure Data Factory-managed Airflow instance. We collect Airflow logs for internal troubleshooting and debugging purposes. Airflow logs can be integrated with Azure Monitor in the following ways: +Every Azure Data Manager for Energy instance comes inbuilt with an Azure Data Factory-managed Airflow instance. We collect Airflow logs for internal troubleshooting and debugging purposes. Airflow logs can be integrated with Azure Monitor in the following ways: * Storage account * Log Analytics workspace To access logs via any of the above two options, you need to create a Diagnostic Follow the following steps to set up Diagnostic Settings: -1. Open Microsoft Azure Data Manager for Energy Preview' *Overview* page +1. Open Microsoft Azure Data Manager for Energy' *Overview* page 1. Select *Diagnostic Settings* from the left panel [](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png#lightbox) After a diagnostic setting is created for archiving Airflow task logs into a sto ## Enabling diagnostic settings to integrate logs with Log Analytics Workspace -You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings** under the left panel of your Microsoft Azure Data Manager for Energy Preview instance overview page. +You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings** under the left panel of your Microsoft Azure Data Manager for Energy instance overview page. [](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-choosing-destination-retention.png#lightbox) |
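The same diagnostic setting can also be created from the CLI. In the sketch below, the resource ID format and the log category name are assumptions -- confirm both against what the portal shows for your instance -- and the destination IDs are placeholders.

```bash
# Resource ID path and category name are assumptions; verify them in the portal first.
ADME_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instance-name>"

az monitor diagnostic-settings create \
  --name "airflow-task-logs" \
  --resource "$ADME_ID" \
  --logs '[{"category": "AirflowTaskLogs", "enabled": true}]' \
  --storage-account "<storage-account-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>"
```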
energy-data-services | How To Integrate Elastic Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md | Title: Integrate elastic logs with Azure Monitor - Microsoft Azure Data Manager for Energy Preview + Title: Integrate elastic logs with Azure Monitor - Microsoft Azure Data Manager for Energy description: This is a how-to article on how to start collecting ElasticSearch logs in Azure Monitor, archiving them to a storage account, and querying them in Log Analytics workspace. -In this article, you'll learn how to start collecting Elasticsearch logs for your Azure Data Manager for Energy Preview instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor. +In this article, you'll learn how to start collecting Elasticsearch logs for your Azure Data Manager for Energy instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor. ## Prerequisites In this article, you'll learn how to start collecting Elasticsearch logs for you ## Enabling Diagnostic Settings to collect logs in a storage account & a Log Analytics workspace-Every Azure Data Manager for Energy Preview instance comes inbuilt with a managed Elasticsearch service. We collect Elasticsearch logs for internal troubleshooting and debugging purposes. You can get access to these logs by integrating Elasticsearch logs with Azure Monitor. +Every Azure Data Manager for Energy instance comes inbuilt with a managed Elasticsearch service. We collect Elasticsearch logs for internal troubleshooting and debugging purposes. You can get access to these logs by integrating Elasticsearch logs with Azure Monitor. Each diagnostic setting has three basic parts: | Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) | | Destinations | One or more destinations to send the logs. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations but no more than one destination of a particular type. It should be a storage account, an Event Hubs namespace or an event hub. | -We support two destinations for your Elasticsearch logs from Azure Data Manager for Energy Preview instance: +We support two destinations for your Elasticsearch logs from Azure Data Manager for Energy instance: * Storage account * Log Analytics workspace We support two destinations for your Elasticsearch logs from Azure Data Manager ## Steps to enable diagnostic setting to collect Elasticsearch logs -1. Open *Azure Data Manager for Energy Preview* overview page +1. Open *Azure Data Manager for Energy* overview page 1. Select *Diagnostic Settings* from the left panel [](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-overview-page.png#lightbox) Go back to the Diagnostic Settings page. 
You would now see a new diagnostic sett ## View Elasticsearch logs in Log Analytics workspace or download them as JSON files using storage account ### How to view & query logs in Log Analytics workspace-The editor in Log Analytics workspace support Kusto (KQL) queries through which you can easily perform complicated queries to extract interesting logs data from the Elasticsearch service running in your Azure Data Manager for Energy Preview instance. +The editor in Log Analytics workspace support Kusto (KQL) queries through which you can easily perform complicated queries to extract interesting logs data from the Elasticsearch service running in your Azure Data Manager for Energy instance. * Run queries and see Elasticsearch logs in the Log Analytics workspace. After collecting resource logs as explained in this article, there are more capa * Create a log query alert to be proactively notified when interesting data is identified in your log data. [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md) -* Start collecting logs from other sources such as Airflow in your Azure Data Manager for Energy Preview instance. +* Start collecting logs from other sources such as Airflow in your Azure Data Manager for Energy instance. [How to Integrate Airflow logs with Azure Monitor](how-to-integrate-airflow-logs-with-azure-monitor.md) |
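Beyond the workspace editor, you can run the same kind of KQL query from the CLI. The table name in this sketch is an assumption -- use the table that appears in your workspace for the Elasticsearch category -- and the workspace GUID is a placeholder. It requires the `log-analytics` CLI extension.

```bash
# Table name is an assumption; replace it with the table your diagnostic setting populates.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query "OEPElasticsearch | where TimeGenerated > ago(1h) | take 20" \
  --output table
```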
energy-data-services | How To Integrate Osdu Service Logs With Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-osdu-service-logs-with-azure-monitor.md | Title: Integrate OSDU Service Logs with Azure Monitor - Microsoft Azure Data Manager for Energy Preview + Title: Integrate OSDU Service Logs with Azure Monitor - Microsoft Azure Data Manager for Energy description: This how-to article shows you how to integrate OSDU service logs with Azure Monitor. This feature helps you better troubleshoot, debug, & monitor the OSDU services. -Azure Data Manager for Energy Preview supports exporting OSDU Service Logs to Azure Monitor using a diagnostic setting. This feature helps you better troubleshoot, debug, & monitor the OSDU services. The instructions here are similar to how you would integrate other logs, such as Airflow and Elastic, with Azure Monitor. +Azure Data Manager for Energy supports exporting OSDU Service Logs to Azure Monitor using a diagnostic setting. This feature helps you better troubleshoot, debug, & monitor the OSDU services. The instructions here are similar to how you would integrate other logs, such as Airflow and Elastic, with Azure Monitor. ## Prerequisites Azure Data Manager for Energy Preview supports exporting OSDU Service Logs to Az ## Enabling diagnostic settings for OSDU service logs integration -1. Open Microsoft Azure Data Manager for Energy Preview *Overview* page. +1. Open Microsoft Azure Data Manager for Energy *Overview* page. 1. Select *Diagnostic Settings* from the left panel. [](media/how-to-integrate-osdu-service-logs-with-azure-monitor/diagnostic-settings-overview-page.png#lightbox) |
energy-data-services | How To Manage Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-audit-logs.md | Title: How to manage audit logs for Microsoft Azure Data Manager for Energy Preview -description: Learn how to use audit logs on Azure Data Manager for Energy Preview + Title: How to manage audit logs for Microsoft Azure Data Manager for Energy +description: Learn how to use audit logs on Azure Data Manager for Energy Last updated 04/11/2023 -#Customer intent: As a developer, I want to use audit logs to check audit trail for data plane APIs for Azure Data Manager for Energy Preview. +#Customer intent: As a developer, I want to use audit logs to check audit trail for data plane APIs for Azure Data Manager for Energy. OEPAuditLogs Learn about Managed Identity: > [!div class="nextstepaction"]-> [Managed Identity in Azure Data Manager for Energy Preview](how-to-use-managed-identity.md) +> [Managed Identity in Azure Data Manager for Energy](how-to-use-managed-identity.md) |
energy-data-services | How To Manage Data Security And Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md | Title: Data security and encryption in Microsoft Azure Data Manager for Energy Preview -description: Guide on security in Azure Data Manager for Energy Preview and how to set up customer managed keys on Azure Data Manager for Energy Preview + Title: Data security and encryption in Microsoft Azure Data Manager for Energy +description: Guide on security in Azure Data Manager for Energy and how to set up customer managed keys on Azure Data Manager for Energy -#Customer intent: As a developer, I want to set up customer-managed keys on Azure Data Manager for Energy Preview. +#Customer intent: As a developer, I want to set up customer-managed keys on Azure Data Manager for Energy. -# Data security and encryption in Azure Data Manager for Energy Preview +# Data security and encryption in Azure Data Manager for Energy -This article provides an overview of security features in Azure Data Manager for Energy Preview. It covers the major areas of [encryption at rest](../security/fundamentals/encryption-atrest.md), encryption in transit, TLS, https, microsoft-managed keys, and customer managed key. +This article provides an overview of security features in Azure Data Manager for Energy. It covers the major areas of [encryption at rest](../security/fundamentals/encryption-atrest.md), encryption in transit, TLS, https, microsoft-managed keys, and customer managed key. ## Encrypt data at rest -Azure Data Manager for Energy Preview uses several storage resources for storing metadata, user data, in-memory data etc. The platform uses service-side encryption to automatically encrypt all the data when it is persisted to the cloud. Data encryption at rest protects your data to help you to meet your organizational security and compliance commitments. All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. -In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. +Azure Data Manager for Energy uses several storage resources for storing metadata, user data, in-memory data etc. The platform uses service-side encryption to automatically encrypt all the data when it is persisted to the cloud. Data encryption at rest protects your data to help you to meet your organizational security and compliance commitments. All data in Azure Data Manager for Energy is encrypted with Microsoft-managed keys by default. +In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. ## Encrypt data in transit -Azure Data Manager for Energy Preview supports Transport Layer Security (TLS 1.2) protocol to protect data when itΓÇÖs traveling between the cloud services and customers. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, and algorithm flexibility. 
+Azure Data Manager for Energy supports Transport Layer Security (TLS 1.2) protocol to protect data when itΓÇÖs traveling between the cloud services and customers. TLS provides strong authentication, message privacy, and integrity (enabling detection of message tampering, interception, and forgery), interoperability, and algorithm flexibility. -In addition to TLS, when you interact with Azure Data Manager for Energy Preview, all transactions take place over HTTPS. +In addition to TLS, when you interact with Azure Data Manager for Energy, all transactions take place over HTTPS. -## Set up Customer Managed Keys (CMK) for Azure Data Manager for Energy Preview instance +## Set up Customer Managed Keys (CMK) for Azure Data Manager for Energy instance > [!IMPORTANT]-> You cannot edit CMK settings once the Azure Data Manager for Energy Preview instance is created. +> You cannot edit CMK settings once the Azure Data Manager for Energy instance is created. ### Prerequisites **Step 1- Configure the key vault** 1. You can use a new or existing key vault to store customer-managed keys. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../key-vault/general/overview.md) and [What is Azure Key Vault](../key-vault/general/basic-concepts.md)?-2. Using customer-managed keys with Azure Data Manager for Energy Preview requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created. +2. Using customer-managed keys with Azure Data Manager for Energy requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created. 3. To learn how to create a key vault with the Azure portal, see [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md). When you create the key vault, select Enable purge protection. [](media/how-to-manage-data-security-and-encryption/customer-managed-key-1-create-key-vault.png#lightbox) In addition to TLS, when you interact with Azure Data Manager for Energy Preview 3. It is recommended that the RSA key size is 3072, see [Configure customer-managed keys for your Azure Cosmos DB account | Microsoft Learn](../cosmos-db/how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault). **Step 3 - Choose a managed identity to authorize access to the key vault**-1. When you enable customer-managed keys for an existing Azure Data Manager for Energy Preview instance you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault. +1. When you enable customer-managed keys for an existing Azure Data Manager for Energy instance you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault. 2. You can create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). ### Configure customer-managed keys for an existing account-1. 
Create a **Azure Data Manager for Energy Preview** instance. +1. Create an **Azure Data Manager for Energy** instance. 2. Select the **Encryption** tab. - [](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png#lightbox) + [](media/how-to-manage-data-security-and-encryption/customer-managed-key-2-encryption-tab.png#lightbox) 3. In the encryption tab, select **Customer-managed keys (CMK)**. 4. To use CMK, you need to select the key vault where the key is stored. In addition to TLS, when you interact with Azure Data Manager for Energy Preview 12. Next, select "**Review+Create**" after completing the other tabs. 13. Select the "**Create**" button. -14. An Azure Data Manager for Energy Preview instance is created with customer-managed keys. +14. An Azure Data Manager for Energy instance is created with customer-managed keys. 15. Once CMK is enabled, you'll see its status on the **Overview** screen. [](media/how-to-manage-data-security-and-encryption/customer-managed-key-6-cmk-enabled-meds-overview.png#lightbox) |
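Two of the prerequisites above -- the RSA 3072 key and a user-assigned managed identity -- can be created ahead of time with the CLI. The names below are placeholders, and the key vault is assumed to already exist with soft delete and purge protection enabled.

```bash
# Hypothetical names; assumes the key vault already exists with purge protection enabled.
az keyvault key create --vault-name "my-adme-keyvault" --name "adme-cmk" --kty RSA --size 3072

# User-assigned managed identity that will be authorized to access the key.
az identity create --name "adme-cmk-identity" --resource-group "my-rg"
```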
energy-data-services | How To Manage Legal Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-legal-tags.md | Title: How to manage legal tags in Microsoft Azure Data Manager for Energy Preview -description: This article describes how to manage legal tags in Azure Data Manager for Energy Preview + Title: How to manage legal tags in Microsoft Azure Data Manager for Energy +description: This article describes how to manage legal tags in Azure Data Manager for Energy -In this article, you'll know how to manage legal tags in your Azure Data Manager for Energy Preview instance. A Legal tag is the entity that represents the legal status of data in the Azure Data Manager for Energy Preview instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Azure Data Manager for Energy Preview instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Azure Data Manager for Energy Preview instance. Legal tags are defined at a data partition level individually. +In this article, you'll know how to manage legal tags in your Azure Data Manager for Energy instance. A Legal tag is the entity that represents the legal status of data in the Azure Data Manager for Energy instance. Legal tag is a collection of properties that governs how data can be ingested and consumed. A legal tag is required for data to be [ingested](concepts-csv-parser-ingestion.md) into your Azure Data Manager for Energy instance. It's also required for the [consumption](concepts-index-and-search.md) of the data from your Azure Data Manager for Energy instance. Legal tags are defined at a data partition level individually. -While in Azure Data Manager for Energy Preview instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so-until certain legal requirements are fulfilled. -+While in Azure Data Manager for Energy instance, [entitlement service](concepts-entitlements.md) defines access to data for a given user(s), legal tag defines the overall access to the data across users. A user may have access to manage the data within a data partition however, they may not be able to do so-until certain legal requirements are fulfilled. ## Create a legal tag-Run the below curl command in Azure Cloud Bash to create a legal tag for a given data partition of your Azure Data Manager for Energy Preview instance. +Run the below curl command in Azure Cloud Bash to create a legal tag for a given data partition of your Azure Data Manager for Energy instance. 
```bash curl --location --request POST 'https://<URI>/api/legal/v1/legaltags' \ Run the below curl command in Azure Cloud Bash to create a legal tag for a given ``` ### Sample request-Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/legal/v1/legaltags' \ Consider an Azure Data Manager for Energy Preview instance named "medstest" with --header 'Content-Type: application/json' \ --data-raw '{ "name": "medstest-dp1-legal-tag",- "description": "Azure Data Manager for Energy Preview Legal Tag", + "description": "Azure Data Manager for Energy Legal Tag", "properties": { "contractId": "A1234", "countryOfOrigin": ["US"], Consider an Azure Data Manager for Energy Preview instance named "medstest" with ```JSON { "name": "medsStest-dp1-legal-tag",- "description": "Azure Data Manager for Energy Preview Legal Tag", + "description": "Azure Data Manager for Energy Legal Tag", "properties": { "countryOfOrigin": [ "US" The Create Legal Tag api, internally appends data-partition-id to legal tag name --header 'Content-Type: application/json' \ --data-raw '{ "name": "legal-tag",- "description": "Azure Data Manager for Energy Preview Legal Tag", + "description": "Azure Data Manager for Energy Legal Tag", "properties": { "contractId": "A1234", "countryOfOrigin": ["US"], The sample response will have data-partition-id appended to the legal tag name a ```JSON { "name": "medstest-dp1-legal-tag",- "description": "Azure Data Manager for Energy Preview Legal Tag", + "description": "Azure Data Manager for Energy Legal Tag", "properties": { "countryOfOrigin": [ "US" The sample response will have data-partition-id appended to the legal tag name a ``` ## Get a legal tag-Run the below curl command in Azure Cloud Bash to get the legal tag associated with a data partition of your Azure Data Manager for Energy Preview instance. +Run the below curl command in Azure Cloud Bash to get the legal tag associated with a data partition of your Azure Data Manager for Energy instance. ```bash curl --location --request GET 'https://<URI>/api/legal/v1/legaltags/<legal-tag-name>' \ Run the below curl command in Azure Cloud Bash to get the legal tag associated w ``` ### Sample request-Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request GET 'https://medstest.energy.azure.com/api/legal/v1/legaltags/medstest-dp1-legal-tag' \ Consider an Azure Data Manager for Energy Preview instance named "medstest" with ```JSON { "name": "medstest-dp1-legal-tag",- "description": "Azure Data Manager for Energy Preview Legal Tag", + "description": "Azure Data Manager for Energy Legal Tag", "properties": { "countryOfOrigin": [ "US" |
energy-data-services | How To Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md | Title: How to manage users in Microsoft Azure Data Manager for Energy Preview -description: This article describes how to manage users in Azure Data Manager for Energy Preview + Title: How to manage users in Microsoft Azure Data Manager for Energy +description: This article describes how to manage users in Azure Data Manager for Energy -In this article, you'll know how to manage users in Azure Data Manager for Energy Preview. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Azure Data Manager for Energy Preview instance. For more information about Azure Data Manager for Energy Preview entitlements, see [entitlement services](concepts-entitlements.md). -+In this article, you'll know how to manage users in Azure Data Manager for Energy. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Azure Data Manager for Energy instance. For more information about Azure Data Manager for Energy entitlements, see [entitlement services](concepts-entitlements.md). ## Prerequisites -Create an Azure Data Manager for Energy Preview instance using the tutorial at [How to create Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). +Create an Azure Data Manager for Energy instance using the tutorial at [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). -You will need to pass parameters for generating the access token, which you'll need to make valid calls to the Entitlements API of your Azure Data Manager for Energy Preview instance. You will also need these parameters for different user management requests to the Entitlements API. Hence Keep the following values handy for these actions. +You will need to pass parameters for generating the access token, which you'll need to make valid calls to the Entitlements API of your Azure Data Manager for Energy instance. You will also need these parameters for different user management requests to the Entitlements API. Hence Keep the following values handy for these actions. #### Find `tenant-id` Navigate to the Azure Active Directory account for your organization. One way to do so is by searching for "Azure Active Directory" in the Azure portal's search bar. Once there, locate `tenant-id` under the basic information section in the *Overview* tab. Copy the `tenant-id` and paste in an editor to be used later. Navigate to the Azure Active Directory account for your organization. One way to :::image type="content" source="media/how-to-manage-users/tenant-id.png" alt-text="Screenshot of finding the tenant-id."::: #### Find `client-id`-Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy Preview *Overview* page. Copy the `client-id` and paste in an editor to be used later. 
+Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy *Overview* page. Copy the `client-id` and paste in an editor to be used later. > [!IMPORTANT]-> The 'client-id' that is passed as values in the entitlement API calls needs to be the same which was used for provisioning of your Azure Data Manager for Energy Preview instance. +> The 'client-id' that is passed as values in the entitlement API calls needs to be the same which was used for provisioning of your Azure Data Manager for Energy instance. :::image type="content" source="media/how-to-manage-users/client-id-or-app-id.png" alt-text="Screenshot of finding the client-id for your registered App."::: #### Find `client-secret`-Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy Preview instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. +Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. > [!CAUTION] > Don't forget to record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page at the time of creation of 'client secret'. :::image type="content" source="media/how-to-manage-users/client-secret.png" alt-text="Screenshot of finding the client secret."::: -#### Find the `url`for your Azure Data Manager for Energy Preview instance -Navigate to your Azure Data Manager for Energy Preview *Overview* page on Azure portal. Copy the URI from the essentials pane. +#### Find the `url`for your Azure Data Manager for Energy instance +Navigate to your Azure Data Manager for Energy *Overview* page on Azure portal. Copy the URI from the essentials pane. #### Find the `data-partition-id` for your group-You have two ways to get the list of data-partitions in your Azure Data Manager for Energy Preview instance. -- One option is to navigate *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy Preview UI.+You have two ways to get the list of data-partitions in your Azure Data Manager for Energy instance. +- One option is to navigate *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy UI. -- Another option is by clicking on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy Preview *Overview* page. +- Another option is by clicking on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy *Overview* page. 
## Generate access token You need to generate access token to use entitlements API. Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the pre-requisites step. curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa "access_token": "abcdefgh123456............." } ```-Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Azure Data Manager for Energy Preview instance. +Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Azure Data Manager for Energy instance. ## User management activities -You can manage users' access to your Azure Data Manager for Energy Preview instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. If you are managing an application's access to your instance or data partition, then you must find and use the application ID (or client ID) instead of the OID. +You can manage users' access to your Azure Data Manager for Energy instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. If you are managing an application's access to your instance or data partition, then you must find and use the application ID (or client ID) instead of the OID. -You'll need to input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy Preview Instance. `object-id` (OID) is the Azure Active Directory User Object ID. +You'll need to input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy Instance. `object-id` (OID) is the Azure Active Directory User Object ID. :::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot of finding the object-id from Azure Active Directory."::: You'll need to input the `object-id` (OID) of the users (or the application or c ### Get the list of all available groups -Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy Preview instance and its data partitions. +Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy instance and its data partitions. 
```bash curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \ The value to be sent for the param **"email"** is the **Object_ID (OID)** of the **Sample request** -Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/users@medstest-dp1.dataservices.energy/members' \ The value to be sent for the param **"email"** is the **Object_ID (OID)** of the **Sample request** -Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.search.user@medstest-dp1.dataservices.energy/members' \ Run the below curl command in Azure Cloud Bash to get all the groups associated **Sample request** -Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request GET 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \ Consider an Azure Data Manager for Energy Preview instance named "medstest" with ### Delete entitlement groups of a given user -Run the below curl command in Azure Cloud Bash to delete a given user to your Azure Data Manager for Energy Preview instance data partition. +Run the below curl command in Azure Cloud Bash to delete a given user to your Azure Data Manager for Energy instance data partition. As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER that can manage users in that group. As stated above, **DO NOT** delete the OWNER of a group unless you have another **Sample request** -Consider an Azure Data Manager for Energy Preview instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" ```bash curl --location --request DELETE 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \ No output for a successful response ## Next steps <!-- Add a context sentence for the following links -->-Create a legal tag for your Azure Data Manager for Energy Preview instance's data partition. +Create a legal tag for your Azure Data Manager for Energy instance's data partition. > [!div class="nextstepaction"] > [How to manage legal tags](how-to-manage-legal-tags.md) -Begin your journey by ingesting data into your Azure Data Manager for Energy Preview instance. +Begin your journey by ingesting data into your Azure Data Manager for Energy instance. > [!div class="nextstepaction"] > [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md) > [!div class="nextstepaction"] |
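Taken together, the token and entitlements calls above fit naturally into one helper script. The sketch below is a non-authoritative example: it assumes the standard Azure AD v2.0 client-credentials token endpoint and the `<client-id>/.default` scope, requires `jq`, and reuses the hypothetical `medstest`/`dp1` values.

```bash
#!/usr/bin/env bash
# Illustrative sketch: obtain an access token with the client-credentials
# flow, then list entitlement groups. The token endpoint and scope follow the
# standard Azure AD v2.0 pattern and are assumptions, not article content.
TENANT_ID="<tenant-id>"
CLIENT_ID="<client-id>"
CLIENT_SECRET="<client-secret>"
ADME_URI="medstest.energy.azure.com"
DATA_PARTITION="medstest-dp1"

ACCESS_TOKEN=$(curl --silent --request POST \
  "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=${CLIENT_ID}" \
  --data-urlencode "client_secret=${CLIENT_SECRET}" \
  --data-urlencode "scope=${CLIENT_ID}/.default" | jq -r '.access_token')

# List every group available in the data partition, as in the GET call above.
curl --location --request GET "https://${ADME_URI}/api/entitlements/v2/groups/" \
  --header "data-partition-id: ${DATA_PARTITION}" \
  --header "Authorization: Bearer ${ACCESS_TOKEN}"
```
The same pattern, token first and then an entitlements call with the `data-partition-id` header, applies to the add-member, list-member, and delete-member requests shown above.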
energy-data-services | How To Set Up Private Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md | Title: Create a private endpoint for Microsoft Azure Data Manager for Energy Preview -description: Learn how to set up private endpoints for Azure Data Manager for Energy Preview by using Azure Private Link. + Title: Create a private endpoint for Microsoft Azure Data Manager for Energy +description: Learn how to set up private endpoints for Azure Data Manager for Energy by using Azure Private Link. Last updated 09/29/2022 -#Customer intent: As a developer, I want to set up private endpoints for Azure Data Manager for Energy Preview. +#Customer intent: As a developer, I want to set up private endpoints for Azure Data Manager for Energy. -# Create a private endpoint for Azure Data Manager for Energy Preview +# Create a private endpoint for Azure Data Manager for Energy [Azure Private Link](../private-link/private-link-overview.md) provides private connectivity from a virtual network to Azure platform as a service (PaaS). It simplifies the network architecture and secures the connection between endpoints in Azure by eliminating data exposure to the public internet. -By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. +By using Azure Private Link, you can connect to an Azure Data Manager for Energy instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Energy instance over these private IP addresses. -You can connect to an Azure Data Manager for Energy Preview instance that's configured with Private Link by using an automatic or manual approval method. To learn more, see the [Private Link documentation](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow). +You can connect to an Azure Data Manager for Energy instance that's configured with Private Link by using an automatic or manual approval method. To learn more, see the [Private Link documentation](../private-link/private-endpoint-overview.md#access-to-a-private-link-resource-using-approval-workflow). -This article describes how to set up a private endpoint for Azure Data Manager for Energy Preview. -+This article describes how to set up a private endpoint for Azure Data Manager for Energy. > [!NOTE] > Terraform currently does not support private endpoint creation for Azure Data Manager for Energy. ## Prerequisites -[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Energy Preview instance. This virtual network allows automatic approval of the Private Link endpoint. +[Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Energy instance. This virtual network allows automatic approval of the Private Link endpoint. ## Create a private endpoint during instance provisioning by using the Azure portal When you see Validation passed, select the **Create** button. 
## Create a private endpoint post instance provisioning by using the Azure portal -Use the following steps to create a private endpoint for an existing Azure Data Manager for Energy Preview instance by using the Azure portal: +Use the following steps to create a private endpoint for an existing Azure Data Manager for Energy instance by using the Azure portal: -1. From the **All resources** pane, choose an Azure Data Manager for Energy Preview instance. +1. From the **All resources** pane, choose an Azure Data Manager for Energy instance. 1. Select **Networking** from the list of settings. 1. On the **Public Access** tab, select **Enabled from all networks** to allow traffic from all networks. Use the following steps to create a private endpoint for an existing Azure Data [](media/how-to-manage-private-links/private-links-3-basics.png#lightbox) > [!NOTE]- > Automatic approval happens only when the Azure Data Manager for Energy Preview instance and the virtual network for the private endpoint are in the same subscription. + > Automatic approval happens only when the Azure Data Manager for Energy instance and the virtual network for the private endpoint are in the same subscription. 1. Select **Next: Resource**. On the **Resource** page, confirm the following information: Use the following steps to create a private endpoint for an existing Azure Data |--|--| |**Subscription**| Your subscription| |**Resource type**| **Microsoft.OpenEnergyPlatform/energyServices**|- |**Resource**| Your Azure Data Manager for Energy Preview instance| - |**Target sub-resource**| **Azure Data Manager for Energy** (for Azure Data Manager for Energy Preview) by default| + |**Resource**| Your Azure Data Manager for Energy instance| + |**Target sub-resource**| **Azure Data Manager for Energy** (for Azure Data Manager for Energy) by default| [](media/how-to-manage-private-links/private-links-4-resource.png#lightbox) Use the following steps to create a private endpoint for an existing Azure Data [](media/how-to-manage-private-links/private-links-8-request-response.png#lightbox) -1. Select the **Azure Data Manager for Energy Preview** instance, select **Networking**, and then select the **Private Access** tab. Confirm that your newly created private endpoint connection appears in the list. +1. Select the **Azure Data Manager for Energy** instance, select **Networking**, and then select the **Private Access** tab. Confirm that your newly created private endpoint connection appears in the list. [](media/how-to-manage-private-links/private-links-9-auto-approved.png#lightbox) > [!NOTE]-> When the Azure Data Manager for Energy Preview instance and the virtual network are in different tenants or subscriptions, you have to manually approve the request to create a private endpoint. The **Approve** and **Reject** buttons appear on the **Private Access** tab. +> When the Azure Data Manager for Energy instance and the virtual network are in different tenants or subscriptions, you have to manually approve the request to create a private endpoint. The **Approve** and **Reject** buttons appear on the **Private Access** tab. > > [](media/how-to-manage-private-links/private-links-10-awaiting-approval.png#lightbox) Use the following steps to create a private endpoint for an existing Azure Data <!-- Add a context sentence for the following links --> To learn more about using Customer Lockbox as an interface to review and approve or reject access requests. 
> [!div class="nextstepaction"]-> [Use Lockbox for Azure Data Manager for Energy Preview](how-to-create-lockbox.md) +> [Use Lockbox for Azure Data Manager for Energy](how-to-create-lockbox.md) |
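For readers who script their networking instead of using the portal flow described above, an equivalent private endpoint can be created with the Azure CLI. This is a hedged sketch: the resource type matches the table above, but the resource names and the `--group-id` value are placeholders, not documented values.

```bash
# Illustrative sketch only; substitute your own names and verify the
# sub-resource (group id) exposed by your Azure Data Manager for Energy
# instance before running.
az network private-endpoint create \
  --name "adme-private-endpoint" \
  --resource-group "<resource-group>" \
  --vnet-name "<vnet-name>" \
  --subnet "<subnet-name>" \
  --connection-name "adme-private-endpoint-connection" \
  --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instance-name>" \
  --group-id "<target-sub-resource>"
```
Automatic approval still requires the instance and the virtual network to be in the same subscription, as noted above; otherwise the connection waits for manual approval on the Private Access tab.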
energy-data-services | How To Upload Large Files Using File Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-upload-large-files-using-file-service.md | Title: How to upload large files using file service API in Microsoft Azure Data Manager for Energy Preview -description: This article describes how to to upload large files using File service API in Microsoft Azure Data Manager for Energy Preview + Title: How to upload large files using file service API in Microsoft Azure Data Manager for Energy +description: This article describes how to to upload large files using File service API in Microsoft Azure Data Manager for Energy Last updated 06/13/2023 -# How to upload files in Azure Data Manager for Energy Preview using File service -In this article, you know how to upload large files (~5GB) using File service API in Microsoft Azure Data Manager for Energy Preview. The upload process involves fetching a signed URL from [File API](https://community.opengroup.org/osdu/platform/system/file/-/tree/master/) and then using the signed URL to store the file into Azure Blob Storage +# How to upload files in Azure Data Manager for Energy using File service +In this article, you know how to upload large files (~5GB) using File service API in Microsoft Azure Data Manager for Energy. The upload process involves fetching a signed URL from [File API](https://community.opengroup.org/osdu/platform/system/file/-/tree/master/) and then using the signed URL to store the file into Azure Blob Storage ## Generate a signed URL-Run the below curl command in Azure Cloud Bash to get a signed URL from file service for a given data partition of your Azure Data Manager for Energy Preview resource. +Run the below curl command in Azure Cloud Bash to get a signed URL from file service for a given data partition of your Azure Data Manager for Energy resource. ```bash curl --location 'https://<URI>/api/file/v2/files/uploadURL' \ Run the below curl command in Azure Cloud Bash to get a signed URL from file ser ``` ### Sample request-Consider an Azure Data Manager for Energy Preview resource named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy resource named "medstest" with a data partition named "dp1" ```bash curl --location --request POST 'https://medstest.energy.azure.com/api/file/v2/files/uploadURL' \ To upload files with sizes >= 5 GB, we would need [azcopy](https://github.com/Az ``` ## Next steps-Begin your journey by ingesting data into your Azure Data Manager for Energy Preview resource. +Begin your journey by ingesting data into your Azure Data Manager for Energy resource. > [!div class="nextstepaction"] > [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md) > [!div class="nextstepaction"] |
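The two steps described above, fetching a signed URL and then uploading with azcopy, can be combined as follows. This is a sketch under stated assumptions: it requires `jq`, assumes the signed URL is returned under `Location.SignedURL` as in the OSDU File service, and issues the request as a GET even though the sample above shows POST, so verify the method against your File service version.

```bash
#!/usr/bin/env bash
# Illustrative sketch: request a signed upload URL, then push a large local
# file to it with azcopy. Response field names and the HTTP method are
# assumptions based on the OSDU File service; adjust if your version differs.
ADME_URI="medstest.energy.azure.com"
DATA_PARTITION="medstest-dp1"

SIGNED_URL=$(curl --silent --location \
  "https://${ADME_URI}/api/file/v2/files/uploadURL" \
  --header "data-partition-id: ${DATA_PARTITION}" \
  --header "Authorization: Bearer ${ACCESS_TOKEN}" | jq -r '.Location.SignedURL')

# azcopy streams the file directly to the pre-signed blob URL.
azcopy copy "./large-file.segy" "${SIGNED_URL}"
```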
energy-data-services | How To Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-use-managed-identity.md | Title: Use managed identities for Microsoft Azure Data Manager for Energy Preview on Azure -description: Learn how to use a managed identity to access Azure Data Manager for Energy Preview from other Azure services. + Title: Use managed identities for Microsoft Azure Data Manager for Energy on Azure +description: Learn how to use a managed identity to access Azure Data Manager for Energy from other Azure services. Last updated 01/04/2023 -#Customer intent: As a developer, I want to use a managed identity to access Azure Data Manager for Energy Preview from other Azure services, such as Azure Functions. +#Customer intent: As a developer, I want to use a managed identity to access Azure Data Manager for Energy from other Azure services, such as Azure Functions. -# Use a managed identity to access Azure Data Manager for Energy Preview from other Azure services +# Use a managed identity to access Azure Data Manager for Energy from other Azure services -This article describes how to access the data plane or control plane of Azure Data Manager for Energy Preview from other Microsoft Azure services by using a *managed identity*. +This article describes how to access the data plane or control plane of Azure Data Manager for Energy from other Microsoft Azure services by using a *managed identity*. -There's a need for services such as Azure Functions to be able to consume Azure Data Manager for Energy Preview APIs. This interoperability allows you to use the best capabilities of multiple Azure services. +There's a need for services such as Azure Functions to be able to consume Azure Data Manager for Energy APIs. This interoperability allows you to use the best capabilities of multiple Azure services. -For example, you can write a script in Azure Functions to ingest data in Azure Data Manager for Energy Preview. In that scenario, you should assume that Azure Functions is the source service and Azure Data Manager for Energy Preview is the target service. +For example, you can write a script in Azure Functions to ingest data in Azure Data Manager for Energy. In that scenario, you should assume that Azure Functions is the source service and Azure Data Manager for Energy is the target service. -This article walks you through the five main steps for configuring Azure Functions to access Azure Data Manager for Energy Preview. +This article walks you through the five main steps for configuring Azure Functions to access Azure Data Manager for Energy. ## Overview of managed identities -A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Azure Data Manager for Energy Preview control plane or data plane for any operation can use a managed identity to do so. +A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Azure Data Manager for Energy control plane or data plane for any operation can use a managed identity to do so. 
There are two types of managed identities: There are two types of managed identities: To learn more about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md). -Currently, other services can connect to Azure Data Manager for Energy Preview by using a system-assigned or user-assigned managed identity. However, Azure Data Manager for Energy Preview doesn't support system-assigned managed identities. +Currently, other services can connect to Azure Data Manager for Energy by using a system-assigned or user-assigned managed identity. However, Azure Data Manager for Energy doesn't support system-assigned managed identities. -For the scenario in this article, you'll use a user-assigned managed identity in Azure Functions to call a data plane API in Azure Data Manager for Energy Preview. +For the scenario in this article, you'll use a user-assigned managed identity in Azure Functions to call a data plane API in Azure Data Manager for Energy. ## Prerequisites Before you begin, create the following resources: -* [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +* [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) * [Azure function app](../azure-functions/functions-create-function-app-portal.md) Before you begin, create the following resources: ## Step 1: Retrieve the object ID -To retrieve the object ID for the user-assigned identity that will access the Azure Data Manager for Energy Preview APIs: +To retrieve the object ID for the user-assigned identity that will access the Azure Data Manager for Energy APIs: 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Go to the managed identity, and then select **Overview**. Retrieve the application ID of the user-assigned identity by using the object ID ## Step 4: Add the application ID to entitlement groups -Next, add the application ID to the appropriate groups that will use the entitlement service to access Azure Data Manager for Energy Preview APIs. The following example adds the application ID to two groups: +Next, add the application ID to the appropriate groups that will use the entitlement service to access Azure Data Manager for Energy APIs. The following example adds the application ID to two groups: * users@[partition ID].dataservices.energy * users.datalake.editors@[partition ID].dataservices.energy To add the application ID: * Tenant ID * Client ID * Client secret- * Azure Data Manager for Energy Preview URI + * Azure Data Manager for Energy URI * Data partition ID * [Access token](how-to-manage-users.md#prerequisites) * Application ID of the managed identity To add the application ID: 1. To add the application ID to the users@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure: ```bash- curl --location --request POST 'https://<Azure Data Manager for Energy Preview URI>/api/entitlements/v2/groups/users@ <data-partition-id>.dataservices.energy/members' \ + curl --location --request POST 'https://<Azure Data Manager for Energy URI>/api/entitlements/v2/groups/users@ <data-partition-id>.dataservices.energy/members' \ --header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer \ --header 'Content-Type: application/json' \ To add the application ID: 1. 
To add the application ID to the users.datalake.editors@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure: ```bash- curl --location --request POST 'https://<Azure Data Manager for Energy Preview URI>/api/entitlements/v2/groups/ users.datalake.editors@ <data-partition-id>.dataservices.energy/members' \ + curl --location --request POST 'https://<Azure Data Manager for Energy URI>/api/entitlements/v2/groups/ users.datalake.editors@ <data-partition-id>.dataservices.energy/members' \ --header 'data-partition-id: <data-partition-id>' \ --header 'Authorization: Bearer \ --header 'Content-Type: application/json' \ To add the application ID: ## Step 5: Generate a token -Now Azure Functions is ready to access Azure Data Manager for Energy Preview APIs. +Now Azure Functions is ready to access Azure Data Manager for Energy APIs. -The Azure function generates a token by using the user-assigned identity. The function uses the application ID that's present in the Azure Data Manager for Energy Preview instance while generating the token. +The Azure function generates a token by using the user-assigned identity. The function uses the application ID that's present in the Azure Data Manager for Energy instance while generating the token. Here's an example of the Azure function code: from msrestazure.azure_active_directory import MSIAuthentication def main(req: func.HttpRequest) -> str: logging.info('Python HTTP trigger function processed a request.') - //To authenticate by using a managed identity, you need to pass the Azure Data Manager for Energy Preview application ID as the resource. + //To authenticate by using a managed identity, you need to pass the Azure Data Manager for Energy application ID as the resource. //To use a user-assigned identity, you should include the //client ID as an additional parameter. //Managed identity using user-assigned identity: MSIAuthentication(client_id, resource) def main(req: func.HttpRequest) -> str: creds = MSIAuthentication(client_id="<client_id_of_managed_identity>", resource="<meds_app_id>") url = "https://<meds-uri>/api/entitlements/v2/groups" payload = {}- // Passing the data partition ID of Azure Data Manager for Energy Preview in headers along with the token received using the managed instance. + // Passing the data partition ID of Azure Data Manager for Energy in headers along with the token received using the managed instance. headers = { 'data-partition-id': '<data partition id>', 'Authorization': 'Bearer ' + creds.token["access_token"] You should get the following successful response from Azure Functions: [](media/how-to-use-managed-identity/5-azure-function-success.png#lightbox) -With the preceding steps completed, you can now use Azure Functions to access Azure Data Manager for Energy Preview APIs with appropriate use of managed identities. +With the preceding steps completed, you can now use Azure Functions to access Azure Data Manager for Energy APIs with appropriate use of managed identities. ## Next steps Learn about Lockbox: > [!div class="nextstepaction"]-> [Lockbox in Azure Data Manager for Energy Preview](how-to-create-lockbox.md) +> [Lockbox in Azure Data Manager for Energy](how-to-create-lockbox.md) |
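The Python function above uses `MSIAuthentication` to obtain a token; an equivalent, purely illustrative way to exercise the same flow from any Azure resource with the user-assigned identity attached is to query the Instance Metadata Service (IMDS) directly. The IMDS endpoint and `api-version` are standard Azure values, and the entitlements call mirrors the one in the function code; everything else is a placeholder.

```bash
#!/usr/bin/env bash
# Illustrative sketch: request a managed-identity token from IMDS (only works
# from inside the Azure resource that has the identity attached), then call
# the entitlements groups endpoint. Requires jq; placeholders are hypothetical.
IDENTITY_CLIENT_ID="<client-id-of-user-assigned-identity>"
ADME_APP_ID="<app-id-used-to-provision-the-instance>"

TOKEN=$(curl --silent --header "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=${ADME_APP_ID}&client_id=${IDENTITY_CLIENT_ID}" \
  | jq -r '.access_token')

curl --location --request GET "https://<meds-uri>/api/entitlements/v2/groups" \
  --header "data-partition-id: <data-partition-id>" \
  --header "Authorization: Bearer ${TOKEN}"
```
This only succeeds after the identity's application ID has been added to the entitlement groups in step 4.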
energy-data-services | Overview Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md | Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy Preview + Title: Overview of domain data management services - Microsoft Azure Data Manager for Energy description: This article provides an overview of Domain data management services -Domain data management services (DDMS) store, access, and retrieve metadata and bulk data from applications connected to the data platform. Developers, therefore, use DDMS to deliver seamless and secure consumption of data in the applications they build on Azure Data Manager for Energy Preview. The Azure Data Manager for Energy Preview suite of DDMS adheres to [Open Subsurface Data Universe](https://osduforum.org/) (OSDU™) standards and provides enhancements in performance, geo-availability, and access controls. DDMS service is optimized for each data type and can be extended to accommodate new data types. The DDMS service preserves raw data and offers multi format support and conversion for consuming applications such as Petrel while tracking lineage. Data within the DDMS service is discoverable and governed by entitlement and legal tags. -+Domain data management services (DDMS) store, access, and retrieve metadata and bulk data from applications connected to the data platform. Developers, therefore, use DDMS to deliver seamless and secure consumption of data in the applications they build on Azure Data Manager for Energy. The Azure Data Manager for Energy suite of DDMS adheres to [Open Subsurface Data Universe](https://osduforum.org/) (OSDU™) standards and provides enhancements in performance, geo-availability, and access controls. DDMS service is optimized for each data type and can be extended to accommodate new data types. The DDMS service preserves raw data and offers multi format support and conversion for consuming applications such as Petrel while tracking lineage. Data within the DDMS service is discoverable and governed by entitlement and legal tags. ### OSDU™ definition Domain data management services (DDMS) store, access, and retrieve metadata and ### Frictionless Exploration and Production(E&P) -The Azure Data Manager for Energy Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they can achieve unparalleled streaming performance and use the standards and output from OSDU™. The Azure DDMS service includes the OSDU™ DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU™ community DDMS to ensure compatibility and architectural alignment. +The Azure Data Manager for Energy DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they can achieve unparalleled streaming performance and use the standards and output from OSDU™. The Azure DDMS service includes the OSDU™ DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU™ community DDMS to ensure compatibility and architectural alignment. ### Seamless connection between applications and data -You can deploy applications on top of Azure Data Manager for Energy Preview that has been developed as per the OSDU™ standard. They're able to connect applications to Core Services and DDMS without spending extensive cycles on deployment. 
Customers can also easily connect DELFI to Azure Data Manager for Energy Preview, eliminating the cycles associated with Petrel deployments and connection to data management systems. By connecting applications to DDMS service, Geoscientists can execute integrated E&P workflows with unparalleled performance on Azure and use OSDU™ core services. For example, a geophysicist can pick well ties on a seismic volume in Petrel and stream data from the seismic DMS. +You can deploy applications on top of Azure Data Manager for Energy that has been developed as per the OSDU™ standard. They're able to connect applications to Core Services and DDMS without spending extensive cycles on deployment. Customers can also easily connect DELFI to Azure Data Manager for Energy, eliminating the cycles associated with Petrel deployments and connection to data management systems. By connecting applications to DDMS service, Geoscientists can execute integrated E&P workflows with unparalleled performance on Azure and use OSDU™ core services. For example, a geophysicist can pick well ties on a seismic volume in Petrel and stream data from the seismic DMS. ## Types of DMS OSDU™ DMS supports the following Seismic data is a fundamental data type for oil and gas exploration. Seismic dat Due to this extraordinary data size, geoscientists working on-premises struggle to use seismic data in domain applications. They suffer from crashes as the seismic dataset exceeds their workstation's RAM, which leads to significant non-productive time. To achieve performance needed for domain workflows, geoscientists must chunk a seismic dataset and view each chunk in isolation. As a result, users suffer from the time spent wrangling seismic data and the opportunity cost of missing the significant picture view of the subsurface and target reservoirs. -The seismic DMS is part of the OSDU™ platform and enables users to connect seismic data to cloud storage to applications. It allows secure access to metadata associated with seismic data to efficiently retrieve and handle large blocks of data for OpenVDS, ZGY, and other seismic data formats. The DMS therefore enables users to stream huge amounts of data in OSDU™ compliant applications in real time. Enabling the seismic DMS on Azure Data Manager for Energy Preview opens a pathway for Azure customers to bring their seismic data to the cloud and take advantage of Azure storage and high performance computing. +The seismic DMS is part of the OSDU™ platform and enables users to connect seismic data to cloud storage to applications. It allows secure access to metadata associated with seismic data to efficiently retrieve and handle large blocks of data for OpenVDS, ZGY, and other seismic data formats. The DMS therefore enables users to stream huge amounts of data in OSDU™ compliant applications in real time. Enabling the seismic DMS on Azure Data Manager for Energy opens a pathway for Azure customers to bring their seismic data to the cloud and take advantage of Azure storage and high performance computing. ### OSDU™ - Wellbore DMS Here are the services that the Wellbore DMS offers - The Well Delivery DMS stores critical drilling domain information related to the planning and execution of a well. Throughout a drilling program, engineers and domain experts need to access a wide variety of data types including activities, trajectories, risks, subsurface information, equipment used, fluid and cementing, rig utilization, and reports. 
Integrating this collection of data types together are the cornerstone to drilling insights. At the same time, until now, there was no industry wide standardization or enforced format. The common standards the Well Delivery DMS enables is critical to the Drilling Value Chain as it connects a diverse group of personas including operations, oil companies, service companies, logistics companies, etc. ### SLB™ - Petrel Data Services-Geoscientists working in [Petrel](https://www.software.slb.com/products/petrel) build Petrel Projects to store, track, share, and communicate their technical work. A Petrel project stores associated data in a ```.PET``` manifest file. It also keeps track of your windows within Petrel and setup. Petrel Data Services is an open DMS and doesn't require any additional licensing to get started. You can ingest Petrel projects to Petrel Data Services using OpenAPIs. By moving to Petrel on Azure Data Manager for Energy Preview, you can use Petrel Data Services Project Explorer UI to discover all the Petrel projects across your organization. You can create and save projects as well as track version history and experience unparalleled performance. This enables you to collaborate in real time with data permanently stored in Azure Data Manager for Energy. +Geoscientists working in [Petrel](https://www.software.slb.com/products/petrel) build Petrel Projects to store, track, share, and communicate their technical work. A Petrel project stores associated data in a ```.PET``` manifest file. It also keeps track of your windows within Petrel and setup. Petrel Data Services is an open DMS and doesn't require any additional licensing to get started. You can ingest Petrel projects to Petrel Data Services using OpenAPIs. By moving to Petrel on Azure Data Manager for Energy, you can use Petrel Data Services Project Explorer UI to discover all the Petrel projects across your organization. You can create and save projects as well as track version history and experience unparalleled performance. This enables you to collaborate in real time with data permanently stored in Azure Data Manager for Energy.
## Next steps Learn more about DDMS concepts. |
energy-data-services | Overview Microsoft Energy Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md | Title: What is Microsoft Azure Data Manager for Energy Preview? -description: This article provides an overview of Azure Data Manager for Energy Preview -+ Title: What is Microsoft Azure Data Manager for Energy? +description: This article provides an overview of Azure Data Manager for Energy + -+ Last updated 02/08/2023 -# What is Azure Data Manager for Energy Preview? +# What is Azure Data Manager for Energy? -Azure Data Manager for Energy Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU™ Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Azure Data Manager for Energy Preview ensures compatibility with evolving community standards like OSDU™ and enables value addition through interoperability with both first-party and third-party solutions. -+Azure Data Manager for Energy is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU™ Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Azure Data Manager for Energy ensures compatibility with evolving community standards like OSDU™ and enables value addition through interoperability with both first-party and third-party solutions. ## Principles -Azure Data Manager for Energy Preview conforms to the following principles: +Azure Data Manager for Energy conforms to the following principles: ### Fully managed OSDU™ platform -Azure Data Manager for Energy Preview is a first-party PaaS (Platform as a Service) offering where Microsoft manages the deployment, monitoring, management, scale, security, updates, and upgrades of the service so that the customers can focus on the value from the platform. Microsoft offers seamless upgrades to the latest OSDU™ milestone versions after testing and validation. +Azure Data Manager for Energy is a first-party PaaS (Platform as a Service) offering where Microsoft manages the deployment, monitoring, management, scale, security, updates, and upgrades of the service so that the customers can focus on the value from the platform. Microsoft offers seamless upgrades to the latest OSDU™ milestone versions after testing and validation. -Furthermore, Azure Data Manager for Energy Preview provides security capabilities like encryption for data-in-transit and data-at-rest. The authentication and authorization are provided by Azure Active Directory. Microsoft also assumes the responsibility of providing regular security patches and updates. +Furthermore, Azure Data Manager for Energy provides security capabilities like encryption for data-in-transit and data-at-rest. The authentication and authorization are provided by Azure Active Directory. Microsoft also assumes the responsibility of providing regular security patches and updates. 
-Azure Data Manager for Energy Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed. +Azure Data Manager for Energy also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed. As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from SLB, including Petrel to provide quick time to value. Microsoft will provide support for the platform to enable our customers' use cas ### Accelerated innovation with openness in mind -Azure Data Manager for Energy Preview is compatible with the OSDU™ Technical Standard enables seamless integration of existing applications that have been developed in alignment with the emerging requirements of the OSDU™ Standard. +Azure Data Manager for Energy is compatible with the OSDU™ Technical Standard enables seamless integration of existing applications that have been developed in alignment with the emerging requirements of the OSDU™ Standard. The platform's openness and integration with Microsoft Azure Marketplace brings industry-leading applications, solutions, and integration services offered by our extensive partner ecosystem to our customers. ### Extensibility with the Microsoft ecosystem -Most of our customers rely on ubiquitous tools and applications from Microsoft. The Azure Data Manager for Energy Preview platform is piloting how it can seamlessly work with deeply used Microsoft apps like SharePoint for data ingestion, Synapse for data transformations and pipelines, Power BI for data visualization, and other possibilities. A Power BI connector has already been released in the community, and partners are leveraging these tools and connectors to enhance their integrations with Microsoft apps and services. +Most of our customers rely on ubiquitous tools and applications from Microsoft. The Azure Data Manager for Energy platform is piloting how it can seamlessly work with deeply used Microsoft apps like SharePoint for data ingestion, Synapse for data transformations and pipelines, Power BI for data visualization, and other possibilities. A Power BI connector has already been released in the community, and partners are leveraging these tools and connectors to enhance their integrations with Microsoft apps and services. OSDU™ is a trademark of The Open Group. ## Next steps-Follow the quickstart guide to quickly deploy Azure Data Manager for Energy Preview in your Azure subscription +Follow the quickstart guide to quickly deploy Azure Data Manager for Energy in your Azure subscription > [!div class="nextstepaction"]-> [Quickstart: Create Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +> [Quickstart: Create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) |
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Title: Release notes for Microsoft Azure Data Manager for Energy Preview -description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. + Title: Release notes for Microsoft Azure Data Manager for Energy +description: This article provides release notes of Azure Data Manager for Energy releases, improvements, bug fixes, and known issues. Last updated 09/20/2022 -# Release Notes for Azure Data Manager for Energy Preview +# Release Notes for Azure Data Manager for Energy --Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: +Azure Data Manager for Energy is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about: - The latest releases - Known issues Azure Private link enables access to Azure Data Manager for Energy instance over ### Enabled Monitoring of OSDU Service Logs -Now you can configure diagnostic settings of your Azure Data Manager for Energy Preview to export OSDU Service Logs to Azure Monitor. You can access, query, & analyze the logs in a Log Analytics Workspace. You can archive them in a storage account for later use. Learn more about [how to integrate OSDU service logs with Azure Monitor](how-to-integrate-osdu-service-logs-with-azure-monitor.md) +Now you can configure diagnostic settings of your Azure Data Manager for Energy to export OSDU Service Logs to Azure Monitor. You can access, query, & analyze the logs in a Log Analytics Workspace. You can archive them in a storage account for later use. Learn more about [how to integrate OSDU service logs with Azure Monitor](how-to-integrate-osdu-service-logs-with-azure-monitor.md) ### Monitoring and investigating actions with Audit logs Knowing who is taking what action on which item is critical in helping organizat ### Compliant with M14 OSDU™ release -Azure Data Manager for Energy Preview is now compliant with the M14 OSDU™ milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU™ M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes). +Azure Data Manager for Energy is now compliant with the M14 OSDU™ milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU™ M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes). ### Product Billing enabled -Billing for Azure Data Manager for Energy Preview is enabled. During Preview, the price for each instance is based on a fixed per-hour consumption. [Pricing information for Azure Data Manager for Energy Preview.](https://azure.microsoft.com/pricing/details/energy-data-services/#pricing) +Billing for Azure Data Manager for Energy is enabled. During, the price for each instance is based on a fixed per-hour consumption. 
[Pricing information for Azure Data Manager for Energy.](https://azure.microsoft.com/pricing/details/energy-data-services/#pricing) ### Available on Azure Marketplace -You can go directly to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.AzureDataManagerforEnergy?tab=Overview) to create an Azure Data Manager for Energy Preview resource in your subscription. You don't need to raise a support ticket with Microsoft to provision an instance anymore. +You can go directly to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.AzureDataManagerforEnergy?tab=Overview) to create an Azure Data Manager for Energy resource in your subscription. You don't need to raise a support ticket with Microsoft to provision an instance anymore. ### Support for Petrel Data Services-Azure Data Manager for Energy Preview supports [Petrel Data Services](overview-ddms.md#) that allows you to use [Petrel](https://www.software.slb.com/products/petrel) from SLB™ with Azure Data Manager from Energy as its data store. You can view your Petrel projects, liberate data from Petrel, and collaborate in real time with data permanently stored in Azure Data Manager for Energy. +Azure Data Manager for Energy supports [Petrel Data Services](overview-ddms.md#) that allows you to use [Petrel](https://www.software.slb.com/products/petrel) from SLB™ with Azure Data Manager from Energy as its data store. You can view your Petrel projects, liberate data from Petrel, and collaborate in real time with data permanently stored in Azure Data Manager for Energy. ### Enable Resource sharing (CORS) -CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin. You can set CORS rules for each Azure Data Manager for Energy Preview instance. When you set CORS rules for the instance they get applied automatically across all the services and storage accounts linked with Azure Data Manager for Energy Preview. [How to enable CORS.]( ../energy-data-services/how-to-enable-CORS.md) +CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin. You can set CORS rules for each Azure Data Manager for Energy instance. When you set CORS rules for the instance they get applied automatically across all the services and storage accounts linked with Azure Data Manager for Energy. [How to enable CORS.]( ../energy-data-services/how-to-enable-CORS.md) ## January 2023 ### Managed Identity support -You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy Preview. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy Preview. Now, you can use managed identity to connect to Azure Data Manager for Energy Preview using system or user assigned managed identity from other Azure services. [Learn more.](../energy-data-services/how-to-use-managed-identity.md) +You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy. 
Now, you can use managed identity to connect to Azure Data Manager for Energy using system or user assigned managed identity from other Azure services. [Learn more.](../energy-data-services/how-to-use-managed-identity.md) ### Availability Zone support -Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required by the customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) +Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy supports zone-redundant instance by default and there's no setup required by the customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) <hr width=100%> Availability Zones are physically separate locations within an Azure region made ### Support for Lockbox -Most operations, support, and troubleshooting performed by Microsoft personnel don't require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Azure Data Manager for Energy Preview provides you with an interface to review, approve or reject data access requests. Azure Data Manager for Energy Preview now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). +Most operations, support, and troubleshooting performed by Microsoft personnel don't require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Azure Data Manager for Energy provides you with an interface to review, approve or reject data access requests. Azure Data Manager for Energy now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). <hr width=100%> Most operations, support, and troubleshooting performed by Microsoft personnel d ### Support for Private Links -Azure Private Link on Azure Data Manager for Energy Preview provides private access to the service. With Azure Private Link, traffic between your private network and Azure Data Manager for Energy Preview travels over the Microsoft backbone network, therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint. You can limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy Preview](how-to-set-up-private-links.md). +Azure Private Link on Azure Data Manager for Energy provides private access to the service. With Azure Private Link, traffic between your private network and Azure Data Manager for Energy travels over the Microsoft backbone network, therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy instance from your virtual network via a private endpoint. 
You can limit access to your Azure Data Manager for Energy instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy](how-to-set-up-private-links.md). ### Encryption at rest using Customer Managed keys -Azure Data Manager for Energy Preview supports customer managed encryption keys (CMK). All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md). +Azure Data Manager for Energy supports customer managed encryption keys (CMK). All data in Azure Data Manager for Energy is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed keys, you can use your own encryption key to protect the data in Azure Data Manager for Energy. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Azure Data Manager for Energy](how-to-manage-data-security-and-encryption.md). <hr width=100%> Azure Data Manager for Energy Preview supports customer managed encryption keys ## September 2022 -### Key Announcement: Preview Release +### Key Announcement: Release -Azure Data Manager for Energy is now available in preview. Information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Energy Preview will be updated monthly. +Azure Data Manager for Energy is now available. Information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Energy will be updated monthly. -Azure Data Manager for Energy Preview is developed in alignment with the emerging requirements of the OSDU™ technical standard, version 1.0. and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). +Azure Data Manager for Energy is developed in alignment with the emerging requirements of the OSDU™ technical standard, version 1.0, and is currently aligned with Mercury Release (R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). ### Partition & User Management -- New data partitions can be [created after provisioning an Azure Data Manager for Energy Preview instance](how-to-add-more-data-partitions.md). Earlier, data partitions could only be created when provisioning a new instance.+- New data partitions can be [created after provisioning an Azure Data Manager for Energy instance](how-to-add-more-data-partitions.md). Earlier, data partitions could only be created when provisioning a new instance. - The domain name for entitlement groups for [user management](how-to-manage-users.md) has been changed to "dataservices.energy".
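To illustrate what the new "dataservices.energy" entitlement group domain looks like in practice, the following sketch adds a member to a group through the OSDU Entitlements v2 API. The instance URL, data partition ID, group name, user email, and bearer token are all placeholders; check which groups actually exist in your instance before running anything like this.

```bash
# Minimal sketch: add a user to an entitlement group in the dataservices.energy domain.
# <instance>, <data-partition-id>, the group name, the email, and $TOKEN are placeholders.
curl --location --request POST \
  "https://<instance>.energy.azure.com/api/entitlements/v2/groups/users.datalake.editors@<data-partition-id>.dataservices.energy/members" \
  --header "Authorization: Bearer $TOKEN" \
  --header "data-partition-id: <data-partition-id>" \
  --header "Content-Type: application/json" \
  --data-raw '{ "email": "user@contoso.com", "role": "MEMBER" }'
```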
### Data Ingestion -- Azure Data Manager for Energy Preview supports user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52)) +- Azure Data Manager for Energy supports user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52)) - User identity is preserved and passed on to all ingestion workflow related services using the newly introduced _x-on-behalf-of_ header. You need to have appropriate service level entitlements on all dependent services involved in the ingestion workflow to modify data. - Workflow service payload is restricted to a maximum of 2 MB. If it exceeds, the service throws an HTTP 413 error. This restriction is placed to prevent workflow requests from overwhelming the server.-- Azure Data Manager for Energy Preview uses Azure Data Factory (ADF) to run large scale ingestion workloads.+- Azure Data Manager for Energy uses Azure Data Factory (ADF) to run large scale ingestion workloads. ### Search -Azure Data Manager for Energy Preview is more secure as Elasticsearch images are now pulled from Microsoft's internal Azure Container Registry instead of public repositories. In addition, Elastic search, registration, and notification services are now encrypted in transit further enhancing the security of the product. +Azure Data Manager for Energy is more secure as Elasticsearch images are now pulled from Microsoft's internal Azure Container Registry instead of public repositories. In addition, Elastic search, registration, and notification services are now encrypted in transit further enhancing the security of the product. ### Monitoring -Azure Data Manager for Energy Preview supports diagnostic settings for [Airflow logs](how-to-integrate-airflow-logs-with-azure-monitor.md) and [Elasticsearch logs](how-to-integrate-elastic-logs-with-azure-monitor.md). You can configure Azure Monitor to view these logs in the storage location of your choice. +Azure Data Manager for Energy supports diagnostic settings for [Airflow logs](how-to-integrate-airflow-logs-with-azure-monitor.md) and [Elasticsearch logs](how-to-integrate-elastic-logs-with-azure-monitor.md). You can configure Azure Monitor to view these logs in the storage location of your choice. ### Region Availability -Currently, Azure Data Manager for Energy Preview is available in the following regions - South Central US, East US, West Europe, and North Europe. +Currently, Azure Data Manager for Energy is available in the following regions - South Central US, East US, West Europe, and North Europe. |
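As a hedged illustration of the diagnostic settings mentioned above, the Azure CLI sketch below routes instance logs to a Log Analytics workspace. The resource IDs are placeholders and the log category name is an assumption; check which categories your instance exposes before using it.

```bash
# Minimal sketch: send Azure Data Manager for Energy logs to a Log Analytics workspace.
# The resource IDs are placeholders and "AirflowTaskLogs" is an assumed category name.
az monitor diagnostic-settings create \
  --name adme-airflow-logs \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instance>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "AirflowTaskLogs", "enabled": true}]'
```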
energy-data-services | Resources Partner Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/resources-partner-solutions.md | Title: Microsoft Azure Data Manager for Energy Preview partners -description: Lists of third-party Azure Data Manager for Energy Preview partners solutions. + Title: Microsoft Azure Data Manager for Energy partners +description: Lists of third-party Azure Data Manager for Energy partners solutions. Last updated 09/24/2022-# Azure Data Manager for Energy Preview partners +# Azure Data Manager for Energy partners -Partner community is the growth engine for Microsoft. To help our customers quickly realize the benefits of Azure Data Manager for Energy Preview, we've worked closely with many partners who have tested their software applications and tools on our data platform. +Partner community is the growth engine for Microsoft. To help our customers quickly realize the benefits of Azure Data Manager for Energy, we've worked closely with many partners who have tested their software applications and tools on our data platform. ## Partner solutions-This article highlights Microsoft partners with software solutions officially supporting Azure Data Manager for Energy Preview. +This article highlights Microsoft partners with software solutions officially supporting Azure Data Manager for Energy. | Partner | Description | Website/Product link | | - | -- | -- |-| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy Preview is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI™, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)| +| Bluware | Bluware enables energy companies to explore the full value of seismic data for exploration, carbon capture, wind farms, and geothermal workflows. Bluware technology on Azure Data Manager for Energy is increasing workflow productivity utilizing the power of Azure. Bluware's flagship seismic deep learning solution, InteractivAI™, drastically improves the effectiveness of interpretation workflows. The interactive experience reduces seismic interpretation time by 10 times from weeks to hours and provides full control over interpretation results. | [Bluware technologies on Azure](https://go.bluware.com/bluware-on-azure-markeplace) [Bluware Products and Evaluation Packages](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/bluwarecorp1581537274084.bluwareazurelisting)| | Katalyst | Katalyst Data Management® provides the only integrated, end-to-end subsurface data management solution for the oil and gas industry. Over 160 employees operate in North America, Europe and Asia-Pacific, dedicated to enabling digital transformation and optimizing the value of geotechnical information for exploration, production, and M&A activity. 
|[Katalyst Data Management solution](https://www.katalystdm.com/seismic-news/katalyst-announces-sub-surface-data-management-solution-powered-by-microsoft-energy-data-services/) |-| Interica | Interica OneView™ harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy Preview adoption with Interica OneView™](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView™](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView™ connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)| +| Interica | Interica OneView™ harnesses the power of application connectors to extract rich metadata from live projects discovered across the organization. IOV scans automatically discover content and extract detailed metadata at the sub-element level. Quickly and easily discover data across multiple file systems and data silos, and clearly determine which projects contain selected data objects to inform business decisions. Live data discovery enables businesses to see a complete holistic view of subsurface project landscapes for improved time to decisions, more efficient data search, and effective storage management. | [Accelerate Azure Data Manager for Energy adoption with Interica OneView™](https://www.petrosys.com.au/interica-oneview-connecting-to-microsoft-data-services/) [Interica OneView™](https://www.petrosys.com.au/assets/Interica_OneView_Accelerate_MEDS_Azure_adoption.pdf) [Interica OneView™ connecting to Microsoft Data Services](https://youtu.be/uPEOo3H01w4)| ## Next steps-To learn more about Azure Data Manager for Energy Preview, visit +To learn more about Azure Data Manager for Energy, visit > [!div class="nextstepaction"]-> [What is Azure Data Manager for Energy Preview?](overview-microsoft-energy-data-services.md) +> [What is Azure Data Manager for Energy?](overview-microsoft-energy-data-services.md) |
energy-data-services | Troubleshoot Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/troubleshoot-manifest-ingestion.md | Title: Troubleshoot manifest ingestion in Microsoft Azure Data Manager for Energy Preview + Title: Troubleshoot manifest ingestion in Microsoft Azure Data Manager for Energy description: Find out how to troubleshoot manifest ingestion by using Airflow task logs. Last updated 02/06/2023 # Troubleshoot manifest ingestion problems by using Airflow task logs -This article helps you troubleshoot workflow problems with manifest ingestion in Azure Data Manager for Energy Preview by using Airflow task logs. +This article helps you troubleshoot workflow problems with manifest ingestion in Azure Data Manager for Energy by using Airflow task logs. ## Manifest ingestion DAG workflow types |
energy-data-services | Tutorial Csv Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-csv-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a CSV parser ingestion + Title: Microsoft Azure Data Manager for Energy - Steps to perform a CSV parser ingestion description: This tutorial shows you how to perform CSV parser ingestion -#Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Azure Data Manager for Energy Preview instance. +#Customer intent: As a customer, I want to learn how to use CSV parser ingestion so that I can load CSV data into the Azure Data Manager for Energy instance. # Tutorial: Sample steps to perform a CSV parser ingestion -CSV Parser ingestion provides the capability to ingest CSV files into the Azure Data Manager for Energy Preview instance. +CSV Parser ingestion provides the capability to ingest CSV files into the Azure Data Manager for Energy instance. In this tutorial, you'll learn how to: > [!div class="checklist"]-> * Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy Preview instance using Postman +> * Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy instance using Postman > * Search for storage metadata records created during the CSV Ingestion using Postman - ## Prerequisites -### Get Azure Data Manager for Energy Preview instance details +### Get Azure Data Manager for Energy instance details -* Azure Data Manager for Energy Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +* Azure Data Manager for Energy instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) * For this tutorial, you will need the following parameters: | Parameter | Value to use | Example | Where to find these values? | In this tutorial, you'll learn how to: | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. | | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above | | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. 
|- | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy Preview instance| - | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy Preview instance| + | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy instance| + | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy instance| * Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial In this tutorial, you'll learn how to: > [!NOTE] > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman) -* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Azure Data Manager for Energy Preview instance details](#get-azure-data-manager-for-energy-preview-instance-details) +* Update the **CURRENT_VALUE** of the Postman environment with the information obtained in [Azure Data Manager for Energy instance details](#get-azure-data-manager-for-energy-instance-details) * The Postman collection for CSV parser ingestion contains a total of 10 requests, which have to be executed in a sequential manner. * Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection. :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the postman environment." lightbox="media/tutorial-csv-ingestion/tutorial-postman-choose-environment.png"::: In this tutorial, you'll learn how to: :::image type="content" source="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png" alt-text="Screenshot of a failure postman call." lightbox="media/tutorial-csv-ingestion/tutorial-postman-test-failure.png"::: -## Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy Preview instance using Postman +## Ingest a sample wellbore data CSV file into the Azure Data Manager for Energy instance using Postman Using the given Postman collection, you could execute the following steps to ingest the wellbore data: 1. **Get a user token** - Generate the User token, which will be used to authenticate further API calls. 2. **Create a schema** - Generate a schema that adheres to the columns present in the CSV file |
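For readers who prefer the command line over Postman, here is a minimal sketch of the "Get a user token" step: it exchanges the refresh token collected above for a user access token by using the standard Azure AD v2.0 refresh_token grant. The scope format (client ID plus `/.default`) is an assumption based on the parameter table.

```bash
# Minimal sketch: exchange the refresh token for a user access token.
# TENANT_ID, CLIENT_ID, and REFRESH_TOKEN come from the parameter table above;
# the scope value is an assumption.
curl --location --request POST "https://login.microsoftonline.com/$TENANT_ID/oauth2/v2.0/token" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "scope=$CLIENT_ID/.default openid profile offline_access" \
  --data-urlencode "refresh_token=$REFRESH_TOKEN"
```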
energy-data-services | Tutorial Manifest Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-manifest-ingestion.md | Title: Microsoft Azure Data Manager for Energy Preview - Steps to perform a manifest-based file ingestion + Title: Microsoft Azure Data Manager for Energy - Steps to perform a manifest-based file ingestion description: This tutorial shows you how to perform Manifest ingestion -#Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Azure Data Manager for Energy Preview instance. +#Customer intent: As a customer, I want to learn how to use manifest ingestion so that I can load manifest information into the Azure Data Manager for Energy instance. # Tutorial: Sample steps to perform a manifest-based file ingestion -Manifest ingestion provides the capability to ingest manifests into Azure Data Manager for Energy Preview instance +Manifest ingestion provides the capability to ingest manifests into Azure Data Manager for Energy instance In this tutorial, you will learn how to: > [!div class="checklist"]-> * Ingest sample manifests into the Azure Data Manager for Energy Preview instance using Postman +> * Ingest sample manifests into the Azure Data Manager for Energy instance using Postman > * Search for storage metadata records created during the manifest ingestion using Postman - ## Prerequisites Before beginning this tutorial, the following prerequisites must be completed:-### Get Azure Data Manager for Energy Preview instance details +### Get Azure Data Manager for Energy instance details -* Azure Data Manager for Energy Preview instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) +* Azure Data Manager for Energy instance is created already. If not, follow the steps outlined in [Quickstart: Create an Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) * For this tutorial, you will need the following parameters: | Parameter | Value to use | Example | Where to find these values? | Before beginning this tutorial, the following prerequisites must be completed: | TENANT_ID | Directory (tenant) ID | 72f988bf-86f1-41af-91ab-xxxxxxxxxxxx | Hover over your account name in the Azure portal to get the directory or tenant ID. Alternately, search and select *Azure Active Directory > Properties > Tenant ID* in the Azure portal. | | SCOPE | Application (client) ID | 3dbbbcc2-f28f-44b6-a5ab-xxxxxxxxxxxx | Same as App ID or Client_ID mentioned above | | refresh_token | Refresh Token value | 0.ATcA01-XWHdJ0ES-qDevC6r........... | Follow the [How to Generate a Refresh Token](how-to-generate-refresh-token.md) to create a refresh token and save it. This refresh token is required later to generate a user token. 
|- | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy Preview instance| - | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy Preview instance| + | DNS | URI | `<instance>`.energy.Azure.com | Overview page of Azure Data Manager for Energy instance| + | data-partition-id | Data Partition(s) | `<instance>`-`<data-partition-name>` | Overview page of Azure Data Manager for Energy instance| * Follow the [Manage users](how-to-manage-users.md) guide to add appropriate entitlements for the user running this tutorial Before beginning this tutorial, the following prerequisites must be completed: * [Manifest Ingestion postman environment](https://raw.githubusercontent.com/microsoft/meds-samples/main/postman/IngestionWorkflowEnvironment.postman_environment.json) > [!NOTE] > To import the Postman collection and environment variables, follow the steps outlined in [Importing data into Postman](https://learning.postman.com/docs/getting-started/importing-and-exporting-data/#importing-data-into-postman)-* Update the **CURRENT_VALUE** of the postman environment with the information obtained in [Get Azure Data Manager for Energy Preview instance details](#get-azure-data-manager-for-energy-preview-instance-details) +* Update the **CURRENT_VALUE** of the postman environment with the information obtained in [Get Azure Data Manager for Energy instance details](#get-azure-data-manager-for-energy-instance-details) * The Postman collection for manifest ingestion contains multiple requests, which will have to be executed in a sequential manner. * Make sure to choose the **Ingestion Workflow Environment** before triggering the Postman collection. :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png" alt-text="Screenshot of the Postman environment." lightbox="media/tutorial-manifest-ingestion/tutorial-postman-choose-environment.png"::: Before beginning this tutorial, the following prerequisites must be completed: :::image type="content" source="media/tutorial-manifest-ingestion/tutorial-postman-test-failure.png" alt-text="Screenshot of a failure Postman call." lightbox="media/tutorial-manifest-ingestion/tutorial-postman-test-failure.png"::: -## Ingest sample manifests into the Azure Data Manager for Energy Preview instance using Postman +## Ingest sample manifests into the Azure Data Manager for Energy instance using Postman 1. **Get a user token** - Generate the User token, which will be used to authenticate further API calls. 2. **Create a legal tag** - Create a legal tag that will be added to the Manifest data for data compliance purpose |
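The "Create a legal tag" step can also be sketched outside Postman. The request below follows the OSDU Legal API shape; the instance URL, partition ID, token, tag name, and all property values are illustrative placeholders rather than values taken from this tutorial's collection.

```bash
# Minimal sketch: create a legal tag for the manifest data.
# All values shown (names, dates, classifications) are illustrative placeholders.
curl --location --request POST "https://<instance>.energy.azure.com/api/legal/v1/legaltags" \
  --header "Authorization: Bearer $TOKEN" \
  --header "data-partition-id: <data-partition-id>" \
  --header "Content-Type: application/json" \
  --data-raw '{
    "name": "<data-partition-id>-demo-legal-tag",
    "description": "Legal tag for tutorial data",
    "properties": {
      "countryOfOrigin": ["US"],
      "contractId": "A1234",
      "expirationDate": "2099-01-01",
      "originator": "Contoso",
      "dataType": "Public Domain Data",
      "securityClassification": "Public",
      "personalData": "No Personal Data",
      "exportClassification": "EAR99"
    }
  }'
```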
energy-data-services | Tutorial Petrel Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-petrel-ddms.md | Title: Tutorial - Work with Petrel data records by using Petrel DDMS APIs in Azure Data Manager for Energy Preview -description: Learn how to work with Petrel data records in your Azure Data Manager for Energy Preview instance by using Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman. + Title: Tutorial - Work with Petrel data records by using Petrel DDMS APIs in Azure Data Manager for Energy +description: Learn how to work with Petrel data records in your Azure Data Manager for Energy instance by using Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman. -Use Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman to work with Petrel data in your instance of Azure Data Manager for Energy Preview. +Use Petrel Domain Data Management Services (Petrel DDMS) APIs in Postman to work with Petrel data in your instance of Azure Data Manager for Energy. In this tutorial, you'll learn how to: > [!div class="checklist"] In this tutorial, you'll learn how to: > - Generate an authorization token. > - Use Petrel DDMS APIs to work with Petrel data records/projects. - For more information about DDMS, see [DDMS concepts](concepts-ddms.md). ## Prerequisites - An Azure subscription-- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.+- An instance of [Azure Data Manager for Energy](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription. -## Get your Azure Data Manager for Energy Preview instance details +## Get your Azure Data Manager for Energy instance details -The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): +The first step is to get the following information from your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): | Parameter | Value | Example | | | |-- | Next, set up Postman: ## Generate a token to use in APIs -The Postman collection for Petrel DDMS contains requests you can use to interact with your Petrel Projects. It also contains a request to query current Petrel projects and records in your Azure Data Manager for Energy Preview instance. +The Postman collection for Petrel DDMS contains requests you can use to interact with your Petrel Projects. It also contains a request to query current Petrel projects and records in your Azure Data Manager for Energy instance. 1. In Postman, in the left menu, select **Collections**, and then select **Petrel DDMS**. Under **Setup**, select **Get Token**. The Postman collection for Petrel DDMS contains requests you can use to interact This request will generate an access token and assign it as the authorization method for future requests. -You can also generate a token by using the cURL command in Postman or a terminal to generate a bearer token. Use the values from your Azure Data Manager for Energy Preview instance. 
+You can also generate a token by using the cURL command in Postman or a terminal to generate a bearer token. Use the values from your Azure Data Manager for Energy instance. ```bash curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ Method: POST ### Get Project -Given a Project ID, returns the corresponding Petrel Project record in your Azure Data Manager for Energy Preview instance. +Given a Project ID, returns the corresponding Petrel Project record in your Azure Data Manager for Energy instance. API: **Project** > **Get Project**. Method: GET ### Delete Project -Given a Project ID, deletes the project and the associated Petrel Project record data in your Azure Data Manager for Energy Preview instance. +Given a Project ID, deletes the project and the associated Petrel Project record data in your Azure Data Manager for Energy instance. API: **Project** > **Delete Project** Method: DELETE ### Get Project Version -Given a `Project ID` and a `Version ID`, gets the Petrel Version record associated with that project/version ID in your Azure Data Manager for Energy Preview instance. +Given a `Project ID` and a `Version ID`, gets the Petrel Version record associated with that project/version ID in your Azure Data Manager for Energy instance. API: **Project** > **Project Version** Method: GET ### Get a Project Download URL -Given a Project ID, returns a SAS URL to download the data of the corresponding project from your Azure Data Manager for Energy Preview instance. +Given a Project ID, returns a SAS URL to download the data of the corresponding project from your Azure Data Manager for Energy instance. API: **Project** > **Download URL** Method: GET ### Get a Project Upload URL -Given a Project ID, returns two SAS URLs. One to upload data to and one to download data from the corresponding project in your Azure Data Manager for Energy Preview instance. +Given a Project ID, returns two SAS URLs. One to upload data to and one to download data from the corresponding project in your Azure Data Manager for Energy instance. API: **Project** > **Upload URL** Making a PUT call to this URL uploads the contents of the `body` to the blob sto ### Update Project -Given a Project ID, SAS upload URL, and a Petrel Project record, updates the Petrel Project record in your Azure Data Manager for Energy Preview with the new values provided. Can also upload data to a given project but doesn't have to. +Given a Project ID, SAS upload URL, and a Petrel Project record, updates the Petrel Project record in your Azure Data Manager for Energy with the new values provided. Can also upload data to a given project but doesn't have to. API: **Project** > **Update Project** |
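To make the upload step concrete, here is a minimal sketch of putting a file to the SAS upload URL returned by the Upload URL request. It uses the standard Azure Blob Storage Put Blob call; the `$UPLOAD_SAS_URL` variable and the file name are placeholders.

```bash
# Minimal sketch: upload a local file to the SAS URL returned by the Upload URL request.
# $UPLOAD_SAS_URL and ./project.zip are placeholders.
curl --request PUT "$UPLOAD_SAS_URL" \
  --header "x-ms-blob-type: BlockBlob" \
  --upload-file ./project.zip
```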
energy-data-services | Tutorial Seismic Ddms Sdutil | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms-sdutil.md | Title: Microsoft Azure Data Manager for Energy Preview - Seismic store sdutil tutorial + Title: Microsoft Azure Data Manager for Energy - Seismic store sdutil tutorial description: Information on setting up and using sdutil, a command-line interface (CLI) tool that allows users to easily interact with seismic store. Sdutil is a command line Python utility tool designed to easily interact with se **Sdutil** is an intuitive command line utility tool to interact with seismic store and perform some basic operations like upload or download datasets to or from seismic store, manage users, list folders content and more. - ## Prerequisites Install the following prerequisites based on your OS: Run the changelog script (`./changelog-generator.sh`) to automatically generate ./scripts/changelog-generator.sh ``` -## Usage for Azure Data Manager for Energy Preview +## Usage for Azure Data Manager for Energy -Azure Data Manager for Energy Preview instance is using OSDU™ M12 Version of sdutil. Follow the below steps if you would like to use SDUTIL to leverage the SDMS API of your Azure Data Manager for Energy instance. +Azure Data Manager for Energy instance is using OSDU™ M12 Version of sdutil. Follow the below steps if you would like to use SDUTIL to leverage the SDMS API of your Azure Data Manager for Energy instance. 1. Ensure you have followed the [installation](#prerequisites) and [configuration](#configuration) steps from above. This includes downloading the SDUTIL source code, configuring your Python virtual environment, editing the `config.yaml` file and setting your three environment variables. |
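As a hedged orientation for what an sdutil session can look like once the configuration above is in place, the sketch below strings together common sdutil subcommands. The sd:// tenant and subproject path and the file names are placeholders, and the exact subcommand syntax can differ between sdutil versions, so check the sdutil readme for the version you downloaded.

```bash
# Minimal sketch of a typical sdutil session; sd:// paths and file names are placeholders
# and the exact subcommand syntax may differ between sdutil versions.
python sdutil config init                                    # point sdutil at your instance
python sdutil auth login                                     # sign in and cache a token
python sdutil ls sd://<tenant>/<subproject>                  # list datasets in a subproject
python sdutil cp ./cube.segy sd://<tenant>/<subproject>/cube.segy   # upload a dataset
```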
energy-data-services | Tutorial Seismic Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-seismic-ddms.md | Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy Preview -description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy Preview + Title: Tutorial - Sample steps to interact with Seismic DDMS in Microsoft Azure Data Manager for Energy +description: This tutorial shows you how to interact with Seismic DDMS Azure Data Manager for Energy -Seismic DDMS provides the capability to operate on seismic data in the Azure Data Manager for Energy Preview instance. +Seismic DDMS provides the capability to operate on seismic data in the Azure Data Manager for Energy instance. In this tutorial, you will learn how to: In this tutorial, you will learn how to: > * Register data partition to seismic > * Utilize seismic DDMS Api's to store and retrieve seismic data ## Prerequisites -### Azure Data Manager for Energy Preview instance details +### Azure Data Manager for Energy instance details -* Once the [Azure Data Manager for Energy Preview instance](./quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details: +* Once the [Azure Data Manager for Energy instance](./quickstart-create-microsoft-energy-data-services-instance.md) is created, note down the following details: | Parameter | Value to use | Example | | | |-- | In this tutorial, you will learn how to: * [Smoke test Postman collection](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/Azure%20DDMS%20OSDU%20Smoke%20Tests.postman_collection.json) * [Smoke Test Environment](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/raw/master/source/ddms-smoke-tests/%5BShip%5D%20osdu-glab.msft-osdu-test.org.postman_environment.json) -3. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Azure Data Manager for Energy Preview instance details](#azure-data-manager-for-energy-preview-instance-details) +3. Update the **CURRENT_VALUE** of the Postman Environment with the information obtained in [Azure Data Manager for Energy instance details](#azure-data-manager-for-energy-instance-details) ## Register data partition to seismic |
energy-data-services | Tutorial Well Delivery Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-well-delivery-ddms.md | Title: Tutorial - Work with well data records by using Well Delivery DDMS APIs -description: Learn how to work with well data records in your Azure Data Manager for Energy Preview instance by using Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman. +description: Learn how to work with well data records in your Azure Data Manager for Energy instance by using Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman. -Use Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy Preview. +Use Well Delivery Domain Data Management Services (Well Delivery DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy. In this tutorial, you'll learn how to: > [!div class="checklist"] In this tutorial, you'll learn how to: > - Generate an authorization token. > - Use Well Delivery DDMS APIs to work with well data records. - For more information about DDMS, see [DDMS concepts](concepts-ddms.md). ## Prerequisites - An Azure subscription-- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription+- An instance of [Azure Data Manager for Energy](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription -## Get your Azure Data Manager for Energy Preview instance details +## Get your Azure Data Manager for Energy instance details -The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): +The first step is to get the following information from your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): | Parameter | Value | Example | | | |-- | Next, set up Postman: :::image type="content" source="media/tutorial-well-delivery/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-well-delivery/postman-import-files.png"::: -1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy Preview instance](#get-your-azure-data-manager-for-energy-preview-instance-details): +1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy instance](#get-your-azure-data-manager-for-energy-instance-details): 1. In Postman, in the left menu, select **Environments**, and then select **WellDelivery Environment**. - 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). Scroll to see all relevant variables. + 1. 
In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy instance details](#get-your-azure-data-manager-for-energy-instance-details). Scroll to see all relevant variables. :::image type="content" source="media/tutorial-well-delivery/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Well Delivery DDMS environment."::: ## Send a Postman request -The Postman collection for Well Delivery DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy Preview instance. +The Postman collection for Well Delivery DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy instance. For an example of how to send a Postman request, see the [Wellbore DDMS tutorial](tutorial-wellbore-ddms.md#send-an-example-postman-request). In the next sections, generate a token, and then use it to work with Well Delive To generate a token: -1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy Preview instance. +1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy instance. ```bash curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ To generate a token: ## Use Well Delivery DDMS APIs to work with well data records -Successfully completing the Postman requests that are described in the following Well Delivery DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy Preview instance. +Successfully completing the Postman requests that are described in the following Well Delivery DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy instance. ### Create a well Method: GET ### Delete a wellbore record -You can delete a wellbore record in your Azure Data Manager for Energy Preview instance by using Well Delivery DDMS APIs. For example: +You can delete a wellbore record in your Azure Data Manager for Energy instance by using Well Delivery DDMS APIs. For example: :::image type="content" source="media/tutorial-well-delivery/postman-api-delete-well-bore.png" alt-text="Screenshot that shows how to use an API to delete a wellbore record."::: ### Delete a well record -You can delete a well record in your Azure Data Manager for Energy Preview instance by using Well Delivery DDMS APIs. For example: +You can delete a well record in your Azure Data Manager for Energy instance by using Well Delivery DDMS APIs. For example: :::image type="content" source="media/tutorial-well-delivery/postman-api-delete-well.png" alt-text="Screenshot that shows how to use an API to delete a well record."::: |
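The token request above is shown only in part, so here is a hedged, self-contained sketch of the same kind of call using the standard client_credentials grant. TENANT_ID, CLIENT_ID, and CLIENT_SECRET come from your app registration, and the scope format (client ID plus `/.default`) is an assumption.

```bash
# Minimal sketch: request a bearer token with the client_credentials grant.
# TENANT_ID, CLIENT_ID, and CLIENT_SECRET are placeholders from your app registration;
# the scope format is an assumption.
curl --location --request POST "https://login.microsoftonline.com/$TENANT_ID/oauth2/v2.0/token" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=$CLIENT_ID" \
  --data-urlencode "client_secret=$CLIENT_SECRET" \
  --data-urlencode "scope=$CLIENT_ID/.default"
```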
energy-data-services | Tutorial Wellbore Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/tutorial-wellbore-ddms.md | Title: Tutorial - Work with well data records by using Wellbore DDMS APIs -description: Learn how to work with well data records in your Azure Data Manager for Energy Preview instance by using Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman. +description: Learn how to work with well data records in your Azure Data Manager for Energy instance by using Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman. -Use Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy Preview. +Use Wellbore Domain Data Management Services (Wellbore DDMS) APIs in Postman to work with well data in your instance of Azure Data Manager for Energy. In this tutorial, you'll learn how to: > [!div class="checklist"] In this tutorial, you'll learn how to: > - Generate an authorization token. > - Use Wellbore DDMS APIs to work with well data records. - For more information about DDMS, see [DDMS concepts](concepts-ddms.md). ## Prerequisites - An Azure subscription-- An instance of [Azure Data Manager for Energy Preview](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription.+- An instance of [Azure Data Manager for Energy](quickstart-create-microsoft-energy-data-services-instance.md) created in your Azure subscription. -## Get your Azure Data Manager for Energy Preview instance details +## Get your Azure Data Manager for Energy instance details -The first step is to get the following information from your [Azure Data Manager for Energy Preview instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): +The first step is to get the following information from your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) in the [Azure portal](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=Microsoft_Azure_OpenEnergyPlatformHidden): | Parameter | Value | Example | | | |-- | Next, set up Postman: :::image type="content" source="media/tutorial-wellbore-ddms/postman-import-files.png" alt-text="Screenshot that shows importing collection and environment files in Postman." lightbox="media/tutorial-wellbore-ddms/postman-import-files.png"::: -1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). +1. In the Postman environment, update **CURRENT VALUE** with the information from your [Azure Data Manager for Energy instance details](#get-your-azure-data-manager-for-energy-instance-details). 1. In Postman, in the left menu, select **Environments**, and then select **Wellbore DDMS Environment**. - 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy Preview instance details](#get-your-azure-data-manager-for-energy-preview-instance-details). + 1. In the **CURRENT VALUE** column, enter the information that's described in the table in [Get your Azure Data Manager for Energy instance details](#get-your-azure-data-manager-for-energy-instance-details). 
:::image type="content" source="media/tutorial-wellbore-ddms/postman-environment-current-values.png" alt-text="Screenshot that shows where to enter current values in the Wellbore DDMS environment."::: ## Send an example Postman request -The Postman collection for Wellbore DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy Preview instance. +The Postman collection for Wellbore DDMS contains requests you can use to interact with data about wells, wellbores, well logs, and well trajectory data in your Azure Data Manager for Energy instance. 1. In Postman, in the left menu, select **Collections**, and then select **Wellbore DDMS**. Under **Setup**, select **Get an SPN Token**. The Postman collection for Wellbore DDMS contains requests you can use to intera To generate a token: -1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy Preview instance. +1. Import the following cURL command in Postman to generate a bearer token. Use the values from your Azure Data Manager for Energy instance. ```bash curl --location --request POST 'https://login.microsoftonline.com/{{TENANT_ID}}/oauth2/v2.0/token' \ To generate a token: ## Use Wellbore DDMS APIs to work with well data records -Successfully completing the Postman requests that are described in the following Wellbore DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy Preview instance. +Successfully completing the Postman requests that are described in the following Wellbore DDMS APIs indicates successful ingestion and retrieval of well records in your Azure Data Manager for Energy instance. ### Create a legal tag For more information, see [Manage legal tags](how-to-manage-legal-tags.md). ### Create a well -Create a well record in your Azure Data Manager for Energy Preview instance. +Create a well record in your Azure Data Manager for Energy instance. API: **Well** > **Create Well**. Method: POST ### Get a well record -Get the well record data for your Azure Data Manager for Energy Preview instance. +Get the well record data for your Azure Data Manager for Energy instance. API: **Well** > **Well** Method: GET ### Get well versions -Get the versions of each ingested well record in your Azure Data Manager for Energy Preview instance. +Get the versions of each ingested well record in your Azure Data Manager for Energy instance. API: **Well** > **Well versions** Method: GET ### Get a specific well version -Get the details of a specific version for a specific well record in your Azure Data Manager for Energy Preview instance. +Get the details of a specific version for a specific well record in your Azure Data Manager for Energy instance. API: **Well** > **Well Specific version** Method: GET ### Delete a well record -Delete a specific well record from your Azure Data Manager for Energy Preview instance. +Delete a specific well record from your Azure Data Manager for Energy instance. API: **Clean up** > **Well Record** |
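For readers working outside Postman, the following sketch shows the header pattern the Wellbore DDMS requests rely on: a bearer token plus the data-partition-id header. The URL path and record ID are assumptions used only for illustration; take the real paths from the Postman collection.

```bash
# Minimal sketch: fetch a well record. The /api/os-wellbore-ddms/... path and the record ID
# are illustrative assumptions; the header pattern is the point of this example.
curl --location --request GET \
  "https://<instance>.energy.azure.com/api/os-wellbore-ddms/ddms/v3/wells/<data-partition-id>:master-data--Well:1234" \
  --header "Authorization: Bearer $TOKEN" \
  --header "data-partition-id: <data-partition-id>"
```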
event-grid | Mqtt Publish And Subscribe Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md | If you don't already have a certificate, you can create a sample certificate usi To create root and intermediate certificates, run the following command: ```powershell-.\step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner +step ca init --deployment-type standalone --name MqttAppSamplesCA --dns localhost --address 127.0.0.1:443 --provisioner MqttAppSamplesCAProvisioner ``` Use the CA files generated to create a certificate for the client. ```powershell-.step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h +step certificate create client1-authnID client1-authnID.pem client1-authnID.key --ca .step/certs/intermediate_ca.crt --ca-key .step/secrets/intermediate_ca_key --no-password --insecure --not-after 2400h ``` To view the thumbprint, run the Step command. ```powershell-.\step certificate fingerprint client1-authnID.pem +step certificate fingerprint client1-authnID.pem ``` > [!IMPORTANT] |
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | If you're remote and don't have fiber connectivity or want to explore other conn | **Dallas** | Equinix<br/>Megaport | Axtel<br/>C3ntro Telecom<br/>Cox Business<br/>Crown Castle<br/>Data Foundry<br/>Spectrum Enterprise<br/>Transtelco | | **Frankfurt** | Interxion | BICS<br/>Cinia<br/>Equinix<br/>Nianet<br/>QSC AG<br/>Telekom Deutschland GmbH | | **Hamburg** | Equinix | Cinia |-| **Hong Kong SAR** | Equinix | Chief<br/>Macroview Telecom | +| **Hong Kong** | Equinix | Chief<br/>Macroview Telecom | | **Johannesburg** | Teraco | MTN | | **London** | BICS<br/>Equinix<br/>euNetworks | Bezeq International Ltd.<br/>CoreAzure<br/>Epsilon Telecommunications Limited<br/>Exponential E<br/>HSO<br/>NexGen Networks<br/>Proximus<br/>Tamares Telecom<br/>Zain | | **Los Angeles** | Equinix | Crown Castle<br/>Spectrum Enterprise<br/>Transtelco | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne<br/>Sydney | | **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2<br/>Mumbai2 | | **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok |-| **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong SAR<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC | +| **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC | | **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | Supported | Supported | Campinas<br/>Sao Paulo<br/>Sao Paulo2 | | **AT&T Dynamic Exchange** | Supported | Supported | Chicago<br/>Dallas<br/>Los Angeles<br/>Miami<br/>Silicon Valley | | **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>London<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | The following table shows locations by service provider. If you want to view ava | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | Supported | Supported | Montreal<br/>Toronto<br/>Quebec City<br/>Vancouver | | **[Bezeq International](https://selfservice.bezeqint.net/english)** | Supported | Supported | London | | **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2<br/>London2 |-| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Newport(Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | +| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Newport(Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | | **BSNL** | Supported | Supported | Chennai<br/>Mumbai | | **[C3ntro](https://www.c3ntro.com/)** | Supported | Supported | Miami | | **CDC** | Supported | Supported | Canberra<br/>Canberra2 | The following table shows locations by service provider. 
If you want to view ava | **du datamena** |Supported |Supported | Dubai2 | | **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin | | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Bogota<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong SAR<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | +| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Bogota<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* | | **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | Supported | Supported | Taipei | The following table shows locations by service provider. 
If you want to view ava | **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | Supported | Supported | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 | | **NL-IX** | Supported | Supported | Amsterdam2<br/>Dublin2 | | **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** | Supported | Supported | Amsterdam2<br/>Madrid |-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam<br/>Hong Kong SAR<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | +| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam<br/>Hong Kong<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | | **NTT Communications India Network Services Pvt Ltd** | Supported | Supported | Chennai<br/>Mumbai | | **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta<br/>Osaka<br/>Singapore2<br/>Tokyo<br/>Tokyo2 | | **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo | The following table shows locations by service provider. If you want to view ava | **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka | | **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne<br/>Sydney |-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | +| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC | | **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 | The following table shows locations by service provider. 
If you want to view ava | **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** | Supported | Supported | Osaka | | **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne<br/>Sydney |-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | +| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | | **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 | The following table shows locations by service provider. If you want to view ava | **[Sohonet](https://www.sohonet.com/fastlane/)** | Supported | Supported | Los Angeles<br/>London2 | | **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland<br/>Sydney | | **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva<br/>Zurich |-| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong SAR<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC | +| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC | | **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam<br/>Sao Paulo<br/>Madrid | | **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London<br/>London2<br/>Singapore2 | | **Telenor** |Supported |Supported | Amsterdam<br/>London<br/>Oslo<br/>Stavanger | The following table shows locations by service provider. 
If you want to view ava | **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported | Supported | Frankfurt | | **UOLDIVEO** | Supported | Supported | Sao Paulo | | **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok |-| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong SAR<br/>London<br/>Mumbai<br/>Paris<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | +| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Mumbai<br/>Paris<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | | **[Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)** | Supported | Supported | Washington DC2 | | **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland<br/>Sydney | | **Vodacom** | Supported | Supported | Cape Town<br/>Johannesburg| If you're remote and don't have fiber connectivity, or you want to explore other | **[BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/)** | Equinix | Amsterdam<br/>Frankfurt<br/>London<br/>Singapore<br/>Washington DC | | **[BroadBand Tower, Inc.](https://www.bbtower.co.jp/product-service/network/)** | Equinix | Tokyo | | **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix<br/>Megaport | Dallas |-| **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR | +| **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong | | **[Cinia](https://www.cinia.fi/palvelutiedotteet)** | Equinix<br/>Megaport | Frankfurt<br/>Hamburg | | **CloudXpress** | Equinix | Amsterdam | | **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore | If you're remote and don't have fiber connectivity, or you want to explore other | **[IVedha Inc](https://ivedha.com/cloud-services)**| Equinix | Toronto | | **[Kaalam Telecom Bahrain B.S.C](https://kalaam-telecom.com/)**| Level 3 Communications |Amsterdam | | **LGA Telecom** |Equinix |Singapore|-| **[Macroview Telecom](http://www.macroview.com/en/scripts/catitem.php?catid=solution§ionid=expressroute)** |Equinix |Hong Kong SAR +| **[Macroview Telecom](http://www.macroview.com/en/scripts/catitem.php?catid=solution§ionid=expressroute)** |Equinix |Hong Kong | **[Macquarie Telecom Group](https://macquariegovernment.com/secure-cloud/secure-cloud-exchange/)** | Megaport | Sydney | | **[MainOne](https://www.mainone.net/services/connectivity/cloud-connect/)** |Equinix | Amsterdam | | **[Masergy](https://www.masergy.com/sd-wan/multi-cloud-connectivity)** | Equinix | Washington DC | |
key-vault | About Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/about-certificates.md | A response includes these additional read-only attributes: - `nbf`: `IntDate` contains the value of the "not before" date of the X.509 certificate. > [!Note] -> If a Key Vault certificate expires, its addressable key and secret become inoperable. +> If a Key Vault certificate expires, it can still be retrieved, but the certificate may become inoperable in scenarios such as TLS protection, where certificate expiration is validated. ### Tags |
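The updated note can be checked with the Azure CLI: an expired certificate and its backing secret remain retrievable, even though services that validate expiration may reject the certificate. A minimal sketch, with hypothetical vault and certificate names:

```azurecli
# Hypothetical names; retrieval still succeeds after the certificate's expiry date.
az keyvault certificate show --vault-name contoso-kv --name expired-tls-cert --query "attributes.expires"

# The backing secret (the private key material) also remains addressable.
az keyvault secret show --vault-name contoso-kv --name expired-tls-cert --query "attributes"
```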
key-vault | How To Export Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-export-certificate.md | View [examples and parameter definitions](/cli/azure/keyvault/certificate#az-key Downloading as certificate means getting the public portion. If you want both the private key and public metadata then you can download it as secret. ```azurecli-az keyvault secret download -ΓÇôfile {nameofcert.pfx} +az keyvault secret download --file {nameofcert.pfx} [--encoding {ascii, base64, hex, utf-16be, utf-16le, utf-8}] [--id] [--name] |
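As a usage sketch of the corrected command, the following contrasts downloading only the public portion with downloading the full PFX (public metadata plus private key) through the backing secret. The vault and certificate names are placeholders:

```azurecli
# Public portion only (no private key).
az keyvault certificate download --vault-name contoso-kv --name mycert --file mycert.pem --encoding PEM

# Full PFX via the backing secret; --encoding base64 writes the decoded binary PFX to disk.
az keyvault secret download --vault-name contoso-kv --name mycert --file mycert.pfx --encoding base64
```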
key-vault | Soft Delete Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/soft-delete-overview.md | Key Vault's soft-delete feature allows recovery of the deleted vaults and delete - Once a secret, key, certificate, or key vault is deleted, it will remain recoverable for a configurable period of 7 to 90 calendar days. If no configuration is specified the default recovery period will be set to 90 days. This provides users with sufficient time to notice an accidental secret deletion and respond. - Two operations must be made to permanently delete a secret. First a user must delete the object, which puts it into the soft-deleted state. Second, a user must purge the object in the soft-deleted state. The purge operation requires additional access policy permissions. These additional protections reduce the risk of a user accidentally or maliciously deleting a secret or a key vault. -- To purge a secret in the soft-deleted state, a service principal must be granted an additional "purge" access policy permission. The purge access policy permission is not granted by default to any service principal including key vault and subscription owners and must be deliberately set. By requiring an elevated access policy permission to purge a soft-deleted secret, it reduces the probability of accidentally deleting a secret.+- To purge a secret in the soft-deleted state, a service principal must be granted an additional "purge" access policy permission. The purge access policy permission isn't granted by default to any service principal including key vault and subscription owners and must be deliberately set. By requiring an elevated access policy permission to purge a soft-deleted secret, it reduces the probability of accidentally deleting a secret. ## Supporting interfaces Azure Key Vaults are tracked resources, managed by Azure Resource Manager. Azure When soft-delete is enabled, resources marked as deleted resources are retained for a specified period (90 days by default). The service further provides a mechanism for recovering the deleted object, essentially undoing the deletion. -When creating a new key vault, soft-delete is on by default. Once soft-delete is enabled on a key vault it cannot be disabled. +When creating a new key vault, soft-delete is on by default. Once soft-delete is enabled on a key vault it can't be disabled. -The default retention period is 90 days but, during key vault creation, it is possible to set the retention policy interval to a value from 7 to 90 days through the Azure portal. The purge protection retention policy uses the same interval. Once set, the retention policy interval cannot be changed. +The retention policy interval can only be configured during key vault creation and can't be changed afterwards. You have the option to set it anywhere from 7 to 90 days, with 90 days being the default. The same interval applies to both soft-delete and the purge protection retention policy. -You cannot reuse the name of a key vault that has been soft-deleted until the retention period has passed. +You can't reuse the name of a key vault that has been soft-deleted until the retention period has passed. ### Purge protection Purge protection is an optional Key Vault behavior and is **not enabled by default**. Purge protection can only be enabled once soft-delete is enabled. It can be turned on via [CLI](./key-vault-recovery.md?tabs=azure-cli) or [PowerShell](./key-vault-recovery.md?tabs=azure-powershell). 
Purge protection is recommended when using keys for encryption to prevent data loss. Most Azure services that integrate with Azure Key Vault, such as Storage, require purge protection to prevent data loss. -When purge protection is on, a vault or an object in the deleted state cannot be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed. +When purge protection is on, a vault or an object in the deleted state can't be purged until the retention period has passed. Soft-deleted vaults and objects can still be recovered, ensuring that the retention policy will be followed. -The default retention period is 90 days, but it is possible to set the retention policy interval to a value from 7 to 90 days through the Azure portal. Once the retention policy interval is set and saved it cannot be changed for that vault. +The default retention period is 90 days, but it's possible to set the retention policy interval to a value from 7 to 90 days through the Azure portal. Once the retention policy interval is set and saved it can't be changed for that vault. ### Permitted purge Soft-deleted resources are retained for a set period of time, 90 days. During th - You may list all of the key vaults and key vault objects in the soft-delete state for your subscription as well as access deletion and recovery information about them. - Only users with special permissions can list deleted vaults. We recommend that our users create a custom role with these special permissions for handling deleted vaults.-- A key vault with the same name cannot be created in the same location; correspondingly, a key vault object cannot be created in a given vault if that key vault contains an object with the same name and which is in a deleted state.+- A key vault with the same name can't be created in the same location; correspondingly, a key vault object can't be created in a given vault if that key vault contains an object with the same name and which is in a deleted state. - Only a specifically privileged user may restore a key vault or key vault object by issuing a recover command on the corresponding proxy resource. - The user, member of the custom role, who has the privilege to create a key vault under the resource group can restore the vault. - Only a specifically privileged user may forcibly delete a key vault or key vault object by issuing a delete command on the corresponding proxy resource. |
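Alongside these soft-delete notes, a short Azure CLI sketch (resource names are placeholders) that sets the retention interval at creation time and enables purge protection; recall that neither setting can be reverted once applied:

```azurecli
# Soft delete is on by default; the retention interval can only be chosen at creation time (7-90 days).
az keyvault create --name contoso-kv --resource-group contoso-rg --location eastus \
    --retention-days 7 --enable-purge-protection true

# A soft-deleted secret can be recovered while the retention period is in effect.
az keyvault secret recover --vault-name contoso-kv --name db-password
```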
machine-learning | How To Use Serverless Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md | Last updated 05/09/2023 [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)] -You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a compute resource that you don't need to manage. It's created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up. +You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a fully managed, on-demand compute resource. It is created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up. [!INCLUDE [machine-learning-preview-generic-disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] Serverless compute can be used to run command, sweep, AutoML, pipeline, distribu ## Advantages of serverless compute -* You don't need to create, setup, and manage compute anymore to run training jobs thus reducing steps involved to run a job. -* You don't need to learn about various compute types and related properties. +* Azure Machine Learning manages creating, setting up, scaling, deleting, and patching the compute infrastructure, reducing management overhead. +* You don't need to learn about compute, various compute types, and related properties. * There's no need to repeatedly create clusters for each VM size needed, using same settings, and replicating for each workspace. * You can optimize costs by specifying the exact resources each job needs at runtime in terms of instance type (VM size) and instance count. You can monitor the utilization metrics of the job to optimize the resources a job would need.+* Fewer steps are involved to run a job. * To further simplify job submission, you can skip the resources altogether. Azure Machine Learning defaults the instance count and chooses an instance type (VM size) based on factors like quota, cost, performance and disk size. * Shorter wait times before a job starts executing in some cases.-* User identity and workspace user assigned managed identity is supported for job submission. +* User identity and workspace user-assigned managed identity are supported for job submission. * With managed network isolation you can streamline and automate your network isolation configuration.+* Admin control through quota and Azure policies. ## How to use serverless compute |
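To make the serverless workflow concrete, here's a minimal sketch using the Azure CLI `ml` extension. It assumes that leaving out an explicit `compute` target submits the job to serverless compute; the training script, curated environment name, and VM size below are illustrative placeholders, not values from the article:

```bash
# Minimal command job spec with no `compute` field; resource sizes are optional.
cat > job.yml <<'EOF'
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: ./src
command: python train.py
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
resources:
  instance_type: Standard_DS3_v2
  instance_count: 1
EOF

# Requires the Azure CLI `ml` extension (az extension add -n ml); names are placeholders.
az ml job create --file job.yml --resource-group contoso-rg --workspace-name contoso-ws
```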
machine-learning | Reference Automl Images Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md | The following schemas are applicable when the input request contains one image. #### Image classification (binary/multi-class) -Endpoint for image classification returns all the labels in the dataset and their probability scores for the input image in the following format. `visualizations` and `attributions` are related to explainability and when the request is only for scoring, values for these keys will always be None. For more information on explainability input and output schema for image classification, see the [explainability for image classification section](#image-classification-binarymulti-class-2). +Endpoint for image classification returns all the labels in the dataset and their probability scores for the input image in the following format. `visualizations` and `attributions` are related to explainability and when the request is only for scoring, these keys will not be included in the output. For more information on explainability input and output schema for image classification, see the [explainability for image classification section](#image-classification-binarymulti-class-2). ```json [ {- "filename": "/tmp/tmppjr4et28", "probs": [ 2.098e-06, 4.783e-08, Endpoint for image classification returns all the labels in the dataset and thei "carton", "milk_bottle", "water_bottle"- ], - "visualizations": None, - "attributions": None + ] } ] ``` #### Image classification multi-label -For image classification multi-label, model endpoint returns labels and their probabilities. `visualizations` and `attributions` are related to explainability and when the request is only for scoring, values for these keys will always be None. For more information on explainability input and output schema for multi-label classification, see the [explainability for image classification multi-label section](#image-classification-multi-label-2). +For image classification multi-label, model endpoint returns labels and their probabilities. `visualizations` and `attributions` are related to explainability and when the request is only for scoring, these keys will not be included in the output. For more information on explainability input and output schema for multi-label classification, see the [explainability for image classification multi-label section](#image-classification-multi-label-2). ```json [ {- "filename": "/tmp/tmpsdzxlmlm", "probs": [ 0.997, 0.960, For image classification multi-label, model endpoint returns labels and their pr "carton", "milk_bottle", "water_bottle"- ], - "visualizations": None, - "attributions": None + ] } ] ``` Object detection model returns multiple boxes with their scaled top-left and bot ```json [ {- "filename": "/tmp/tmpdkg2wkdy", "boxes": [ { "box": { In instance segmentation, output consists of multiple boxes with their scaled to ```json [ {- "filename": "/tmp/tmpi8604s0h", "boxes": [ { "box": { Predictions made on model endpoints follow different schema depending on the tas The following schemas are defined for the case of two input images. #### Image classification (binary/multi-class)-Output schema is [same as described above](#data-schema-for-online-scoring) except that `visualizations` and `attributions` key values won't be `None`, if these keys were set to `True` in the request. 
+Output schema is [same as described above](#data-schema-for-online-scoring) except that `visualizations` and `attributions` key values are included, if these keys were set to `True` in the request. If `model_explainability`, `visualizations`, `attributions` are set to `True` in the input request, then the output will have `visualizations` and `attributions`. More details on these parameters are explained in the following table. Visualizations and attributions are generated against a class that has the highest probability score. If `model_explainability`, `visualizations`, `attributions` are set to `True` in ```json [ {- "filename": "/tmp/tmp7lqqp4pt/tmp7xmop_j8", "probs": [ 0.006, 9.345e-05, If `model_explainability`, `visualizations`, `attributions` are set to `True` in ```json [ {- "filename": "/tmp/tmp_9zieom3/tmp6threa9_", "probs": [ 0.994, 0.994, |
migrate | Migrate Support Matrix Vmware Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md | The table summarizes VMware vSphere hypervisor requirements. | **VMware vCenter Server** | Version 5.5, 6.0, 6.5, 6.7, 7.0. **VMware vSphere ESXi host** | Version 5.5, 6.0, 6.5, 6.7, 7.0.-**vCenter Server permissions** | Agentless migration uses the [Migrate Appliance](migrate-appliance.md). The appliance needs these permissions in vCenter Server:<br/><br/> - **Datastore.Browse** (Datastore -> Browse datastore): Allow browsing of VM log files to troubleshoot snapshot creation and deletion.<br/><br/> - **Datastore.FileManagement** (Datastore -> Low level file operations): Allow read/write/delete/rename operations in the datastore browser, to troubleshoot snapshot creation and deletion.<br/><br/> - **VirtualMachine.Config.ChangeTracking** (Virtual machine -> Disk change tracking): Allow enable or disable change tracking of VM disks, to pull changed blocks of data between snapshots.<br/><br/> - **VirtualMachine.Config.DiskLease** (Virtual machine -> Disk lease): Allow disk lease operations for a VM, to read the disk using the VMware vSphere Virtual Disk Development Kit (VDDK).<br/><br/> - **VirtualMachine.Provisioning.DiskRandomRead** (Virtual machine -> Provisioning -> Allow read-only disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.DiskRandomAccess** (Virtual machine -> Provisioning -> Allow disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.GetVmFiles** (Virtual machine -> Provisioning -> Allow virtual machine download): Allows read operations on files associated with a VM, to download the logs and troubleshoot if failure occurs.<br/><br/> - **VirtualMachine.State.\*** (Virtual machine -> Snapshot management): Allow creation and management of VM snapshots for replication.<br/><br/> - **VirtualMachine.Interact.PowerOff** (Virtual machine > Interaction > Power off): Allow the VM to be powered off during migration to Azure. +**vCenter Server permissions** | Agentless migration uses the [Migrate Appliance](migrate-appliance.md). 
The appliance needs these permissions in vCenter Server:<br/><br/> - **Datastore.Browse** (Datastore -> Browse datastore): Allow browsing of VM log files to troubleshoot snapshot creation and deletion.<br/><br/> - **Datastore.FileManagement** (Datastore -> Low level file operations): Allow read/write/delete/rename operations in the datastore browser, to troubleshoot snapshot creation and deletion.<br/><br/> - **VirtualMachine.Config.ChangeTracking** (Virtual machine -> Disk change tracking): Allow enable or disable change tracking of VM disks, to pull changed blocks of data between snapshots.<br/><br/> - **VirtualMachine.Config.DiskLease** (Virtual machine -> Disk lease): Allow disk lease operations for a VM, to read the disk using the VMware vSphere Virtual Disk Development Kit (VDDK).<br/><br/> - **VirtualMachine.Provisioning.DiskRandomRead** (Virtual machine -> Provisioning -> Allow read-only disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.DiskRandomAccess** (Virtual machine -> Provisioning -> Allow disk access): Allow opening a disk on a VM, to read the disk using the VDDK.<br/><br/> - **VirtualMachine.Provisioning.GetVmFiles** (Virtual machine -> Provisioning -> Allow virtual machine download): Allows read operations on files associated with a VM, to download the logs and troubleshoot if failure occurs.<br/><br/> - **VirtualMachine.State.\*** (Virtual machine -> Snapshot management): Allow creation and management of VM snapshots for replication.<br/><br/> - **VirtualMachine.GuestOperations.\*** (Virtual machine -> Guest operations): Allow Discovery, Software Inventory, and Dependency Mapping on VMs.<br/><br/> -**VirtualMachine.Interact.PowerOff** (Virtual machine > Interaction > Power off): Allow the VM to be powered off during migration to Azure. **Multiple vCenter Servers** | A single appliance can connect to up to 10 vCenter Servers. |
open-datasets | Dataset Genomics Data Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-genomics-data-lake.md | The Genomics Data Lake is hosted in the West US 2 and West Central US Azure regi | [OpenCravat](dataset-open-cravat.md) | OpenCravat: Open Custom Ranked Analysis of Variants Toolkit | | [ENCODE](dataset-encode.md) | ENCODE: Encyclopedia of DNA Elements | | [GATK Resource Bundle](dataset-gatk-resource-bundle.md) | GATK Resource bundle |+| [TCGA Open Data](dataset-encode.md) | TCGA Open Data | +| [Pan UK-Biobank](dataset-panancestry-uk-bio-bank.md) | Pan UK-Biobank | ## Next steps |
orbital | Geospatial Reference Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/geospatial-reference-architecture.md | Various Spark libraries are available for working with geospatial data on Azure But [other solutions also exist for processing and scaling geospatial workloads with Azure Databricks](https://databricks.com/blog/2019/12/05/processing-geospatial-data-at-scale-with-databricks.html). -- Other Python libraries to consider include [PySAL](http://pysal.org/), [Rasterio](https://rasterio.readthedocs.io/en/latest/intro.html), [WhiteboxTools](https://www.whiteboxgeo.com/manual/wbt_book/intro.html), [Turf.js](https://turfjs.org/), [Pointpats](https://pointpats.readthedocs.io/en/latest/), [Raster Vision](https://docs.rastervision.io/en/0.13/), [EarthPy](https://earthpy.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html), [Planetary Computer](https://planetarycomputer.microsoft.com/), [PDAL](https://pdal.io/), etc.+- Other Python libraries to consider include [PySAL](http://pysal.org/), [Rasterio](https://rasterio.readthedocs.io/en/latest/intro.html), [WhiteboxTools](https://www.whiteboxgeo.com/manual/wbt_book/intro.html), [Turf.js](https://turfjs.org/), [Pointpats](https://pypi.org/project/pointpats/), [Raster Vision](https://docs.rastervision.io/en/0.13/), [EarthPy](https://earthpy.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html), [Planetary Computer](https://planetarycomputer.microsoft.com/), [PDAL](https://pdal.io/), etc. - [Vector tiles](https://github.com/mapbox/vector-tile-spec) provide an efficient way to display GIS data on maps. A solution could use PostGIS to dynamically query vector tiles. This approach works well for simple queries and result sets that contain well under 1 million records. But in the following cases, a different approach may be better: - Your queries are computationally expensive. |
private-link | Inspect Traffic With Azure Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md | You may need to inspect or block traffic from clients to the services exposed vi The following limitations apply: -* Network security groups (NSG) traffic is bypassed from private endpoints +* Network security groups (NSG) traffic is bypassed from private endpoints because network policies are disabled for a subnet in a virtual network by default. To use network policies such as user-defined routes and network security group support, network policy support must be enabled for the subnet. This setting applies only to private endpoints within the subnet and affects all private endpoints in that subnet. For other resources in the subnet, access is controlled based on security rules in the network security group. * User-defined routes (UDR) traffic is bypassed from private endpoints. User-defined routes can be used to override traffic destined for the private endpoint. |
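Assuming the names below are placeholders, enabling network policies on the private endpoint subnet looks roughly like this (the parameter name varies by Azure CLI version; older versions use `--disable-private-endpoint-network-policies`):

```azurecli
# Enable NSG and UDR support for private endpoints in the subnet.
az network vnet subnet update \
    --resource-group contoso-rg \
    --vnet-name contoso-vnet \
    --name pe-subnet \
    --private-endpoint-network-policies Enabled
```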
private-link | Private Endpoint Dns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md | For Azure services, use the recommended zone names as described in the following | Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders | ||||-| Azure Automation / (Microsoft.Automation/automationAccounts) / Webhook, DSCAndHybridWorker | privatelink.azure-automation.net | azure-automation.net | +| Azure Automation (Microsoft.Automation/automationAccounts) / Webhook, DSCAndHybridWorker | privatelink.azure-automation.net | azure-automation.net | | Azure SQL Database (Microsoft.Sql/servers) / sqlServer | privatelink.database.windows.net | database.windows.net |-| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net | +| Azure SQL Managed Instance (Microsoft.Sql/managedInstances) / managedInstance | privatelink.{dnsPrefix}.database.windows.net | {instanceName}.{dnsPrefix}.database.windows.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Sql | privatelink.sql.azuresynapse.net | sql.azuresynapse.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) / SqlOnDemand | privatelink.sql.azuresynapse.net | {workspaceName}-ondemand.sql.azuresynapse.net | | Azure Synapse Analytics (Microsoft.Synapse/workspaces) / Dev | privatelink.dev.azuresynapse.net | dev.azuresynapse.net | For Azure services, use the recommended zone names as described in the following | Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.windows.net | redis.cache.windows.net | | Azure Cache for Redis Enterprise (Microsoft.Cache/RedisEnterprise) / redisEnterprise | privatelink.redisenterprise.cache.azure.net | redisenterprise.cache.azure.net | | Microsoft Purview (Microsoft.Purview) / account | privatelink.purview.azure.com | purview.azure.com |-| Microsoft Purview (Microsoft.Purview) / portal| privatelink.purviewstudio.azure.com | purview.azure.com | +| Microsoft Purview (Microsoft.Purview) / portal | privatelink.purviewstudio.azure.com | purview.azure.com | | Azure Digital Twins (Microsoft.DigitalTwins) / digitalTwinsInstances | privatelink.digitaltwins.azure.net | digitaltwins.azure.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.net | azurehdinsight.net | | Azure Arc (Microsoft.HybridCompute) / hybridcompute | privatelink.his.arc.azure.com <br/> privatelink.guestconfiguration.azure.com </br> privatelink.kubernetesconfiguration.azure.com | his.arc.azure.com <br/> guestconfiguration.azure.com </br> kubernetesconfiguration.azure.com | |
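As an illustration of wiring up one of the recommended zones, a hedged Azure CLI sketch that creates the zone for Azure SQL Database and links it to the virtual network hosting the private endpoint (resource names are placeholders):

```azurecli
# Create the recommended private DNS zone for Azure SQL Database.
az network private-dns zone create --resource-group contoso-rg --name "privatelink.database.windows.net"

# Link the zone to the virtual network so private endpoint records resolve from it.
az network private-dns link vnet create --resource-group contoso-rg \
    --zone-name "privatelink.database.windows.net" --name sql-dns-link \
    --virtual-network contoso-vnet --registration-enabled false
```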
reliability | Reliability Energy Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md | Last updated 01/13/2023 -# Reliability in Azure Data Manager for Energy (Preview) +# Reliability in Azure Data Manager for Energy This article describes reliability support in Azure Data Manager for Energy, and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](../reliability/overview.md). -- ## Availability zone support Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability in the case of a local zone failure. When one zone experiences a failure, the remaining two zones support all regional services, capacity, and high availability. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](availability-zones-overview.md). -Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required. +Azure Data Manager for Energy supports zone-redundant instance by default and there's no setup required. ### Prerequisites -The Azure Data Manager for Energy Preview supports availability zones in the following regions: +The Azure Data Manager for Energy supports availability zones in the following regions: | Americas | Europe | Middle East | Africa | Asia Pacific | The Azure Data Manager for Energy Preview supports availability zones in the fol ### Zone down experience During a zone-wide outage, no action is required during zone recovery. There may be a brief degradation of performance until the service self-heals and re-balances underlying capacity to adjust to healthy zones. -If you're experiencing failures with Azure Data Manager for Energy PreviewAPIs, you may need to implement a retry mechanism for 5XX errors. +If you're experiencing failures with Azure Data Manager for Energy APIs, you may need to implement a retry mechanism for 5XX errors. ## Next steps > [!div class="nextstepaction"] |
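For the retry guidance on 5XX errors, a minimal shell sketch; the API URL, token, and request body are placeholders, and curl's `--retry` option retries transient failures, including HTTP 408, 429, 500, 502, 503, and 504 responses:

```bash
# Placeholder endpoint and token; adjust the body and headers to the API being called.
curl --retry 5 --retry-delay 2 --retry-max-time 120 \
    -H "Authorization: Bearer $ACCESS_TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"query": "*"}' \
    "$ADME_API_URL"
```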
role-based-access-control | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md | $validateRemovedRoles = Get-AzRoleAssignment -Scope /subscriptions/$subId | Wher ## Custom roles -### Symptom - Unable to update a custom role +### Symptom - Unable to update or delete a custom role -You're unable to update an existing custom role. +You're unable to update or delete an existing custom role. -**Cause** +**Cause 1** -You're currently signed in with a user that doesn't have permission to update custom roles. +You're currently signed in with a user that doesn't have permission to update or delete custom roles. -**Solution** +**Solution 1** Check that you're currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator). +**Cause 2** ++The custom role includes a subscription in assignable scopes and that subscription is in a [disabled state](../cost-management-billing/manage/subscription-states.md). ++**Solution 2** ++Reactivate the disabled subscription and update the custom role as needed. For more information, see [Reactivate a disabled Azure subscription](../cost-management-billing/manage/subscription-disabled.md). + ### Symptom - Unable to create or update a custom role When you try to create or update a custom role, you get an error similar to following: |
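A quick way to check for Cause 2 is to look at the state of each subscription listed in the custom role's assignable scopes; a hedged sketch with placeholder names:

```azurecli
# List the assignable scopes of the custom role.
az role definition list --custom-role-only true \
    --query "[?roleName=='My Custom Role'].assignableScopes" --output json

# Check whether a subscription referenced in those scopes is disabled.
az account show --subscription <subscription-id> --query state --output tsv
```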
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md | Clone the repository and prepare the execution environment by using the followin Ensure the Virtual Machine has the following prerequisites installed: - git+ - jq + - unzip Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources. |
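On an Ubuntu-based deployment virtual machine (an assumption; package names may differ on other distributions), the prerequisites listed above can be installed like this:

```bash
# Install git, jq, and unzip on a Debian/Ubuntu virtual machine.
sudo apt-get update
sudo apt-get install -y git jq unzip
```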
storage | File Sync Troubleshoot Cloud Tiering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-cloud-tiering.md | - Title: Troubleshoot Azure File Sync cloud tiering -description: Troubleshoot common issues with cloud tiering in an Azure File Sync deployment. --- Previously updated : 06/02/2023-----# Troubleshoot Azure File Sync cloud tiering --Cloud tiering, an optional feature of Azure File Sync, decreases the amount of local storage required while keeping the performance of an on-premises file server. When enabled, this feature stores only frequently accessed (hot) files on your local server. Infrequently accessed (cool) files are split into namespace (file and folder structure) and file content. --There are two paths for failures in cloud tiering: --- Files can fail to tier, which means that Azure File Sync unsuccessfully attempts to tier a file to Azure Files.-- Files can fail to recall, which means that the Azure File Sync file system filter (StorageSync.sys) fails to download data when a user attempts to access a file that has been tiered.--There are two main classes of failures that can happen via either failure path: --- Cloud storage failures- - *Transient storage service availability issues*. For more information, see the [Service Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/v1_5/). - - *Inaccessible Azure file share*. This failure typically happens when you delete the Azure file share when it is still a cloud endpoint in a sync group. - - *Inaccessible storage account*. This failure typically happens when you delete the storage account while it still has an Azure file share that is a cloud endpoint in a sync group. -- Server failures - - *Azure File Sync file system filter (StorageSync.sys) isn't loaded*. In order to respond to tiering/recall requests, the Azure File Sync file system filter must be loaded. The filter not being loaded can happen for several reasons, but the most common reason is that an administrator unloaded it manually. The Azure File Sync file system filter must be loaded at all times for Azure File Sync to properly function. - - *Missing, corrupt, or otherwise broken reparse point*. A reparse point is a special data structure on a file that consists of two parts: - 1. A reparse tag, which indicates to the operating system that the Azure File Sync file system filter (StorageSync.sys) might need to do some action on IO to the file. - 2. Reparse data, which indicates to the file system filter the URI of the file on the associated cloud endpoint (the Azure file share). - - The most common way a reparse point could become corrupted is if an administrator attempts to modify either the tag or its data. - - *Network connectivity issues*. In order to tier or recall a file, the server must have internet connectivity. --The following sections indicate how to troubleshoot cloud tiering issues and determine if an issue is a cloud storage issue or a server issue. --## How to monitor tiering activity on a server -To monitor tiering activity on a server, use Event ID 9003, 9016, and 9029 in the Telemetry event log (located under `Applications and Services\Microsoft\FileSync\Agent` in Event Viewer). --- Event ID 9003 provides error distribution for a server endpoint. For example, Total Error Count, ErrorCode, etc. Note, one event is logged per error code.-- Event ID 9016 provides ghosting results for a volume. 
For example, Free space percent is, Number of files ghosted in session, Number of files failed to ghost, etc.-- Event ID 9029 provides ghosting session information for a server endpoint. For example, Number of files attempted in the session, Number of files tiered in the session, Number of files already tiered, etc.--## How to monitor recall activity on a server -To monitor recall activity on a server, use Event ID 9005, 9006, 9009, and 9059 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer). --- Event ID 9005 provides recall reliability for a server endpoint. For example, Total unique files accessed, Total unique files with failed access, etc.-- Event ID 9006 provides recall error distribution for a server endpoint. For example, Total Failed Requests, ErrorCode, etc. Note, one event is logged per error code.-- Event ID 9009 provides recall session information for a server endpoint. For example, DurationSeconds, CountFilesRecallSucceeded, CountFilesRecallFailed, etc.-- Event ID 9059 provides application recall distribution for a server endpoint. For example, ShareId, Application Name, and TotalEgressNetworkBytes.--## How to troubleshoot files that fail to tier -If files fail to tier to Azure Files: --1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent. - 1. Verify the files exist in the Azure file share. -- > [!NOTE] - > A file must be synced to an Azure file share before it can be tiered. -- 2. Verify the server has internet connectivity. - 3. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running: - - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed. --> [!NOTE] -> An Event ID 9003 is logged once an hour in the Telemetry event log if a file fails to tier (one event is logged per error code). Check the [Tiering errors and remediation](#tiering-errors-and-remediation) section to see if remediation steps are listed for the error code. --## Tiering errors and remediation --| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation | -||-|--|-|-| -| 0x80c86045 | -2134351803 | ECS_E_INITIAL_UPLOAD_PENDING | The file failed to tier because the initial upload is in progress. | No action required. The file will be tiered once the initial upload completes. | -| 0x80c86043 | -2134351805 | ECS_E_GHOSTING_FILE_IN_USE | The file failed to tier because it's in use. | No action required. The file will be tiered when it's no longer in use. | -| 0x80c80241 | -2134375871 | ECS_E_GHOSTING_EXCLUDED_BY_SYNC | The file failed to tier because it's excluded by sync. | No action required. Files in the sync exclusion list can't be tiered. | -| 0x80c86042 | -2134351806 | ECS_E_GHOSTING_FILE_NOT_FOUND | The file failed to tier because it wasn't found on the server. | No action required. If the error persists, check if the file exists on the server. | -| 0x80c83053 | -2134364077 | ECS_E_CREATE_SV_FILE_DELETED | The file failed to tier because it was deleted in the Azure file share. | No action required. The file should be deleted on the server when the next download sync session runs. | -| 0x80c8600e | -2134351858 | ECS_E_AZURE_SERVER_BUSY | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. 
| -| 0x80072ee7 | -2147012889 | WININET_E_NAME_NOT_RESOLVED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. | -| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to tier due to access denied error. This error can occur if the file is located on a DFS-R read-only replication folder. | Azure File Sync doesn't support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. | -| 0x80072efe | -2147012866 | WININET_E_CONNECTION_ABORTED | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. | -| 0x80c80261 | -2134375839 | ECS_E_GHOSTING_MIN_FILE_SIZE | The file failed to tier because the file size is less than the supported size. | The minimum supported file size is based on the file system cluster size (double file system cluster size). For example, if the file system cluster size is 4 KiB, the minimum file size is 8 KiB. | -| 0x80c83007 | -2134364153 | ECS_E_STORAGE_ERROR | The file failed to tier due to an Azure storage issue. | If the error persists, open a support request. | -| 0x800703e3 | -2147023901 | ERROR_OPERATION_ABORTED | The file failed to tier because it was recalled at the same time. | No action required. The file will be tiered when the recall completes and the file is no longer in use. | -| 0x80c80264 | -2134375836 | ECS_E_GHOSTING_FILE_NOT_SYNCED | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. | -| 0x80070001 | -2147942401 | ERROR_INVALID_FUNCTION | The file failed to tier because the cloud tiering filter driver (storagesync.sys) isn't running. | To resolve this issue, open an elevated command prompt and run the following command: `fltmc load storagesync`<br>If the Azure File Sync filter driver fails to load when running the `fltmc` command, uninstall the Azure File Sync agent, restart the server, and reinstall the Azure File Sync agent. | -| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to tier due to insufficient disk space on the volume where the server endpoint is located. | To resolve this issue, free at least 100 MiB of disk space on the volume where the server endpoint is located. | -| 0x80070490 | -2147023728 | ERROR_NOT_FOUND | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. | -| 0x80c80262 | -2134375838 | ECS_E_GHOSTING_UNSUPPORTED_RP | The file failed to tier because it's an unsupported reparse point. | If the file is a Data Deduplication reparse point, follow the steps in the [planning guide](file-sync-planning.md#data-deduplication) to enable Data Deduplication support. Files with reparse points other than Data Deduplication aren't supported and won't be tiered. | -| 0x80c83052 | -2134364078 | ECS_E_CREATE_SV_STREAM_ID_<br>MISMATCH | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. | -| 0x80c80269 | -2134375831 | ECS_E_GHOSTING_REPLICA_NOT_<br>FOUND | The file failed to tier because it hasn't synced to the Azure file share. | No action required. The file will tier once it has synced to the Azure file share. 
| -| 0x80072ee2 | -2147012894 | WININET_E_TIMEOUT | The file failed to tier due to a network issue. | No action required. If the error persists, check network connectivity to the Azure file share. | -| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file failed to tier because it has been modified. | No action required. The file will tier once the modified file has synced to the Azure file share. | -| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to tier due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. | -| 0x8e5e03fe | -1906441218 | JET_errDiskIO | The file failed to tier due to an I/O error when writing to the cloud tiering database. | If the error persists, run chkdsk on the volume and check the storage hardware. | -| 0x8e5e0442 | -1906441150 | JET_errInstanceUnavailable | The file failed to tier because the cloud tiering database isn't running. | To resolve this issue, restart the FileSyncSvc service or server. If the error persists, run chkdsk on the volume and check the storage hardware. | -| 0x80C80285 | -2134375803 | ECS_E_GHOSTING_SKIPPED_BY_<br>CUSTOM_EXCLUSION_LIST | The file can't be tiered because the file type is excluded from tiering. | To tier files with this file type, modify the GhostingExclusionList registry setting which is located under HKEY_LOCAL_MACHINE\SOFTWARE<br>\Microsoft\Azure\StorageSync | -| 0x80C86050 | -2134351792 | ECS_E_REPLICA_NOT_READY_FOR_<br>TIERING | The file failed to tier because the current sync mode is initial upload or reconciliation. | No action required. The file will be tiered once sync completes initial upload or reconciliation. | -| 0x80c8304e | -2134364082 | ECS_E_WORK_FRAMEWORK_ACTION_<br>RETRY_NOT_SUPPORTED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c8309c | -2134364004 | ECS_E_CREATE_SV_BATCHED_CHANGE_<br>DETECTION_FAILED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x8000ffff | -2147418113 | E_UNEXPECTED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c80220 | -2134375904 | ECS_E_SYNC_METADATA_IO_ERROR | The sync database has encountered an IO error. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c830a7 | -2134363993 | ECS_E_AZURE_FILE_SNAPSHOT_LIMIT_<br>REACHED | The Azure file snapshot limit has been reached. | Upgrade the Azure File Sync agent to the latest version. After upgrading the agent, run the `DeepScrubbingScheduledTask` located under \Microsoft\StorageSync. | -| 0x80c80367 | -2134375577 | ECS_E_FILE_SNAPSHOT_OPERATION_<br>EXECUTION_MAX_ATTEMPTS_REACHED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c8306f | -2134364049 | ECS_E_ETAG_MISMATCH | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c8304c | -2134364084 | ECS_E_ASYNC_POLLING_TIMEOUT | Timeout error occurred. | No action required. 
This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80070299 | -2147024231 | ERROR_FILE_SYSTEM_LIMITATION | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c83054 | -2134364076 | ECS_E_CREATE_SV_UNKNOWN_<br>GLOBAL_ID | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c8309b | -2134364005 | ECS_E_CREATE_SV_PER_ITEM_CHANGE_<br>DETECTION_FAILED | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c83034 | -2134364108 | ECS_E_FORBIDDEN | Access is denied. | Please check the access policies on the storage account, and also check your proxy settings. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80070034 | -2147024844 | ERROR_DUP_NAME | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80071128 | -2147020504 | ERROR_INVALID_REPARSE_DATA | The data is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). | -| 0x8e5e0450 | -1906441136 | JET_errInvalidSesid | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80092004 | -2146885628 | CRYPT_E_NOT_FOUND | Certificate required for Azure File Sync authentication is missing. | Run this PowerShell command on the server to reset the certificate `Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>` | -| 0x80c80020 | -2134376416 | ECS_E_CLUSTER_NOT_RUNNING | The Failover Cluster service is not running. | Verify the cluster service (clussvc) is running. [Learn more](/troubleshoot/windows-server/high-availability/troubleshoot-cluster-service-fails-to-start). | -| 0x80c83036 | -2134364106 | ECS_E_NOT_FOUND | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x801f0005 | -2145452027 | ERROR_FLT_INVALID_NAME_REQUEST | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80071126 | -2147020506 | ERROR_NOT_A_REPARSE_POINT | An internal error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80070718 | -2147023080 | ERROR_NOT_ENOUGH_QUOTA | Not enough server memory resources available to process this command. | Monitor memory usage on your server. [Learn more](file-sync-planning.md#recommended-system-resources). | -| 0x8007046a | -2147023766 | ERROR_NOT_ENOUGH_SERVER_MEMORY | Not enough server memory resources available to process this command. | Monitor memory usage on your server. [Learn more](file-sync-planning.md#recommended-system-resources). | -| 0x80070026 | -2147024858 | COR_E_ENDOFSTREAM | An external error occurred. | No action required. This error should automatically resolve. 
If the error persists for several days, create a support request. | -| 0x80131501 | -2146233087 | COR_E_SYSTEM | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c86040 | -2134351808 | ECS_E_AZURE_FILE_SHARE_INVALID_<br>HEADER | An unexpected error occurred. | If the error persists for more than a day, create a support request. | -| 0x80c80339 | -2134375623 | ECS_E_CERT_DATE_INVALID | The server's SSL certificate is expired. | Check with your organization's tech support to get help. If you need further investigation, create a support request. | -| 0x80c80337 | -2134375625 | ECS_E_INVALID_CA | The server's SSL certificate was issued by a certificate authority that isn't trusted by this PC. | Check with your organization's tech support to get help. If you need further investigation, create a support request. | -| 0x80c80001 | -2134376447 | ECS_E_SYNC_INVALID_PROTOCOL_FORMAT | A connection with the service could not be established. | Please check and configure the proxy setting correctly or remove the proxy setting. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x800706d9 | -2147023143 | EPT_S_NOT_REGISTERED | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80070035 | -2147024843 | ERROR_BAD_NETPATH | An external error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80070571 | -2147023503 | ERROR_DISK_CORRUPT | The disk structure is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). | -| 0x8007052e | -2147023570 | ERROR_LOGON_FAILURE | Operation failed due to an authentication failure. | If the error persists for more than a day, create a support request. | -| 0x8002802b | -2147319765 | TYPE_E_ELEMENTNOTFOUND | An unexpected error occurred. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80072f00 | -2147012864 | WININET_E_FORCE_RETRY | A connection with the service could not be established. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | --## How to troubleshoot files that fail to be recalled -If files fail to be recalled: -1. In Event Viewer, review the telemetry, operational and diagnostic event logs, located under Applications and Services\Microsoft\FileSync\Agent. - 1. Verify the files exist in the Azure file share. - 2. Verify the server has internet connectivity. - 3. Open the Services MMC snap-in and verify the Storage Sync Agent service (FileSyncSvc) is running. - 4. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running: - - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed. --> [!NOTE] -> An Event ID 9006 is logged once per hour in the Telemetry event log if a file fails to recall (one event is logged per error code). Check the [Recall errors and remediation](#recall-errors-and-remediation) section to see if remediation steps are listed for the error code. 
--## Recall errors and remediation --| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation | -||-|--|-|-| -| 0x80070079 | -2147942521 | ERROR_SEM_TIMEOUT | The file failed to recall due to an I/O timeout. This issue can occur for several reasons: server resource constraints, poor network connectivity, or an Azure storage issue (for example, throttling). | No action required. If the error persists for several hours, please open a support case. | -| 0x80070036 | -2147024842 | ERROR_NETWORK_BUSY | The file failed to recall due to a network issue. | If the error persists, check network connectivity to the Azure file share. | -| 0x80c80037 | -2134376393 | ECS_E_SYNC_SHARE_NOT_FOUND | The file failed to recall because the server endpoint was deleted. | To resolve this issue, see [Tiered files aren't accessible on the server after deleting a server endpoint](?tabs=portal1%252cazure-portal#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). | -| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file failed to recall due to an access denied error. This issue can occur if the firewall and virtual network settings on the storage account are enabled and the server does not have access to the storage account. | To resolve this issue, add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. | -| 0x80c86002 | -2134351870 | ECS_E_AZURE_RESOURCE_NOT_FOUND | The file failed to recall because it's not accessible in the Azure file share. | To resolve this issue, verify the file exists in the Azure file share. If the file exists in the Azure file share, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md#supported-versions). | -| 0x80c8305f | -2134364065 | ECS_E_EXTERNAL_STORAGE_ACCOUNT_<br>AUTHORIZATION_FAILED | The file failed to recall due to authorization failure to the storage account. | To resolve this issue, verify [Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#troubleshoot-rbac). | -| 0x80c86030 | -2134351824 | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | The file failed to recall because the Azure file share isn't accessible. | Verify the file share exists and is accessible. If the file share was deleted and recreated, perform the steps documented in the [Sync failed because the Azure file share was deleted and recreated](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cazure-portal#-2134375810) section to delete and recreate the sync group. | -| 0x800705aa | -2147023446 | ERROR_NO_SYSTEM_RESOURCES | The file failed to recall due to insufficient system resources. | If the error persists, investigate which application or kernel-mode driver is exhausting system resources. | -| 0x8007000e | -2147024882 | ERROR_OUTOFMEMORY | The file failed to recall due to insufficient memory. | If the error persists, investigate which application or kernel-mode driver is causing the low memory condition. | -| 0x80070070 | -2147024784 | ERROR_DISK_FULL | The file failed to recall due to insufficient disk space. | To resolve this issue, free up space on the volume by moving files to a different volume, increase the size of the volume, or force files to tier by using the `Invoke-StorageSyncCloudTiering` cmdlet. 
| -| 0x80072f8f | -2147012721 | WININET_E_DECODING_FAILED | The file failed to recall because the server was unable to decode the response from the Azure File Sync service. | This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. | -| 0x80090352 | -2146892974 | SEC_E_ISSUING_CA_UNTRUSTED | The file failed to recall because your organization is using a TLS terminating proxy or a malicious entity is intercepting the traffic between your server and the Azure File Sync service. | If you're certain this is expected (because your organization is using a TLS terminating proxy), follow the steps documented for error [CERT_E_UNTRUSTEDROOT](file-sync-troubleshoot-sync-errors.md#-2146762487) to resolve this issue. | -| 0x80c86047 | -2134351801 | ECS_E_AZURE_SHARE_SNAPSHOT_NOT_FOUND | The file failed to recall because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. | -| 0x80070032 | -2147024846 | ERROR_NOT_SUPPORTED | An internal error occurred. | Please upgrade to the latest Azure File Sync agent version. If the error persists after upgrading the agent, create a support request. | -| 0x80070006 | -2147024890 | ERROR_INVALID_HANDLE | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x80c80310 | -2134375664 | ECS_E_INVALID_DOWNLOAD_RESPONSE | Azure File sync error. | If the error persists for more than a day, create a support request. | -| 0x8007045d | -2147023779 | ERROR_IO_DEVICE | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x80c8604b | -2134351797 | ECS_E_AZURE_FILE_SHARE_FILE_NOT_FOUND | File not found in the file share. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. | -| 0x80070021 | -2147024863 | ERROR_LOCK_VIOLATION | The process cannot access the file because another process has locked a portion of the file. | No action required. Once the application closes the handle to the file, recall should succeed. | -| 0x80c8604c | -2134351796 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND_<br>SYNC_PENDING | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. Recall should succeed after the sync session completes. | -| 0x80c80312 | -2134375662 | ECS_E_DOWNLOAD_SESSION_STREAM_INTERRUPTED | Couldn't finish downloading files. Sync will try again later. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80c8600c | -2134351860 | ECS_E_AZURE_INTERNAL_ERROR | The server encountered an internal error. | No action required. If the error persists for more than a day, create a support request. | -| 0x80c8600b | -2134351861 | ECS_E_AZURE_INVALID_RANGE | The server encountered an internal error. | No action required. If the error persists for more than a day, create a support request. | -| 0x8007045b | -2147023781 | ERROR_SHUTDOWN_IN_PROGRESS | A system shutdown is in progress. | No action required. 
If the error persists for more than a day, create a support request. | -| 0x80072efd | -2147012867 | WININET_E_CANNOT_CONNECT | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x800703ee | -2147023890 | ERROR_FILE_INVALID | The volume for a file has been externally altered so that the opened file is no longer valid. | If the error persists for more than a day, create a support request. | -| 0x80c86048 | -2134351800 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND | An internal error occurred. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. | -| 0x80072f78 | -2147012744 | WININET_E_INVALID_SERVER_RESPONSE | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x8007139f | -2147019873 | ERROR_INVALID_STATE | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x80070570 | -2147023504 | ERROR_FILE_CORRUPT | The file or directory is corrupted and unreadable. | Run chkdsk on the volume. [Learn more](/windows-server/administration/windows-commands/chkdsk?tabs=event-viewer). | -| 0x800705ad | -2147023443 | ERROR_WORKING_SET_QUOTA | Insufficient quota to complete the requested service. | Monitor memory usage on your server. If the error persists for more than a day, create a support request. | -| 0x80070008 | -2147024888 | ERROR_NOT_ENOUGH_MEMORY | Not enough memory resources are available to process this command. | Monitor memory usage on your server. If the error persists for more than a day, create a support request. | -| 0x80c80072 | -2134376334 | ECS_E_BAD_GATEWAY | A connection with the service could not be established. | Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80190193 | -2145844845 | HTTP_E_STATUS_FORBIDDEN | Forbidden (403) error occurred. | Update Azure file share access policy. [Learn more](../../role-based-access-control/built-in-roles.md). | -| 0x80c8604e | -2134351794 | ECS_E_AZURE_FILE_SNAPSHOT_NOT_FOUND_ON_<br>CONFLICT_FILE | Unable to recall sync conflict loser file from Azure file share. | If this error is happening for a tiered file that is a sync conflict file, this file might not be needed by end users anymore. If the original file is available and valid, you may remove this file from the server endpoint. | -| 0x80c80075 | -2134376331 | ECS_E_ACCESS_TOKEN_CATASTROPHIC_FAILURE | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x80c8005b | -2134376357 | ECS_E_AZURE_FILE_SERVICE_UNAVAILABLE | The Azure File Service is currently unavailable. | If the error persists for more than a day, create a support request. | -| 0x80c83099 | -2134364007 | ECS_E_PRIVATE_ENDPOINT_ACCESS_BLOCKED | Private endpoint configuration access blocked. 
| Check the private endpoint configuration and allow access to the Azure File Sync service. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80c86000 | -2134351872 | ECS_E_AZURE_AUTHENTICATION_FAILED | Server failed to authenticate the request. | Check the network configuration and make sure the storage account accepts the server IP address. You can do this by adding the server IP, adding the server's IP subnet, or adding the server vnet to the authorized access control list to access the storage account. [Learn more](file-sync-deployment-guide.md#optional-configure-firewall-and-virtual-network-settings). | -| 0x80072ef1 | -2147012879 | ERROR_WINHTTP_OPERATION_CANCELLED | A connection with the service could not be established. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80c80338 | -2134375624 | ECS_E_CERT_CN_INVALID | The server's SSL certificate contains incorrect hostnames. The certificate can't be used to establish the SSL connection. | Check with your organization's tech support to get help. If you need further investigation, create a support request. | -| 0x80c8000c | -2134376436 | ECS_E_SYNC_UNKNOWN_URI | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x80c8033a | -2134375622 | ECS_E_SECURITY_CHANNEL_ERROR | There was a problem validating the server's SSL certificate. | Check with your organization's tech support to get help. If you need further investigation, create a support request. | -| 0x80131509 | -2146233079 | COR_E_INVALIDOPERATION | An unexpected error occurred. | If the error persists for more than a day, create a support request. | -| 0x80c8603d | -2134351811 | ECS_E_AZURE_UNKNOWN_FAILURE | An unexpected error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x80c8033f | -2134375617 | ECS_E_TOKEN_LIFETIME_IS_TOO_LONG | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x80190190 | -2145844848 | HTTP_E_STATUS_BAD_REQUEST | A connection with the service could not be established. | No action required. If the error persists for more than a day, create a support request. | -| 0x80c86036 | -2134351818 | ECS_E_AZURE_FILE_PARENT_NOT_FOUND | The specified parent path for the file does not exist | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. | -| 0x80c86049 | -2134351799 | ECS_E_AZURE_SHARE_SNAPSHOT_FILE_NOT_FOUND | File not found in the share snapshot. | You have likely performed an unsupported operation. [Learn more](file-sync-disaster-recovery-best-practices.md). Please find the original copy of the file and overwrite the tiered file in the server endpoint. | -| 0x80c80311 | -2134375663 | ECS_E_DOWNLOAD_SESSION_HASH_CONFLICT | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x800700a4 | -2147024732 | ERROR_MAX_THRDS_REACHED | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. 
| -| 0x80070147 | -2147024569 | ERROR_OFFSET_ALIGNMENT_VIOLATION | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x80090321 | -2146893023 | SEC_E_BUFFER_TOO_SMALL | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x801901a0 | -2145844832 | HTTP_E_STATUS_RANGE_NOT_SATISFIABLE | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x80c80066 | -2134376346 | ECS_E_CLUSTER_ID_MISMATCH | There is a mismatch between the cluster ID returned from cluster API and the cluster ID saved during the registration. | Please create a support request for further investigation of the issue. | -| 0x80c8032d | -2134375635 | ECS_E_PROXY_AUTH_REQUIRED | The proxy server used to access the internet needs your current credentials. | If your proxy requires authentication, update the proxy credentials. [Learn more](file-sync-firewall-and-proxy.md#proxy). | -| 0x8007007a | -2147024774 | ERROR_INSUFFICIENT_BUFFER | An internal error occurred. | No action required. If the error persists for more than a day, create a support request. | -| 0x8019012e | -2145844946 | HTTP_E_STATUS_REDIRECT | Azure File Sync does not support HTTP redirection. | Disable HTTP redirect on your proxy server or network device. | -| 0x800706be | -2147023170 | RPC_S_CALL_FAILED | An unknown error occurred. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | -| 0x80072747 | -2147014841 | WSAENOBUFS | An internal error occurred. | If the error persists, use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). | --## Tiered files are not accessible on the server after deleting a server endpoint -Tiered files on a server will become inaccessible if the files aren't recalled prior to deleting a server endpoint. --Errors logged if tiered files aren't accessible -- When syncing a file, error code -2147942467 (0x80070043 - ERROR_BAD_NET_NAME) is logged in the ItemResults event log-- When recalling a file, error code -2134376393 (0x80c80037 - ECS_E_SYNC_SHARE_NOT_FOUND) is logged in the RecallResults event log--Restoring access to your tiered files is possible if the following conditions are met: -- Server endpoint was deleted within past 30 days-- Cloud endpoint wasn't deleted -- File share wasn't deleted-- Sync group wasn't deleted--If the above conditions are met, you can restore access to the files on the server by recreating the server endpoint at the same path on the server within the same sync group within 30 days. --If the above conditions aren't met, restoring access isn't possible as these tiered files on the server are now orphaned. Follow these instructions to remove the orphaned tiered files. --**Notes** -- When tiered files aren't accessible on the server, the full file should still be accessible if you access the Azure file share directly.-- To prevent orphaned tiered files in the future, follow the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md) when deleting a server endpoint.--<a id="get-orphaned"></a>**How to get the list of orphaned tiered files** --1. 
Run the following PowerShell commands to list orphaned tiered files: -```powershell -Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" -$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path> -$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt -``` -2. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they're deleted. --<a id="remove-orphaned"></a>**How to remove orphaned tiered files** --*Option 1: Delete the orphaned tiered files* --This option deletes the orphaned tiered files on the Windows Server, but it requires removing the server endpoint if one exists (for example, because it was re-created after 30 days or is connected to a different sync group). File conflicts will occur if files are updated on the Windows Server or Azure file share before the server endpoint is recreated. --1. Back up the Azure file share and server endpoint location. -2. Remove the server endpoint in the sync group (if it exists) by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md). --> [!Warning] -> If the server endpoint isn't removed prior to using the `Remove-StorageSyncOrphanedTieredFiles` cmdlet, deleting the orphaned tiered file on the server will delete the full file in the Azure file share. --3. Run the following PowerShell commands to list orphaned tiered files: --```powershell -Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" -$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path> -$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt -``` -4. Save the OrphanTieredFiles.txt output file in case files need to be restored from backup after they're deleted. -5. Run the following PowerShell commands to delete orphaned tiered files: --```powershell -Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" -$orphanFilesRemoved = Remove-StorageSyncOrphanedTieredFiles -Path <folder path containing orphaned tiered files> -Verbose -$orphanFilesRemoved.OrphanedTieredFiles > DeletedOrphanFiles.txt -``` -**Notes** -- Tiered files modified on the server that aren't synced to the Azure file share will be deleted.-- Tiered files that are accessible (not orphan) won't be deleted.-- Non-tiered files will remain on the server.--6. Optional: Recreate the server endpoint if it was deleted in step 2. --*Option 2: Mount the Azure file share and copy the files locally that are orphaned on the server* --This option doesn't require removing the server endpoint but requires sufficient disk space to copy the full files locally. --1. [Mount](../files/storage-how-to-use-files-windows.md?toc=/azure/storage/filesync/toc.json) the Azure file share on the Windows Server that has orphaned tiered files. -2. Run the following PowerShell commands to list orphaned tiered files: -```powershell -Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" -$orphanFiles = Get-StorageSyncOrphanedTieredFiles -path <server endpoint path> -$orphanFiles.OrphanedTieredFiles > OrphanTieredFiles.txt -``` -3. Use the OrphanTieredFiles.txt output file to identify orphaned tiered files on the server. -4. Overwrite the orphaned tiered files by copying the full file from the Azure file share to the Windows Server.
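
For Option 2, step 4 can also be scripted once the share is mounted. The following is a minimal sketch, not part of the agent: it assumes the Azure file share is mounted as Z:, that D:\ServerEndpoint is the server endpoint path, and that OrphanTieredFiles.txt contains full local paths. Adjust all three assumptions for your environment and test against a copy first.

```powershell
# Minimal sketch only. Assumptions (adjust for your environment):
#   - The Azure file share is mounted as Z:
#   - D:\ServerEndpoint is the server endpoint path
#   - OrphanTieredFiles.txt contains full local paths of the orphaned tiered files
$shareRoot    = 'Z:\'
$endpointRoot = 'D:\ServerEndpoint\'

Get-Content .\OrphanTieredFiles.txt | ForEach-Object {
    $localPath = $_.Trim()
    if (-not $localPath) { return }

    # Derive the matching path in the mounted Azure file share.
    $relativePath = $localPath.Substring($endpointRoot.Length)
    $sourcePath   = Join-Path $shareRoot $relativePath

    # Overwrite the orphaned tiered file with the full file from the share.
    Copy-Item -Path $sourcePath -Destination $localPath -Force
}
```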
--## How to troubleshoot files unexpectedly recalled on a server -Antivirus, backup, and other applications that read large numbers of files cause unintended recalls unless they respect the skip offline attribute and skip reading the content of those files. Skipping offline files for products that support this option helps avoid unintended recalls during operations like antivirus scans or backup jobs. --Consult with your software vendor to learn how to configure their solution to skip reading offline files. --Unintended recalls also might occur in other scenarios, like when you are browsing cloud-tiered files in File Explorer. This is likely to occur on Windows Server 2016 if the folder contains executable files. File Explorer was improved for Windows Server 2019 and later to better handle offline files. --> [!NOTE] ->Use Event ID 9059 in the Telemetry event log to determine which application(s) is causing recalls. This event provides application recall distribution for a server endpoint and is logged once an hour. --## Process exclusions for Azure File Sync --If you want to configure your antivirus or other applications to skip scanning for files accessed by Azure File Sync, configure the following process exclusions: --- C:\Program Files\Azure\StorageSyncAgent\AfsAutoUpdater.exe-- C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentLauncher.exe-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentHost.exe-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentManager.exe-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\MonAgentCore.exe-- C:\Program Files\Azure\StorageSyncAgent\MAAgent\Extensions\XSyncMonitoringExtension\AzureStorageSyncMonitor.exe--## TLS 1.2 required for Azure File Sync --You can view the TLS settings at your server by looking at the [registry settings](/windows-server/security/tls/tls-registry-settings). --If you're using a proxy, consult your proxy's documentation and ensure it is configured to use TLS 1.2. --## See also -- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)-- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)-- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)-- [Monitor Azure File Sync](file-sync-monitoring.md)-- [Troubleshoot Azure Files](../files/files-troubleshoot.md) |
storage | File Sync Troubleshoot Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-installation.md | - Title: Troubleshoot Azure File Sync agent installation and server registration -description: Troubleshoot common issues with installing the Azure File Sync agent and registering Windows Server with the Storage Sync Service. --- Previously updated : 7/28/2022-----# Troubleshoot Azure File Sync agent installation and server registration --After deploying the Storage Sync Service, the next steps in deploying Azure File Sync are installing the Azure File Sync agent and registering Windows Server with the Storage Sync Service. This article is designed to help you troubleshoot and resolve issues that you might encounter during these steps. --## Agent installation -<a id="agent-installation-failures"></a>**Troubleshoot agent installation failures** -If the Azure File Sync agent installation fails, locate the installation log file which is located in the agent installation directory. If the Azure File Sync agent is installed on the C: volume, the installation log file is located under C:\Program Files\Azure\StorageSyncAgent\InstallerLog. --> [!Note] -> If the Azure File Sync agent is installed from the command line and the /l\*v switch is used, the log file will be located in the path where the agent installation was executed. --The log file name for agent installations using the MSI package is AfsAgentInstall. The log file name for agent installations using the MSP package (update package) is AfsUpdater. --Once you have located the agent installation log file, open the file and search for the failure code at the end of the log. If you search for **error code 1603** or **sandbox**, you should be able to locate the error code. --Here is a snippet from an agent installation that failed: -``` -CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException -CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess -CAQuietExec64: Error 0x80070001: Command line returned an error. -CAQuietExec64: Error 0x80070001: QuietExec64 Failed -CAQuietExec64: Error 0x80070001: Failed in ExecCommon64 method -CustomAction SetRegPIIAclSettings returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) -Action ended 12:23:40: InstallExecute. Return value 3. -MSI (s) (0C:C8) [12:23:40:994]: Note: 1: 2265 2: 3: -2147287035 -``` --For this example, the agent installation failed with error code -2147287035 (ERROR_ACCESS_DENIED). --<a id="agent-installation-gpo"></a>**Agent installation fails with error: Storage Sync Agent Setup Wizard ended prematurely because of an error** --In the agent installation log, the following error is logged: --``` -CAQuietExec64: + CategoryInfo : SecurityError: (:) , PSSecurityException -CAQuietExec64: + FullyQualifiedErrorId : UnauthorizedAccess -CAQuietExec64: Error 0x80070001: Command line returned an error. -CAQuietExec64: Error 0x80070001: QuietExec64 Failed -CAQuietExec64: Error 0x80070001: Failed in ExecCommon64 method -CustomAction SetRegPIIAclSettings returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) -Action ended 12:23:40: InstallExecute. Return value 3. 
-MSI (s) (0C:C8) [12:23:40:994]: Note: 1: 2265 2: 3: -2147287035 -``` --This issue occurs if the [PowerShell execution policy](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) is configured using group policy and the policy setting is "Allow only signed scripts." All scripts included with the Azure File Sync agent are signed. The Azure File Sync agent installation fails because the installer is performing the script execution using the Bypass execution policy setting. --To resolve this issue, temporarily disable the [Turn on Script Execution](/powershell/module/microsoft.powershell.core/about/about_execution_policies#use-group-policy-to-manage-execution-policy) group policy setting on the server. Once the agent installation completes, the group policy setting can be re-enabled. --<a id="agent-installation-on-DC"></a>**Agent installation fails on Active Directory Domain Controller** --In the agent installation log, the following error is logged: --``` -CAQuietExec64: Error 0x80070001: Command line returned an error. -CAQuietExec64: Error 0x80070001: CAQuietExec64 Failed -CustomAction InstallHFSRequiredWindowsFeatures returned actual error code 1603 (note this may not be 100% accurate if translation happened inside sandbox) -Action ended 8:51:12: InstallExecute. Return value 3. -MSI (s) (EC:B4) [08:51:12:439]: Note: 1: 2265 2: 3: -2147287035 -``` --This issue occurs if you try to install the sync agent on an Active Directory domain controller where the PDC role owner is on a Windows Server 2008 R2 or below OS version. --To resolve, transfer the PDC role to another domain controller running Windows Server 2012 R2 or more recent, then install sync. --<a id="parameter-is-incorrect"></a>**Accessing a volume on Windows Server 2012 R2 fails with error: The parameter is incorrect** -After creating a server endpoint on Windows Server 2012 R2, the following error occurs when accessing the volume: --drive letter:\ is not accessible. -The parameter is incorrect. --To resolve this issue, install [KB2919355](https://support.microsoft.com/help/2919355/windows-rt-8-1-windows-8-1-windows-server-2012-r2-update-april-2014) and restart the server. If this update will not install because a later update is already installed, go to Windows Update, install the latest updates for Windows Server 2012 R2 and restart the server. --## Server registration --<a id="server-registration-missing-subscriptions"></a>**Server Registration does not list all Azure Subscriptions** -When registering a server using ServerRegistration.exe, subscriptions are missing when you click the Azure Subscription drop-down. --This issue occurs because ServerRegistration.exe will only retrieve subscriptions from the first five Azure AD tenants. --To increase the Server Registration tenant limit on the server, create a DWORD value called ServerRegistrationTenantLimit under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure\StorageSync with a value greater than 5. 
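
If you prefer to set this registry value from PowerShell instead of creating it manually, a minimal sketch follows. The key path and value name come from the paragraph above; the value 10 is just an example (any value greater than 5 works), and the sketch assumes the StorageSync key already exists from the agent installation.

```powershell
# Example only: raise the Server Registration tenant limit to 10 (must be greater than 5).
# Run from an elevated PowerShell session.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Azure\StorageSync' `
    -Name 'ServerRegistrationTenantLimit' `
    -Value 10 `
    -PropertyType DWord `
    -Force
```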
--You can also work around this issue by using the following PowerShell commands to register the server: --```powershell -Connect-AzAccount -Subscription "<guid>" -Tenant "<guid>" -Register-AzStorageSyncServer -ResourceGroupName "<your-resource-group-name>" -StorageSyncServiceName "<your-storage-sync-service-name>" -``` --<a id="server-registration-prerequisites"></a>**Server Registration displays the following message: "Pre-requisites are missing"** -This message appears if Az or AzureRM PowerShell module is not installed on PowerShell 5.1. --> [!Note] -> ServerRegistration.exe does not support PowerShell 6.x. You can use the Register-AzStorageSyncServer cmdlet on PowerShell 6.x to register the server. --To install the Az or AzureRM module on PowerShell 5.1, perform the following steps: --1. Type **powershell** from an elevated command prompt and hit enter. -2. Install the latest Az or AzureRM module by following the documentation: - - [Az module (requires .NET 4.7.2)](/powershell/azure/install-azure-powershell) - - [AzureRM module](https://go.microsoft.com/fwlink/?linkid=856959) -3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service. --<a id="server-already-registered"></a>**Server Registration displays the following message: "This server is already registered"** -- --This message appears if the server was previously registered with a Storage Sync Service. To unregister the server from the current Storage Sync Service and then register with a new Storage Sync Service, complete the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service). --If the server is not listed under **Registered servers** in the Storage Sync Service, on the server that you want to unregister, run the following PowerShell commands: --```powershell -Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll" -Reset-StorageSyncServer -``` --> [!Note] -> If the server is part of a cluster, use the Reset-StorageSyncServer -CleanClusterRegistration parameter to remove the server from the Azure File Sync cluster registration detail. --<a id="web-site-not-trusted"></a>**When I register a server, I see numerous "web site not trusted" responses. Why?** -This issue occurs when the **Enhanced Internet Explorer Security** policy is enabled during server registration. For more information about how to correctly disable the **Enhanced Internet Explorer Security** policy, see [Prepare Windows Server to use with Azure File Sync](file-sync-deployment-guide.md#prepare-windows-server-to-use-with-azure-file-sync) and [How to deploy Azure File Sync](file-sync-deployment-guide.md). --<a id="server-registration-missing"></a>**Server is not listed under registered servers in the Azure portal** -If a server is not listed under **Registered servers** for a Storage Sync Service: -1. Sign in to the server that you want to register. -2. Open File Explorer, and then go to the Storage Sync Agent installation directory (the default location is C:\Program Files\Azure\StorageSyncAgent). -3. Run ServerRegistration.exe, and complete the wizard to register the server with a Storage Sync Service. 
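
Related to the "Pre-requisites are missing" message above: before launching ServerRegistration.exe, you can confirm from Windows PowerShell 5.1 that an Az or AzureRM module is actually visible to it. This is a quick manual check, not an official diagnostic; the module names below are the core Az and AzureRM modules.

```powershell
# Run in Windows PowerShell 5.1 (ServerRegistration.exe doesn't support PowerShell 6.x).
$PSVersionTable.PSVersion

# List any Az or AzureRM modules visible to this PowerShell version.
Get-Module -ListAvailable -Name Az.Accounts, AzureRM.Profile |
    Select-Object Name, Version
```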
--## See also -- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)-- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)-- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)-- [Monitor Azure File Sync](file-sync-monitoring.md)-- [Troubleshoot Azure Files problems](../files/files-troubleshoot.md) |
storage | File Sync Troubleshoot Sync Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-errors.md | - Title: Troubleshoot sync health and errors in Azure File Sync -description: Troubleshoot common issues with monitoring sync health and resolving sync errors in an Azure File Sync deployment. --- Previously updated : 06/02/2023------# Troubleshoot Azure File Sync sync health and errors --This article is designed to help you troubleshoot and resolve common sync issues that you might encounter with your Azure File Sync deployment. --## Sync health --<a id="afs-change-detection"></a>**If I created a file directly in my Azure file share over SMB or through the portal, how long does it take for the file to sync to servers in the sync group?** -Changes made directly in the Azure file share are detected by the change detection job, which runs once every 24 hours. So it can take up to 24 hours, plus the time for the next sync session, before the file syncs to servers in the sync group. --<a id="serverendpoint-pending"></a>**Server endpoint health is in a pending state for several hours** -This issue is expected if you create a cloud endpoint and use an Azure file share that contains data. The cloud change enumeration job that scans for changes in the Azure file share must complete before files can sync between the cloud and server endpoints. The time to complete the job is dependent on the size of the namespace in the Azure file share. The server endpoint health should update once the change enumeration job completes. --To check the status of the cloud change enumeration job, go to the Cloud Endpoint properties in the portal; the status is provided in the Change Enumeration section. --### <a id="broken-sync"></a>How do I monitor sync health? -# [Portal](#tab/portal1) -Within each sync group, you can drill down into its individual server endpoints to see the status of the last completed sync sessions. A green Health column and a Files Not Syncing value of 0 indicate that sync is working as expected. If not, see below for a list of common sync errors and how to handle files that aren't syncing. -- --# [Server](#tab/server) -Go to the server's telemetry logs, which can be found in the Event Viewer at `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`. Event 9102 corresponds to a completed sync session; for the latest status of sync, look for the most recent event with ID 9102. `SyncDirection` tells you if this session was an upload or download. If the `HResult` is 0, then the sync session was successful. A non-zero `HResult` means that there was an error during sync; see below for a list of common errors. If the `PerItemErrorCount` is greater than 0, then some files or folders didn't sync properly. It's possible to have an `HResult` of 0 but a `PerItemErrorCount` that is greater than 0. --Below is an example of a successful upload. For the sake of brevity, only some of the values contained in each 9102 event are listed. --``` -Replica Sync session completed. -SyncDirection: Upload, -HResult: 0, -SyncFileCount: 2, SyncDirectoryCount: 0, -AppliedFileCount: 2, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0. -PerItemErrorCount: 0, -TransferredFiles: 2, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0. -``` --Conversely, an unsuccessful upload might look like this: --``` -Replica Sync session completed. -SyncDirection: Upload, -HResult: -2134364065, -SyncFileCount: 0, SyncDirectoryCount: 0, -AppliedFileCount: 0, AppliedDirCount: 0, AppliedTombstoneCount 0, AppliedSizeBytes: 0. -PerItemErrorCount: 0, -TransferredFiles: 0, TransferredBytes: 0, FailedToTransferFiles: 0, FailedToTransferBytes: 0. 
-``` --Sometimes sync sessions fail overall or have a non-zero `PerItemErrorCount` but still make forward progress, with some files syncing successfully. Progress can be determined by looking into the *Applied* fields (`AppliedFileCount`, `AppliedDirCount`, `AppliedTombstoneCount`, and `AppliedSizeBytes`). These fields describe how much of the session is succeeding. If you see multiple sync sessions in a row that are failing but have an increasing *Applied* count, then you should give sync time to try again before opening a support ticket. ----### How do I monitor the progress of a current sync session? -# [Portal](#tab/portal1) -Within your sync group, go to the server endpoint in question and look at the Sync Activity section to see the count of files uploaded or downloaded in the current sync session. Keep in mind that this status will be delayed by about 5 minutes. If your sync session is small enough to be completed within this period, it might not be reported in the portal. --# [Server](#tab/server) -Look at the most recent 9302 event in the telemetry log on the server (in the Event Viewer, go to Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry). This event indicates the state of the current sync session. `TotalItemCount` denotes how many files are to be synced, `AppliedItemCount` the number of files that have been synced so far, and `PerItemErrorCount` the number of files that are failing to sync (see below for how to deal with this). --``` -Replica Sync Progress. -ServerEndpointName: <CI>sename</CI>, SyncGroupName: <CI>sgname</CI>, ReplicaName: <CI>rname</CI>, -SyncDirection: Upload, CorrelationId: {AB4BA07D-5B5C-461D-AAE6-4ED724762B65}. -AppliedItemCount: 172473, TotalItemCount: 624196. AppliedBytes: 51473711577, -TotalBytes: 293363829906. -AreTotalCountsFinal: true. -PerItemErrorCount: 1006. -``` ---### How do I know if my servers are in sync with each other? -# [Portal](#tab/portal1) -For each server in a given sync group, make sure: -- The timestamps for the Last Attempted Sync for both upload and download are recent.-- The status is green for both upload and download.-- The Sync Activity field shows very few or no files remaining to sync.-- The Files Not Syncing field is 0 for both upload and download.--# [Server](#tab/server) -Look at the completed sync sessions, which are marked by 9102 events in the telemetry event log for each server (in the Event Viewer, go to `Applications and Services Logs\Microsoft\FileSync\Agent\Telemetry`). --1. On any given server, you want to make sure the latest upload and download sessions completed successfully. To do this, check that the `HResult` and PerItemErrorCount are 0 for both upload and download (the SyncDirection field indicates if a given session is an upload or download session). Note that if you do not see a recently completed sync session, it is likely a sync session is currently in progress, which is to be expected if you just added or modified a large amount of data. -2. When a server is fully up to date with the cloud and has no changes to sync in either direction, you will see empty sync sessions. These are indicated by upload and download events in which all the Sync* fields (`SyncFileCount`, `SyncDirCount`, `SyncTombstoneCount`, and `SyncSizeBytes`) are zero, meaning there was nothing to sync. Note that these empty sync sessions might not occur on high-churn servers as there is always something new to sync. If there is no sync activity, they should occur every 30 minutes. -3. 
If all servers are up to date with the cloud, meaning their recent upload and download sessions are empty sync sessions, you can say with reasonable certainty that the system as a whole is in sync. - -If you made changes directly in your Azure file share, Azure File Sync will not detect these changes until change enumeration runs, which happens once every 24 hours. It's possible that a server will say it is up to date with the cloud when it is in fact missing recent changes made directly in the Azure file share. ----### How do I see if there are specific files or folders that are not syncing? -If your `PerItemErrorCount` on the server or Files Not Syncing count in the portal are greater than 0 for any given sync session, that means some items are failing to sync. Files and folders can have characteristics that prevent them from syncing. These characteristics can be persistent and require explicit action to resume sync, for example removing unsupported characters from the file or folder name. They can also be transient, meaning the file or folder will automatically resume sync; for example, files with open handles will automatically resume sync when the file is closed. When the Azure File Sync engine detects such a problem, an error log is produced that can be parsed to list the items currently not syncing properly. --To see these errors, run the **FileSyncErrorsReport.ps1** PowerShell script (located in the agent installation directory of the Azure File Sync agent) to identify files that failed to sync because of open handles, unsupported characters, or other issues. The `ItemPath` field tells you the location of the file in relation to the root sync directory. See the list of common sync errors for remediation steps. --> [!Note] -> If the FileSyncErrorsReport.ps1 script returns "There were no file errors found" or doesn't list per-item errors for the sync group, the cause is either: -> ->- Cause 1: The last completed sync session didn't have per-item errors. The portal should be updated soon to show 0 Files Not Syncing. By default, the FileSyncErrorsReport.ps1 script will only show per-item errors for the last completed sync session. To view per-item errors for all sync sessions, use the `-ReportAllErrors` parameter. -> - Check the most recent [Event ID 9102](?tabs=server%252cazure-portal#broken-sync) in the Telemetry event log to confirm the `PerItemErrorCount` is 0. -> ->- Cause 2: The `ItemResults` event log on the server wrapped due to too many per-item errors and the event log no longer contains errors for this sync group. -> - To prevent this issue, increase the `ItemResults` event log size. The `ItemResults` event log can be found under "Applications and Services Logs\Microsoft\FileSync\Agent" in Event Viewer. --## Sync errors --### Troubleshooting per file/directory sync errors -**ItemResults log - per-item sync errors** --| HRESULT | HRESULT (decimal) | Error string | Issue | Remediation | -||-|--|-|-| -| 0x80070043 | -2147942467 | ERROR_BAD_NET_NAME | The tiered file on the server isn't accessible. This issue occurs if the tiered file was not recalled prior to deleting a server endpoint. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). | -| 0x80c80207 | -2134375929 | ECS_E_SYNC_CONSTRAINT_CONFLICT | The file or directory change can't be synced yet because a dependent folder isn't yet synced. 
This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder isn't yet synced. | -| 0x80C8028A | -2134375798 | ECS_E_SYNC_CONSTRAINT_CONFLICT_ON_FAILED_DEPENDEE | The file or directory change can't be synced yet because a dependent folder isn't yet synced. This item will sync after the dependent changes are synced. | No action required. If the error persists for several days, use the FileSyncErrorsReport.ps1 PowerShell script to determine why the dependent folder isn't yet synced. | -| 0x80c80284 | -2134375804 | ECS_E_SYNC_CONSTRAINT_CONFLICT_SESSION_FAILED | The file or directory change can't be synced yet because a dependent folder isn't yet synced and the sync session failed. This item will sync after the dependent changes are synced. | No action required. If the error persists, investigate the sync session failure. | -| 0x8007007b | -2147024773 | ERROR_INVALID_NAME | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. | -| 0x80c80255 | -2134375851 | ECS_E_XSMB_REST_INCOMPATIBILITY | The file or directory name is invalid. | Rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. | -| 0x80c80018 | -2134376424 | ECS_E_SYNC_FILE_IN_USE | The file can't be synced because it's in use. The file will be synced when it's no longer in use. | No action required. Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. | -| 0x80c8031d | -2134375651 | ECS_E_CONCURRENCY_CHECK_FAILED | The file has changed, but the change hasn't yet been detected by sync. Sync will recover after this change is detected. | No action required. | -| 0x80070002 | -2147024894 | ERROR_FILE_NOT_FOUND | The file was deleted and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection detects the file was deleted. | -| 0x80070003 | -2147024893 | ERROR_PATH_NOT_FOUND | Deletion of a file or directory can't be synced because the item was already deleted in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync detects the item was deleted. | -| 0x80c80205 | -2134375931 | ECS_E_SYNC_ITEM_SKIP | The file or directory was skipped but will be synced during the next sync session. If this error is reported when downloading the item, the file or directory name is more than likely invalid. | No action required if this error is reported when uploading the file. If the error is reported when downloading the file, rename the file or directory in question. See [Handling unsupported characters](?tabs=portal1%252cazure-portal#handling-unsupported-characters) for more information. | -| 0x800700B7 | -2147024713 | ERROR_ALREADY_EXISTS | Creation of a file or directory can't be synced because the item already exists in the destination and sync isn't aware of the change. | No action required. Sync will stop logging this error once change detection runs on the destination and sync is aware of this new item. 
| -| 0x80c8603e | -2134351810 | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | The file can't be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. | -| 0x80c83008 | -2134364152 | ECS_E_CANNOT_CREATE_AZURE_STAGED_FILE | The file can't be synced because the Azure file share limit is reached. | To resolve this issue, see [You reached the Azure file share storage limit](?tabs=portal1%252cazure-portal#-2134351810) section in the troubleshooting guide. | -| 0x80c8027C | -2134375812 | ECS_E_ACCESS_DENIED_EFS | The file is encrypted by an unsupported solution (like NTFS EFS). | Decrypt the file and use a supported encryption solution. For a list of support solutions, see the [Encryption](file-sync-planning.md#encryption) section of the planning guide. | -| 0x80c80283 | -2160591491 | ECS_E_ACCESS_DENIED_DFSRRO | The file is located on a DFS-R read-only replication folder. | File is located on a DFS-R read-only replication folder. Azure File Sync doesn't support server endpoints on DFS-R read-only replication folders. See [planning guide](file-sync-planning.md#distributed-file-system-dfs) for more information. | -| 0x80070005 | -2147024891 | ERROR_ACCESS_DENIED | The file has a delete pending state. | No action required. File will be deleted once all open file handles are closed. | -| 0x80c86044 | -2134351804 | ECS_E_AZURE_AUTHORIZATION_FAILED | The file can't be synced because the firewall and virtual network settings on the storage account are enabled, and the server doesn't have access to the storage account. | Add the Server IP address or virtual network by following the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. | -| 0x80c80243 | -2134375869 | ECS_E_SECURITY_DESCRIPTOR_SIZE_TOO_LARGE | The file can't be synced because the security descriptor size exceeds the 64 KiB limit. | To resolve this issue, remove access control entries (ACE) on the file to reduce the security descriptor size. | -| 0x8000ffff | -2147418113 | E_UNEXPECTED | The file can't be synced due to an unexpected error. | If the error persists for several days, please open a support case. | -| 0x80070020 | -2147024864 | ERROR_SHARING_VIOLATION | The file can't be synced because it's in use. The file will be synced when it's no longer in use. | No action required. | -| 0x80c80017 | -2134376425 | ECS_E_SYNC_OPLOCK_BROKEN | The file was changed during sync, so it needs to be synced again. | No action required. | -| 0x80070017 | -2147024873 | ERROR_CRC | The file can't be synced due to CRC error. This error can occur if a tiered file was not recalled prior to deleting a server endpoint or if the file is corrupt. | To resolve this issue, see [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) to remove tiered files that are orphaned. If the error continues to occur after removing orphaned tiered files, run [chkdsk](/windows-server/administration/windows-commands/chkdsk) on the volume. | -| 0x80c80200 | -2134375936 | ECS_E_SYNC_CONFLICT_NAME_EXISTS | The file can't be synced because the maximum number of conflict files has been reached. 
Azure File Sync supports 100 conflict files per file. To learn more about file conflicts, see Azure File Sync [FAQ](../files/storage-files-faq.md?toc=/azure/storage/filesync/toc.json#afs-conflict-resolution). | To resolve this issue, reduce the number of conflict files. The file will sync once the number of conflict files is less than 100. | -| 0x80c8027d | -2134375811 | ECS_E_DIRECTORY_RENAME_FAILED | Rename of a directory can't be synced because files or folders within the directory have open handles. | No action required. The rename of the directory will be synced once all open file handles within the directory are closed. | -| 0x800700de | -2147024674 | ERROR_BAD_FILE_TYPE | The tiered file on the server isn't accessible because it's referencing a version of the file which no longer exists in the Azure file share. | This issue can occur if the tiered file was restored from a backup of the Windows Server. To resolve this issue, restore the file from a snapshot in the Azure file share. | -| 0x80C80065 | -2134376347 | ECS_E_DATA_TRANSFER_BLOCKED | The file has been identified to produce persistent errors during sync. Hence it is blocked from sync until the retry interval is reached. The file will be retried later. | No action required. The file will be retried after 24 hours. If the error persists for several days, create a support request. | -| 0x80C80203 | -2134375933 | ECS_E_SYNC_INVALID_STAGED_FILE | File transfer error. Service will retry later. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80c8027f | -2134375809 | ECS_E_SYNC_CONSTRAINT_CONFLICT_CYCLIC_DEPENDENCY | Sync session timeout error. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80070035 | -2147024843 | ERROR_BAD_NETPATH | The network path was not found. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80071779 | -2147018887 | ERROR_FILE_READ_ONLY | The specified file is read only. | If the error persists for more than a day, create a support request. | -| 0x80070006 | -2147024890 | ERROR_INVALID_HANDLE | An internal error occurred. | If the error persists for more than a day, create a support request. | -| 0x8007012f | -2147024593 | ERROR_DELETE_PENDING | The file cannot be opened because it is in the process of being deleted. | No action required. This error should automatically resolve. If the error persists for several days, create a support request. | -| 0x80041007 | -2147217401 | SYNC_E_ITEM_MUST_EXIST | An internal error occurred. | If the error persists for more than a day, create a support request. | ---### Handling unsupported characters -If the **FileSyncErrorsReport.ps1** PowerShell script shows per-item sync errors due to unsupported characters (error code 0x8007007b or 0x80c80255), you should remove or rename the characters at fault from the respective file names. PowerShell will likely print these characters as question marks or empty rectangles since most of these characters have no standard visual encoding. -> [!Note] -> The [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) can be used to identify characters that are not supported. 
If your dataset has several files with invalid characters, use the [ScanUnsupportedChars](https://github.com/Azure-Samples/azure-files-samples/tree/master/ScanUnsupportedChars) script to rename files which contain unsupported characters. --The table below contains all of the unicode characters Azure File Sync does not yet support. --| Character set | Character count | -||--| -| 0x00000000 - 0x0000001F (control characters) | 32 | -| 0x0000FDD0 - 0x0000FDDD (arabic presentation forms-a) | 14 | -| <ul><li>0x00000022 (quotation mark)</li><li>0x0000002A (asterisk)</li><li>0x0000002F (forward slash)</li><li>0x0000003A (colon)</li><li>0x0000003C (less than)</li><li>0x0000003E (greater than)</li><li>0x0000003F (question mark)</li><li>0x0000005C (backslash)</li><li>0x0000007C (pipe or bar)</li></ul> | 9 | -| <ul><li>0x0004FFFE - 0x0004FFFF = 2 (noncharacter)</li><li>0x0008FFFE - 0x0008FFFF = 2 (noncharacter)</li><li>0x000CFFFE - 0x000CFFFF = 2 (noncharacter)</li><li>0x0010FFFE - 0x0010FFFF = 2 (noncharacter)</li></ul> | 8 | -| <ul><li>0x0000009D (`osc` operating system command)</li><li>0x00000090 (dcs device control string)</li><li>0x0000008F (ss3 single shift three)</li><li>0x00000081 (high octet preset)</li><li>0x0000007F (del delete)</li><li>0x0000008D (ri reverse line feed)</li></ul> | 6 | -| 0x0000FFF0, 0x0000FFFD, 0x0000FFFE, 0x0000FFFF (specials) | 4 | -| Files or directories that end with a period | 1 | --### Common sync errors -<a id="-2147023673"></a>**The sync session was canceled.** --| Error | Code | -|-|-| -| **HRESULT** | 0x800704c7 | -| **HRESULT (decimal)** | -2147023673 | -| **Error string** | ERROR_CANCELLED | -| **Remediation required** | No | --Sync sessions might fail for various reasons including the server being restarted or updated, VSS snapshots, etc. Although this error looks like it requires follow-up, it's safe to ignore this error unless it persists over a period of several hours. --<a id="-2134375780"></a>**The file sync session was cancelled by the volume snapshot sync session that runs once a day to sync files with open handles.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8029c | -| **HRESULT (decimal)** | -2134375780 | -| **Error string** | ECS_E_SYNC_CANCELLED_BY_VSS | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for more than a day, create a support request. --<a id="-2147012889"></a>**A connection with the service could not be established.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80072ee7 | -| **HRESULT (decimal)** | -2147012889 | -| **Error string** | WININET_E_NAME_NOT_RESOLVED | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83081 | -| **HRESULT (decimal)** | -2134364031 | -| **Error string** | ECS_E_HTTP_CLIENT_CONNECTION_ERROR | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8309a | -| **HRESULT (decimal)** | -2134364006 | -| **Error string** | ECS_E_AZURE_STORAGE_REMOTE_NAME_NOT_RESOLVED | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0xc00000c4 | -| **HRESULT (decimal)** | -1073741628 | -| **Error string** | UNEXPECTED_NETWORK_ERROR | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80072ee2 | -| **HRESULT (decimal)** | -2147012894 | -| **Error string** | WININET_E_TIMEOUT | -| **Remediation required** | Yes | ---> [!Note] -> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. 
By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location. --<a id="-2134376372"></a>**The user request was throttled by the service.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8004c | -| **HRESULT (decimal)** | -2134376372 | -| **Error string** | ECS_E_USER_REQUEST_THROTTLED | -| **Remediation required** | No | --No action is required; the server will try again. If this error persists for several hours, create a support request. --<a id="-2134364160"></a>**Sync failed because the operation was aborted** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83000 | -| **HRESULT (decimal)** | -2134364160 | -| **Error string** | ECS_E_OPERATION_ABORTED | -| **Remediation required** | No | --No action is required. If this error persists for several hours, create a support request. --<a id="-2134364019"></a>**The operation was cancelled.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8308d | -| **HRESULT (decimal)** | -2134364019 | -| **Error string** | ECS_E_REQUEST_CANCELLED_EXTERNALLY | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x8013153b | -| **HRESULT (decimal)** | -2146233029 | -| **Error string** | COR_E_OPERATIONCANCELED | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2134364043"></a>**Sync is blocked until change detection completes post restore** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83075 | -| **HRESULT (decimal)** | -2134364043 | -| **Error string** | ECS_E_SYNC_BLOCKED_ON_CHANGE_DETECTION_POST_RESTORE | -| **Remediation required** | No | --No action is required. When a file or file share (cloud endpoint) is restored using Azure Backup, sync is blocked until change detection completes on the Azure file share. Change detection runs immediately once the restore is complete and the duration is based on the number of files in the file share. --<a id="-2134364072"></a>**Sync is blocked on the folder due to a pause initiated as part of restore on sync folder.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83058 | -| **HRESULT (decimal)** | -2134364072 | -| **Error string** | ECS_E_SYNC_BLOCKED_ON_RESTORE | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2147216747"></a>**Sync failed because the sync database was unloaded.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80041295 | -| **HRESULT (decimal)** | -2147216747 | -| **Error string** | SYNC_E_METADATA_INVALID_OPERATION | -| **Remediation required** | No | --This error typically occurs when a backup application creates a VSS snapshot and the sync database is unloaded. If this error persists for several hours, create a support request. 
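Several of the errors in this section clear on their own once the next sync session runs. As the note above mentions, you can force a sync session by restarting the Storage Sync Agent service or by touching a file in the server endpoint. A minimal sketch, run in an elevated PowerShell session on the registered server (the path is a placeholder):

```powershell
# Option 1: restart the Storage Sync Agent service to trigger a new sync session.
Restart-Service -Name FileSyncSvc -Force

# Option 2: create and remove a temporary file in the server endpoint so
# change detection picks up a change (placeholder path).
$trigger = 'D:\ServerEndpoint\_synctrigger.tmp'
New-Item -Path $trigger -ItemType File -Force | Out-Null
Remove-Item -Path $trigger
```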
--<a id="-2134364065"></a>**Sync can't access the Azure file share specified in the cloud endpoint.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8305f | -| **HRESULT (decimal)** | -2134364065 | -| **Error string** | ECS_E_EXTERNAL_STORAGE_ACCOUNT_AUTHORIZATION_FAILED | -| **Remediation required** | Yes | --This error occurs because the Azure File Sync agent can't access the Azure file share, which might be because the Azure file share or the storage account hosting it no longer exists. You can troubleshoot this error by working through the following steps: --1. [Verify the storage account exists.](#troubleshoot-storage-account) -2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share) -3. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac) -4. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) --<a id="-2134351804"></a>**Sync failed because the request isn't authorized to perform this operation.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c86044 | -| **HRESULT (decimal)** | -2134351804 | -| **Error string** | ECS_E_AZURE_AUTHORIZATION_FAILED | -| **Remediation required** | Yes | --This error occurs because the Azure File Sync agent isn't authorized to access the Azure file share. You can troubleshoot this error by working through the following steps: --1. [Verify the storage account exists.](#troubleshoot-storage-account) -2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share) -3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) -4. [Ensure Azure File Sync has access to the storage account.](#troubleshoot-rbac) --<a id="-2134364064"></a><a id="cannot-resolve-storage"></a>**The storage account name used could not be resolved.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80C83060 | -| **HRESULT (decimal)** | -2134364064 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_NAME_UNRESOLVED | -| **Remediation required** | Yes | --1. Check that you can resolve the storage DNS name from the server. -- ```powershell - Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 443 - ``` -2. [Verify the storage account exists.](#troubleshoot-storage-account) -3. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) --> [!Note] -> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location. --<a id="-2134364022"></a><a id="storage-unknown-error"></a>**An unknown error occurred while accessing the storage account.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8308a | -| **HRESULT (decimal)** | -2134364022 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_UNKNOWN_ERROR | -| **Remediation required** | Yes | --1. [Verify the storage account exists.](#troubleshoot-storage-account) -2. 
[Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) --<a id="-2134364014"></a>**Sync failed due to storage account locked.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83092 | -| **HRESULT (decimal)** | -2134364014 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_LOCKED | -| **Remediation required** | Yes | --This error occurs because the storage account has a read-only [resource lock](../../azure-resource-manager/management/lock-resources.md). To resolve this issue, remove the read-only resource lock on the storage account. --<a id="-1906441138"></a>**Sync failed due to a problem with the sync database.** --| Error | Code | -|-|-| -| **HRESULT** | 0x8e5e044e | -| **HRESULT (decimal)** | -1906441138 | -| **Error string** | JET_errWriteConflict | -| **Remediation required** | Yes | --This error occurs when there is a problem with the internal database used by Azure File Sync. When this issue occurs, create a support request and we will contact you to help you resolve this issue. --<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server isn't supported.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80C8306B | -| **HRESULT (decimal)** | -2134364053 | -| **Error string** | ECS_E_AGENT_VERSION_BLOCKED | -| **Remediation required** | Yes | --This error occurs if the Azure File Sync agent version installed on the server isn't supported. To resolve this issue, [upgrade](file-sync-release-notes.md#azure-file-sync-agent-update-policy) to a [supported agent version](file-sync-release-notes.md#supported-versions). --<a id="-2134351810"></a>**You reached the Azure file share storage limit.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8603e | -| **HRESULT (decimal)** | -2134351810 | -| **Error string** | ECS_E_AZURE_STORAGE_SHARE_SIZE_LIMIT_REACHED | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80249 | -| **HRESULT (decimal)** | -2134375863 | -| **Error string** | ECS_E_NOT_ENOUGH_REMOTE_STORAGE | -| **Remediation required** | Yes | --Sync sessions fail with either of these errors when the Azure file share storage limit has been reached, which can happen if a quota is applied for an Azure file share or if the usage exceeds the limits for an Azure file share. For more information, see the [current limits for an Azure file share](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json). --1. Navigate to the sync group within the Storage Sync Service. -2. Select the cloud endpoint within the sync group. -3. Note the Azure file share name in the opened pane. -4. Select the linked storage account. If this link fails, the referenced storage account has been removed. --  --5. Select **Files** to view the list of file shares. -6. Click the three dots at the end of the row for the Azure file share referenced by the cloud endpoint. -7. Verify that the **Usage** is below the **Quota**. Note unless an alternate quota has been specified, the quota will match the [maximum size of the Azure file share](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json). --  --If the share is full and a quota isn't set, one possible way of fixing this issue is to make each subfolder of the current server endpoint into its own server endpoint in their own separate sync groups. This way each subfolder will sync to individual Azure file shares. 
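If you prefer to compare usage against the quota from PowerShell rather than the portal, a sketch like the following can help, assuming the Az.Storage module is installed and using placeholder resource names:

```powershell
# Placeholders: substitute your resource group, storage account, and file share names.
$share = Get-AzRmStorageShare -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name "myfileshare" `
    -GetShareUsage

# QuotaGiB is the configured quota; ShareUsageBytes is the current usage.
"{0:N2} GiB used of {1} GiB quota" -f ($share.ShareUsageBytes / 1GB), $share.QuotaGiB
```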
--<a id="-2134351824"></a>**The Azure file share cannot be found.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c86030 | -| **HRESULT (decimal)** | -2134351824 | -| **Error string** | ECS_E_AZURE_FILE_SHARE_NOT_FOUND | -| **Remediation required** | Yes | --This error occurs when the Azure file share isn't accessible. To troubleshoot: --1. [Verify the storage account exists.](#troubleshoot-storage-account) -2. [Ensure the Azure file share exists.](#troubleshoot-azure-file-share) -3. Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). --If the Azure file share was deleted, you need to create a new file share and then recreate the sync group. --<a id="-2134364042"></a>**Sync is paused while this Azure subscription is suspended.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80C83076 | -| **HRESULT (decimal)** | -2134364042 | -| **Error string** | ECS_E_SYNC_BLOCKED_ON_SUSPENDED_SUBSCRIPTION | -| **Remediation required** | Yes | --This error occurs when the Azure subscription is suspended. Sync will be reenabled when the Azure subscription is restored. See [Why is my Azure subscription disabled and how do I reactivate it?](../../cost-management-billing/manage/subscription-disabled.md) for more information. --<a id="-2134375618"></a>**The storage account has a firewall or virtual networks configured.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8033e | -| **HRESULT (decimal)** | -2134375618 | -| **Error string** | ECS_E_SERVER_BLOCKED_BY_NETWORK_ACL | -| **Remediation required** | Yes | --This error occurs when the Azure file share is inaccessible because of a storage account firewall or because the storage account belongs to a virtual network. Verify the firewall and virtual network settings on the storage account are configured properly. For more information, see [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings). --<a id="-2134375911"></a>**Sync failed due to a problem with the sync database.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80219 | -| **HRESULT (decimal)** | -2134375911 | -| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83044 | -| **HRESULT (decimal)** | -2134364092 | -| **Error string** | ECS_E_SYNC_METADATA_WRITE_LOCK_TIMEOUT_SERVICEUNAVAILABLE | -| **Remediation required** | No | --These errors usually resolve themselves and can occur if there are: --* A high number of file changes across the servers in the sync group. -* A large number of errors on individual files and directories. --If this error persists for longer than a few hours, create a support request and we will contact you to help you resolve this issue. --<a id="-2134375905"></a>**The sync database has encountered a storage busy IO error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8021f | -| **HRESULT (decimal)** | -2134375905 | -| **Error string** | ECS_E_SYNC_METADATA_IO_BUSY | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. 
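For the firewall and virtual network error above (ECS_E_SERVER_BLOCKED_BY_NETWORK_ACL), you can also inspect the storage account's network rules from PowerShell. A sketch, assuming the Az.Storage module and placeholder names:

```powershell
# Placeholders: substitute your resource group and storage account names.
$rules = Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount"

# DefaultAction of Deny means the firewall is enabled; Bypass should include
# AzureServices if "Allow trusted Microsoft services" is checked.
$rules | Select-Object DefaultAction, Bypass
$rules.IpRules
$rules.VirtualNetworkRules
```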
--<a id="-2134375906"></a>**The sync database has encountered an IO timeout.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8021e | -| **HRESULT (decimal)** | -2134375906 | -| **Error string** | ECS_E_SYNC_METADATA_IO_TIMEOUT | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2134375904"></a>**The sync database has encountered an IO error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80220 | -| **HRESULT (decimal)** | -2134375904 | -| **Error string** | ECS_E_SYNC_METADATA_IO_ERROR | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2146762487"></a>**The server failed to establish a secure connection. The cloud service received an unexpected certificate.** --| Error | Code | -|-|-| -| **HRESULT** | 0x800b0109 | -| **HRESULT (decimal)** | -2146762487 | -| **Error string** | CERT_E_UNTRUSTEDROOT | -| **Remediation required** | Yes | --This error can happen if your organization is using a TLS terminating proxy or if a malicious entity is intercepting the traffic between your server and the Azure File Sync service. If you're certain that this is expected (because your organization is using a TLS terminating proxy), you can skip certificate verification with a registry override. --1. Create the SkipVerifyingPinnedRootCertificate registry value. -- ```powershell - New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\Azure\StorageSync -Name SkipVerifyingPinnedRootCertificate -PropertyType DWORD -Value 1 - ``` --2. Restart the sync service on the registered server. -- ```powershell - Restart-Service -Name FileSyncSvc -Force - ``` --By setting this registry value, the Azure File Sync agent will accept any locally trusted TLS/SSL certificate when transferring data between the server and the cloud service. --<a id="-2147012721"></a>**Sync failed because the server was unable to decode the response from the Azure File Sync service** --| Error | Code | -|-|-| -| **HRESULT** | 0x80072f8f | -| **HRESULT (decimal)** | -2147012721 | -| **Error string** | WININET_E_DECODING_FAILED | -| **Remediation required** | Yes | --This error typically occurs if a network proxy is modifying the response from the Azure File Sync service. Please check your proxy configuration. --<a id="-2134375680"></a>**Sync failed due to a problem with authentication.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80300 | -| **HRESULT (decimal)** | -2134375680 | -| **Error string** | ECS_E_SERVER_CREDENTIAL_NEEDED | -| **Remediation required** | Yes | --This error typically occurs because the server time is incorrect. If the server is running in a virtual machine, verify the time on the host is correct. --<a id="-2134364040"></a>**Sync failed due to certificate expiration.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83078 | -| **HRESULT (decimal)** | -2134364040 | -| **Error string** | ECS_E_AUTH_SRV_CERT_EXPIRED | -| **Remediation required** | Yes | --This error occurs because the certificate used for authentication is expired. --To confirm the certificate is expired, perform the following steps: -1. Open the Certificates MMC snap-in, select Computer Account and navigate to Certificates (Local Computer)\Personal\Certificates. -2. Check if the client authentication certificate is expired (a PowerShell sketch for checking this follows below).
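If you'd rather check from PowerShell than the MMC snap-in, a simple sketch that lists the certificates in the local computer's Personal store together with their expiration dates. It doesn't identify the Azure File Sync client authentication certificate by name; it just surfaces anything that has expired for you to review.

```powershell
# List certificates in the local computer Personal store with expiration status.
Get-ChildItem -Path Cert:\LocalMachine\My |
    Select-Object Subject, Thumbprint, NotAfter,
        @{ Name = 'Expired'; Expression = { $_.NotAfter -lt (Get-Date) } } |
    Sort-Object NotAfter |
    Format-Table -AutoSize
```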
--If the client authentication certificate is expired, run the following PowerShell command on the server: --```powershell -Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string> -``` -<a id="-2134375896"></a>**Sync failed due to authentication certificate not found.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80228 | -| **HRESULT (decimal)** | -2134375896 | -| **Error string** | ECS_E_AUTH_SRV_CERT_NOT_FOUND | -| **Remediation required** | Yes | --This error occurs because the certificate used for authentication isn't found. --To resolve this issue, run the following PowerShell command on the server: --```powershell -Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string> -``` -<a id="-2134364039"></a>**Sync failed due to authentication identity not found.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83079 | -| **HRESULT (decimal)** | -2134364039 | -| **Error string** | ECS_E_AUTH_IDENTITY_NOT_FOUND | -| **Remediation required** | Yes | --This error occurs because the server endpoint deletion failed and the endpoint is now in a partially deleted state. To resolve this issue, retry deleting the server endpoint. --<a id="-1906441711"></a><a id="-2134375654"></a><a id="doesnt-have-enough-free-space"></a>**The volume where the server endpoint is located is low on disk space.** --| Error | Code | -|-|-| -| **HRESULT** | 0x8e5e0211 | -| **HRESULT (decimal)** | -1906441711 | -| **Error string** | JET_errLogDiskFull | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8031a | -| **HRESULT (decimal)** | -2134375654 | -| **Error string** | ECS_E_NOT_ENOUGH_LOCAL_STORAGE | -| **Remediation required** | Yes | --Sync sessions fail with one of these errors because either the volume has insufficient disk space or disk quota limit is reached. This error commonly occurs because files outside the server endpoint are using up space on the volume. Check the available disk space on the server. You can free up space on the volume by adding additional server endpoints, moving files to a different volume, or increasing the size of the volume the server endpoint is on. If a disk quota is configured on the volume using [File Server Resource Manager](/windows-server/storage/fsrm/fsrm-overview) or [NTFS quota](/windows-server/administration/windows-commands/fsutil-quota), increase the quota limit. --If cloud tiering is enabled for the server endpoint, verify the files are syncing to the Azure file share to avoid running out of disk space. --<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service isn't yet ready to sync with this server endpoint.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8300f | -| **HRESULT (decimal)** | -2134364145 | -| **Error string** | ECS_E_REPLICA_NOT_READY | -| **Remediation required** | No | --This error occurs because the cloud endpoint was created with content already existing on the Azure file share. Azure File Sync must scan the Azure file share for all content before allowing the server endpoint to proceed with its initial synchronization. Once change detection completes on the Azure file share, sync will commence. Change detection can take longer than 24 hours to complete, and is proportional to the number of files and directories on your Azure file share. If cloud tiering is configured, files will be tiered after sync completes. 
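For the low disk space errors above (JET_errLogDiskFull and ECS_E_NOT_ENOUGH_LOCAL_STORAGE), a quick way to check free space on the volume hosting the server endpoint is a sketch like the following (the drive letter is a placeholder):

```powershell
# Report free space on the volume that hosts the server endpoint (placeholder drive letter).
Get-Volume -DriveLetter D |
    Select-Object DriveLetter, FileSystemLabel,
        @{ Name = 'FreeGB';  Expression = { [math]::Round($_.SizeRemaining / 1GB, 2) } },
        @{ Name = 'TotalGB'; Expression = { [math]::Round($_.Size / 1GB, 2) } }
```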
--<a id="-2134375877"></a><a id="-2134375908"></a><a id="-2134375853"></a>**Sync failed due to problems with many individual files.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8023b | -| **HRESULT (decimal)** | -2134375877 | -| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8021c | -| **HRESULT (decimal)** | -2134375908 | -| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED | -| **Remediation required** | Yes | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80253 | -| **HRESULT (decimal)** | -2134375853 | -| **Error string** | ECS_E_TOO_MANY_PER_ITEM_ERRORS | -| **Remediation required** | Yes | --Sync sessions fail with one of these errors when there are many files that are failing to sync with per-item errors. Perform the steps documented in the [How do I see if there are specific files or folders that are not syncing?](?tabs=portal1%252cazure-portal#how-do-i-see-if-there-are-specific-files-or-folders-that-are-not-syncing) section to resolve the per-item errors. For sync error ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED, please open a support case. --> [!NOTE] -> Azure File Sync creates a temporary VSS snapshot once a day on the server to sync files that have open handles. --<a id="-2134376423"></a>**Sync failed due to a problem with the server endpoint path.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80019 | -| **HRESULT (decimal)** | -2134376423 | -| **Error string** | ECS_E_SYNC_INVALID_PATH | -| **Remediation required** | Yes | --Ensure the path exists, is on a local NTFS volume, and isn't a reparse point or existing server endpoint. --<a id="-2134375817"></a>**Sync failed because the filter driver version isn't compatible with the agent version** --| Error | Code | -|-|-| -| **HRESULT** | 0x80C80277 | -| **HRESULT (decimal)** | -2134375817 | -| **Error string** | ECS_E_INCOMPATIBLE_FILTER_VERSION | -| **Remediation required** | Yes | --This error occurs because the Cloud Tiering filter driver (StorageSync.sys) version loaded isn't compatible with the Storage Sync Agent (FileSyncSvc) service. If the Azure File Sync agent was upgraded, restart the server to complete the installation. If the error continues to occur, uninstall the agent, restart the server and reinstall the Azure File Sync agent. --<a id="-2134376373"></a>**The service is currently unavailable.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8004b | -| **HRESULT (decimal)** | -2134376373 | -| **Error string** | ECS_E_SERVICE_UNAVAILABLE | -| **Remediation required** | No | --This error occurs because the Azure File Sync service is unavailable. This error will auto-resolve when the Azure File Sync service is available again. --> [!Note] -> Once network connectivity to the Azure File Sync service is restored, sync might not resume immediately. By default, Azure File Sync will initiate a sync session every 30 minutes if no changes are detected within the server endpoint location. To force a sync session, restart the Storage Sync Agent (FileSyncSvc) service or make a change to a file or directory within the server endpoint location. --<a id="-2146233088"></a>**Sync failed due to an exception.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80131500 | -| **HRESULT (decimal)** | -2146233088 | -| **Error string** | COR_E_EXCEPTION | -| **Remediation required** | No | --This error occurs because sync failed due to an exception. 
If the error persists for several hours, please create a support request. --<a id="-2134364045"></a>**Sync failed because the storage account has failed over to another region.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83073 | -| **HRESULT (decimal)** | -2134364045 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_FAILED_OVER | -| **Remediation required** | Yes | --This error occurs because the storage account has failed over to another region. Azure File Sync does not support the storage account failover feature. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync should not be failed over. Doing so will cause sync to stop working and might also cause unexpected data loss in the case of newly tiered files. To resolve this issue, move the storage account to the primary region. --<a id="-2134375922"></a>**Sync failed due to a transient problem with the sync database.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8020e | -| **HRESULT (decimal)** | -2134375922 | -| **Error string** | ECS_E_SYNC_METADATA_WRITE_LEASE_LOST | -| **Remediation required** | No | --This error occurs because of an internal problem with the sync database. This error will auto-resolve when sync retries. If this error continues for an extended period of time, create a support request and we will contact you to help you resolve this issue. --<a id="-2134364024"></a>**Sync failed due to change in Azure Active Directory tenant** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83088 | -| **HRESULT (decimal)** | -2134364024 | -| **Error string** | ECS_E_INVALID_AAD_TENANT | -| **Remediation required** | Yes | --Verify you have the latest Azure File Sync agent version installed and give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](#troubleshoot-rbac)). --<a id="-2134364010"></a>**Sync failed due to firewall and virtual network exception not configured** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83096 | -| **HRESULT (decimal)** | -2134364010 | -| **Error string** | ECS_E_MGMT_STORAGEACLSBYPASSNOTSET | -| **Remediation required** | Yes | --This error occurs if the firewall and virtual network settings are enabled on the storage account and the "Allow trusted Microsoft services to access this storage account" exception isn't checked. To resolve this issue, follow the steps documented in the [Configure firewall and virtual network settings](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) section in the deployment guide. --<a id="-2147024891"></a>**Sync failed with access denied due to security settings on the storage account or NTFS permissions on the server.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80070005 | -| **HRESULT (decimal)** | -2147024891 | -| **Error string** | ERROR_ACCESS_DENIED | -| **Remediation required** | Yes | --This error can occur if Azure File Sync cannot access the storage account due to security settings or if the NT AUTHORITY\SYSTEM account doesn't have permissions to the System Volume Information folder on the volume where the server endpoint is located. If individual files are failing to sync with ERROR_ACCESS_DENIED, perform the steps documented in the [Troubleshooting per file/directory sync errors](?tabs=portal1%252cazure-portal#troubleshooting-per-filedirectory-sync-errors) section. --1.
Verify the **SMB security settings** on the storage account are allowing **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). -2. [Verify the firewall and virtual network settings on the storage account are configured properly (if enabled)](file-sync-deployment-guide.md?tabs=azure-portal#optional-configure-firewall-and-virtual-network-settings) -3. Verify the **NT AUTHORITY\SYSTEM** account has permissions to the System Volume Information folder on the volume where the server endpoint is located by performing the following steps: -- a. Download [Psexec](/sysinternals/downloads/psexec) tool. - b. Run the following command from an elevated command prompt to launch a command prompt using the system account: `PsExec.exe -i -s -d cmd` - c. From the command prompt running under the system account, run the following command to confirm the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder: `cacls "drive letter:\system volume information" /T /C` - d. If the NT AUTHORITY\SYSTEM account does not have access to the System Volume Information folder, run the following command: `cacls "drive letter:\system volume information" /T /E /G "NT AUTHORITY\SYSTEM:F"` - - If step #d fails with access denied, run the following command to take ownership of the System Volume Information folder and then repeat step #d: `takeown /A /R /F "drive letter:\System Volume Information"` --<a id="-2134375810"></a>**Sync failed because the Azure file share was deleted and recreated.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8027e | -| **HRESULT (decimal)** | -2134375810 | -| **Error string** | ECS_E_SYNC_REPLICA_ROOT_CHANGED | -| **Remediation required** | Yes | --This error occurs because Azure File Sync doesn't support deleting and recreating an Azure file share in the same sync group. --To resolve this issue, delete and recreate the sync group by performing the following steps: --1. Delete all server endpoints in the sync group. -2. Delete the cloud endpoint. -3. Delete the sync group. -4. If cloud tiering was enabled on a server endpoint, delete the orphaned tiered files on the server by performing the steps documented in the [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint) section. -5. Recreate the sync group. --<a id="-2134375852"></a>**Sync detected the replica has been restored to an older state** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80254 | -| **HRESULT (decimal)** | -2134375852 | -| **Error string** | ECS_E_SYNC_REPLICA_BACK_IN_TIME | -| **Remediation required** | No | --No action is required. This error occurs because sync detected the replica has been restored to an older state. Sync will now enter a reconciliation mode, where it recreates the sync relationship by merging the contents of the Azure file share and the data on the server endpoint. When reconciliation mode is triggered, the process can be very time consuming depending upon the namespace size. Regular synchronization doesn't happen until the reconciliation finishes, and files that are different (last modified time or size) between the Azure file share and server endpoint will result in file conflicts. 
--<a id="-2145844941"></a>**Sync failed because the HTTP request was redirected** --| Error | Code | -|-|-| -| **HRESULT** | 0x80190133 | -| **HRESULT (decimal)** | -2145844941 | -| **Error string** | HTTP_E_STATUS_REDIRECT_KEEP_VERB | -| **Remediation required** | Yes | --This error occurs because Azure File Sync doesn't support HTTP redirection (3xx status code). To resolve this issue, disable HTTP redirect on your proxy server or network device. --<a id="-2134364086"></a>**Sync session timeout error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8304a | -| **HRESULT (decimal)** | -2134364086 | -| **Error string** | ECS_E_WORK_FRAMEWORK_TIMEOUT | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83049 | -| **HRESULT (decimal)** | -2134364087 | -| **Error string** | ECS_E_WORK_FRAMEWORK_RESULT_NOT_FOUND | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83093 | -| **HRESULT (decimal)** | -2134364013 | -| **Error string** | ECS_E_WORK_RESULT_EXPIRED | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2146233083"></a>**Operation time out.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80131505 | -| **HRESULT (decimal)** | -2146233083 | -| **Error string** | COR_E_TIMEOUT | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2134351859"></a>**Time out error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8600d | -| **HRESULT (decimal)** | -2134351859 | -| **Error string** | ECS_E_AZURE_OPERATION_TIME_OUT | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83085 | -| **HRESULT (decimal)** | -2134364027 | -| **Error string** | ECS_E_DATA_INGESTION_WAIT_TIMEOUT | -| **Remediation required** | No | --This error occurs when a data ingestion operation exceeds the timeout. This error can be ignored if sync is making progress (AppliedItemCount is greater than 0). See [How do I monitor the progress of a current sync session?](#how-do-i-monitor-the-progress-of-a-current-sync-session). --<a id="-2134375814"></a>**Sync failed because the server endpoint path cannot be found on the server.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8027a | -| **HRESULT (decimal)** | -2134375814 | -| **Error string** | ECS_E_SYNC_ROOT_DIRECTORY_NOT_FOUND | -| **Remediation required** | Yes | --This error occurs if the directory used as the server endpoint path was renamed or deleted. If the directory was renamed, rename the directory back to the original name and restart the Storage Sync Agent service (FileSyncSvc). --If the directory was deleted, perform the following steps to remove the existing server endpoint and create a new server endpoint using a new path: --1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md). -1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md). 
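If you prefer PowerShell for the remove and recreate steps above, a sketch along these lines can be used, assuming the Az.StorageSync module and placeholder names; adjust the parameters (for example, cloud tiering settings) to match your original server endpoint configuration.

```powershell
# Placeholders: substitute your resource names, registered server, and new local path.
$rg  = "myResourceGroup"
$sss = "myStorageSyncService"
$sg  = "mySyncGroup"

# Remove the existing server endpoint.
Remove-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName $sg -Name "myServerEndpoint"

# Recreate the server endpoint against the new path on the registered server.
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $sss |
    Where-Object { $_.FriendlyName -eq "myserver.contoso.com" }

New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $sss `
    -SyncGroupName $sg -Name "myServerEndpoint" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\NewServerEndpointPath"
```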
--<a id="-2134375783"></a>**Server endpoint provisioning failed due to an empty server path.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80C80299 | -| **HRESULT (decimal)** | -2134375783 | -| **Error string** | ECS_E_SYNC_AUTHORITATIVE_UPLOAD_EMPTY_SET | -| **Remediation required** | Yes | --Server endpoint provisioning fails with this error code if these conditions are met: -* This server endpoint was provisioned with the initial sync mode: [server authoritative](file-sync-server-endpoint-create.md#initial-sync-section) -* Local server path is empty or contains no items recognized as able to sync. --This provisioning error protects you from deleting all content that might be available in an Azure file share. Server authoritative upload is a special mode to catch up a cloud location that was already seeded, with the updates from the server location. Review this [migration guide](../files/storage-files-migration-server-hybrid-databox.md) to understand the scenario for which this mode has been built. --1. Remove the server endpoint in the sync group by following the steps documented in [Remove a server endpoint](file-sync-server-endpoint-delete.md). -1. Create a new server endpoint in the sync group by following the steps documented in [Add a server endpoint](file-sync-server-endpoint-create.md). ---<a id="-2134364025"></a>**The subscription owning the storage account is disabled.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83087 | -| **HRESULT (decimal)** | -2134364025 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_SUBSCRIPTION_DISABLED | -| **Remediation required** | Yes | --Please check and ensure the subscription where your storage account resides is enabled. --<a id="64"></a>**The specified network name is no longer available.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80070040 | -| **HRESULT (decimal)** | -2147024832 | -| **Error string** | ERROR_NETNAME_DELETED | -| **Remediation required** | Yes | --Use the `Test-StorageSyncNetworkConnectivity` cmdlet to check network connectivity to the service endpoints. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). --<a id="-2134364147"></a>**Sync session error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8300d | -| **HRESULT (decimal)** | -2134364147 | -| **Error string** | ECS_E_CANNOT_CREATE_ACTIVE_SESSION_PLACEHOLDER_BLOB | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8300e | -| **HRESULT (decimal)** | -2134364146 | -| **Error string** | ECS_E_CANNOT_UPDATE_REPLICA_WATERMARK | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8024a | -| **HRESULT (decimal)** | -2134375862 | -| **Error string** | ECS_E_SYNC_DEFERRAL_QUEUE_RESTART_SESSION | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83098 | -| **HRESULT (decimal)** | -2134364008 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_MGMT_OPERATION_THROTTLED | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83082 | -| **HRESULT (decimal)** | -2134364030 | -| **Error string** | ECS_E_ASYNC_WORK_ACTION_UNABLE_TO_RETRY | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83006 | -| **HRESULT (decimal)** | -2134364154 | -| **Error string** | ECS_E_ECS_BATCH_ERROR | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. 
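For the ERROR_NETNAME_DELETED entry above, the connectivity test runs on the registered server itself. A sketch, assuming the agent's server cmdlets are installed in the default location:

```powershell
# Load the server cmdlets shipped with the Azure File Sync agent (default install path assumed)
# and test connectivity to the service endpoints.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Test-StorageSyncNetworkConnectivity
```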
--<a id="-2134363999"></a>**Sync session error.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c830a1 | -| **HRESULT (decimal)** | -2134363999 | -| **Error string** | ECS_TOO_MANY_ETAGVERIFICATION_FAILURES | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8023c | -| **HRESULT (decimal)** | -2134375876 | -| **Error string** | ECS_E_SYNC_CLOUD_METADATA_CORRUPT | -| **Remediation required** | Maybe | --If the error persists for more than a day, create a support request. --<a id="-2147024809"></a>**An internal error occurred.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80070057 | -| **HRESULT (decimal)** | -2147024809 | -| **Error string** | ERROR_INVALID_PARAMETER | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80302 | -| **HRESULT (decimal)** | -2134375678 | -| **Error string** | ECS_E_UNKNOWN_HTTP_SERVER_ERROR | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x8004100c | -| **HRESULT (decimal)** | -2147217396 | -| **Error string** | SYNC_E_DESERIALIZATION | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8022d | -| **HRESULT (decimal)** | -2134375891 | -| **Error string** | ECS_E_SYNC_METADATA_UNCOMMITTED_TX_LIMIT_REACHED | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83097 | -| **HRESULT (decimal)** | -2134364009 | -| **Error string** | ECS_E_QUEUE_CLIENT_EXCEPTION | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80245 | -| **HRESULT (decimal)** | -2134375867 | -| **Error string** | ECS_E_EPOCH_CHANGE_DETECTED | -| **Remediation required** | No | --| Error | Code | -|-|-| -| **HRESULT** | 0x80072ef3 | -| **HRESULT (decimal)** | -2147012877 | -| **Error string** | WININET_E_INCORRECT_HANDLE_STATE | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request.
--<a id="-2146233079"></a>**An internal error occurred.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80131509 | -| **HRESULT (decimal)** | -2146233079 | -| **Error string** | COR_E_INVALIDOPERATION | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80070718 | -| **HRESULT (decimal)** | -2147023080 | -| **Error string** | ERROR_NOT_ENOUGH_QUOTA | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80131622 | -| **HRESULT (decimal)** | -2146232798 | -| **Error string** | COR_E_OBJECTDISPOSED | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80004002 | -| **HRESULT (decimal)** | -2147467262 | -| **Error string** | E_NOINTERFACE | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x800700a1 | -| **HRESULT (decimal)** | -2147024735 | -| **Error string** | ERROR_BAD_PATHNAME | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x8007054f | -| **HRESULT (decimal)** | -2147023537 | -| **Error string** | ERROR_INTERNAL_ERROR | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80131501 | -| **HRESULT (decimal)** | -2146233087 | -| **Error string** | COR_E_SYSTEM | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80131620 | -| **HRESULT (decimal)** | -2146232800 | -| **Error string** | COR_E_IO | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80070026 | -| **HRESULT (decimal)** | -2147024858 | -| **Error string** | COR_E_ENDOFSTREAM | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80070554 | -| **HRESULT (decimal)** | -2147023532 | -| **Error string** | ERROR_NO_SUCH_PACKAGE | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x80131537 | -| **HRESULT (decimal)** | -2146233033 | -| **Error string** | COR_E_FORMAT | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x8007001f | -| **HRESULT (decimal)** | -2147024865 | -| **Error string** | ERROR_GEN_FAILURE | -| **Remediation required** | Maybe | --If the error persists for more than a day, create a support request. --<a id="-2147467261"></a>**An internal error occurred.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80004003 | -| **HRESULT (decimal)** | -2147467261 | -| **Error string** | E_POINTER | -| **Remediation required** | Yes | --Please upgrade to the latest file sync agent version. If the error persists after upgrading the agent, create a support request. --<a id="-2147023570"></a>**Operation failed due to an authentication failure.** --| Error | Code | -|-|-| -| **HRESULT** | 0x8007052e | -| **HRESULT (decimal)** | -2147023570 | -| **Error string** | ERROR_LOGON_FAILURE | -| **Remediation required** | Maybe | --| Error | Code | -|-|-| -| **HRESULT** | 0x8007051f | -| **HRESULT (decimal)** | -2147023585 | -| **Error string** | ERROR_NO_LOGON_SERVERS | -| **Remediation required** | Maybe | --If the error persists for more than a day, create a support request. --<a id="-2134351869"></a>**The specified Azure account is disabled.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c86003 | -| **HRESULT (decimal)** | -2134351869 | -| **Error string** | ECS_E_AZURE_ACCOUNT_IS_DISABLED | -| **Remediation required** | Yes | --Please check and ensure the subscription where your storage account resides is enabled. 
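To confirm the subscription state from PowerShell, a quick sketch using the Az.Accounts module:

```powershell
# Lists your subscriptions and their state; the subscription hosting the
# storage account should show State = Enabled.
Get-AzSubscription | Select-Object Name, Id, State
```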
--<a id="-2134364036"></a>**Storage account key based authentication blocked.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8307c | -| **HRESULT (decimal)** | -2134364036 | -| **Error string** | ECS_E_STORAGE_ACCOUNT_KEY_BASED_AUTHENTICATION_BLOCKED | -| **Remediation required** | Yes | --Enable 'Allow storage account key access' on the storage account. [Learn more](file-sync-deployment-guide.md#prerequisites). --<a id="-2134364020"></a>**The specified seeded share does not exist.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8308c | -| **HRESULT (decimal)** | -2134364020 | -| **Error string** | ECS_E_SEEDED_SHARE_NOT_FOUND | -| **Remediation required** | Yes | --Check if the Azure file share exists in the storage account. --<a id="-2134376385"></a>**Sync needs to update the database on the server.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c8003f | -| **HRESULT (decimal)** | -2134376385 | -| **Error string** | ECS_E_SYNC_EPOCH_MISMATCH | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="-2134347516"></a>**The volume is offline. Either it is removed, not ready or not connected.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c87104 | -| **HRESULT (decimal)** | -2134347516 | -| **Error string** | ECS_E_VOLUME_OFFLINE | -| **Remediation required** | Yes | --Please verify the volume where the server endpoint is located is attached to the server. --<a id="-2134364007"></a>**Private endpoint configuration access blocked.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c83099 | -| **HRESULT (decimal)** | -2134364007 | -| **Error string** | ECS_E_PRIVATE_ENDPOINT_ACCESS_BLOCKED | -| **Remediation required** | Yes | --Check the private endpoint configuration and allow access to the file sync service. [Learn more](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). --<a id="-2134375864"></a>**Sync needs to reconcile the server and Azure file share data before files can be uploaded.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80248 | -| **HRESULT (decimal)** | -2134375864 | -| **Error string** | ECS_E_REPLICA_RECONCILIATION_NEEDED | -| **Remediation required** | No | --No action required. This error should automatically resolve. If the error persists for several days, create a support request. --<a id="0x4c3"></a>**Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed.** --| Error | Code | -|-|-| -| **HRESULT** | 0x800704c3 | -| **HRESULT (decimal)** | -2147023677 | -| **Error string** | ERROR_SESSION_CREDENTIAL_CONFLICT | -| **Remediation required** | Yes | --Disconnect all previous connections to the server or shared resource and try again. --<a id="-2134376368"></a>**The server's SSL certificate is invalid or expired.** --| Error | Code | -|-|-| -| **HRESULT** | 0x80c80050 | -| **HRESULT (decimal)** | -2134376368 | -| **Error string** | ECS_E_SERVER_INVALID_OR_EXPIRED_CERTIFICATE | -| **Remediation required** | Yes | --Run the following PowerShell command on the server to reset the certificate: `Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string>` --## Common troubleshooting steps --<a id="troubleshoot-storage-account"></a>**Verify the storage account exists.** -# [Portal](#tab/azure-portal) -1. Navigate to the sync group within the Storage Sync Service. -2. Select the cloud endpoint within the sync group. -3. 
Note the Azure file share name in the opened pane. -4. Select the linked storage account. If this link fails, the referenced storage account has been removed. -  --# [PowerShell](#tab/azure-powershell) -```powershell -# Variables for you to populate based on your configuration -$region = "<Az_Region>" -$resourceGroup = "<RG_Name>" -$syncService = "<storage-sync-service>" -$syncGroup = "<sync-group>" --# Log into the Azure account -Connect-AzAccount --# Check to ensure Azure File Sync is available in the selected Azure -# region. -$regions = [System.String[]]@() -Get-AzLocation | ForEach-Object { - if ($_.Providers -contains "Microsoft.StorageSync") { - $regions += $_.Location - } -} --if ($regions -notcontains $region) { - throw [System.Exception]::new("Azure File Sync is either not available in the " + ` - " selected Azure Region or the region is mistyped.") -} --# Check to ensure resource group exists -$resourceGroups = [System.String[]]@() -Get-AzResourceGroup | ForEach-Object { - $resourceGroups += $_.ResourceGroupName -} --if ($resourceGroups -notcontains $resourceGroup) { - throw [System.Exception]::new("The provided resource group $resourceGroup does not exist.") -} --# Check to make sure the provided Storage Sync Service -# exists. -$syncServices = [System.String[]]@() --Get-AzStorageSyncService -ResourceGroupName $resourceGroup | ForEach-Object { - $syncServices += $_.StorageSyncServiceName -} --if ($syncServices -notcontains $syncService) { - throw [System.Exception]::new("The provided Storage Sync Service $syncService does not exist.") -} --# Check to make sure the provided Sync Group exists -$syncGroups = [System.String[]]@() --Get-AzStorageSyncGroup -ResourceGroupName $resourceGroup -StorageSyncServiceName $syncService | ForEach-Object { - $syncGroups += $_.SyncGroupName -} --if ($syncGroups -notcontains $syncGroup) { - throw [System.Exception]::new("The provided sync group $syncGroup does not exist.") -} --# Get reference to cloud endpoint -$cloudEndpoint = Get-AzStorageSyncCloudEndpoint ` - -ResourceGroupName $resourceGroup ` - -StorageSyncServiceName $syncService ` - -SyncGroupName $syncGroup --# Get reference to storage account -$storageAccount = Get-AzStorageAccount | Where-Object { - $_.Id -eq $cloudEndpoint.StorageAccountResourceId -} --if ($storageAccount -eq $null) { - throw [System.Exception]::new("The storage account referenced in the cloud endpoint does not exist.") -} -``` ---<a id="troubleshoot-azure-file-share"></a>**Ensure the Azure file share exists.** -# [Portal](#tab/azure-portal) -1. Click **Overview** on the left-hand table of contents to return to the main storage account page. -2. Select **Files** to view the list of file shares. -3. Verify the file share referenced by the cloud endpoint appears in the list of file shares (you should have noted this in step 1 above). --# [PowerShell](#tab/azure-powershell) -```powershell -$fileShare = Get-AzStorageShare -Context $storageAccount.Context | Where-Object { - $_.Name -eq $cloudEndpoint.AzureFileShareName -and - $_.IsSnapshot -eq $false -} --if ($fileShare -eq $null) { - throw [System.Exception]::new("The Azure file share referenced by the cloud endpoint does not exist") -} -``` ---<a id="troubleshoot-rbac"></a>**Ensure Azure File Sync has access to the storage account.** -# [Portal](#tab/azure-portal) -1. Select **Access control (IAM)** from the left-hand navigation. -1. Select the **Role assignments** tab to list the users and applications (*service principals*) that have access to your storage account. -1. 
Verify **Microsoft.StorageSync** or **Hybrid File Sync Service** (old application name) appears in the list with the **Reader and Data Access** role. --  -- If **Microsoft.StorageSync** or **Hybrid File Sync Service** doesn't appear in the list, perform the following steps: -- - Select **Add**. - - In the **Role** field, select **Reader and Data Access**. - - In the **Select** field, type **Microsoft.StorageSync**, select the role, and then select **Save**. --# [PowerShell](#tab/azure-powershell) -```powershell -$role = Get-AzRoleAssignment -Scope $storageAccount.Id | Where-Object { $_.DisplayName -eq "Microsoft.StorageSync" } --if ($role -eq $null) { - throw [System.Exception]::new("The storage account does not have the Azure File Sync " + ` - "service principal authorized to access the data within the " + ` - "referenced Azure file share.") -} -``` ---## See also -- [Troubleshoot Azure File Sync sync group management](file-sync-troubleshoot-sync-group-management.md)-- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)-- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)-- [Monitor Azure File Sync](file-sync-monitoring.md)-- [Troubleshoot Azure Files problems](../files/files-troubleshoot.md) |
storage | File Sync Troubleshoot Sync Group Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-group-management.md | - Title: Troubleshoot Azure File Sync sync group management -description: Troubleshoot common issues in managing Azure File Sync sync groups, including cloud endpoint creation and server endpoint creation, deletion, and health. --- Previously updated : 10/25/2022------# Troubleshoot Azure File Sync sync group management -A sync group defines the sync topology for a set of files. Endpoints within a sync group are kept in sync with each other. A sync group must contain one cloud endpoint, which represents an Azure file share, and one or more server endpoints, which represents a path on a registered server. This article is designed to help you troubleshoot and resolve issues that you might encounter when managing sync groups. --## Cloud endpoint creation errors --<a id="cloud-endpoint-mgmtinternalerror"></a>**Cloud endpoint creation fails, with this error: "MgmtInternalError"** -This error can occur if the Azure File Sync service cannot access the storage account due to SMB security settings. To enable Azure File Sync to access the storage account, the SMB security settings on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings). --<a id="cloud-endpoint-mgmtforbidden"></a>**Cloud endpoint creation fails, with this error: "MgmtForbidden"** -This error occurs if the Azure File Sync service cannot access the storage account. --To resolve this issue, perform the following steps: -- Verify the "Allow trusted Microsoft services to access this storage account" setting is checked on your storage account. To learn more, see [Restrict access to the storage account public endpoint](file-sync-networking-endpoints.md#restrict-access-to-the-storage-account-public-endpoint).-- Verify the SMB security settings on your storage account. To enable Azure File Sync to access the storage account, the SMB security settings on the storage account must allow **SMB 3.1.1** protocol version, **NTLM v2** authentication and **AES-128-GCM** encryption. To check the SMB security settings on the storage account, see [SMB security settings](../files/files-smb-protocol.md#smb-security-settings).--<a id="cloud-endpoint-authfailed"></a>**Cloud endpoint creation fails, with this error: "AuthorizationFailed"** -This error occurs if your user account doesn't have sufficient rights to create a cloud endpoint. --To create a cloud endpoint, your user account must have the following Microsoft Authorization permissions: -* Read: Get role definition -* Write: Create or update custom role definition -* Read: Get role assignment -* Write: Create role assignment --The following built-in roles have the required Microsoft Authorization permissions: -* Owner -* User Access Administrator --To determine whether your user account role has the required permissions: -1. In the Azure portal, select **Resource groups**. -2. Select the resource group where the storage account is located, and then select **Access control (IAM)**. -3. Select the **Role assignments** tab. -4. Select the **Role** (for example, Owner or Contributor) for your user account. -5. In the **Resource Provider** list, select **Microsoft Authorization**. 
- * **Role assignment** should have **Read** and **Write** permissions. - * **Role definition** should have **Read** and **Write** permissions. --<a id="cloud-endpoint-using-share"></a>**Cloud endpoint creation fails, with this error: "The specified Azure FileShare is already in use by a different CloudEndpoint"** -This error occurs if the Azure file share is already in use by another cloud endpoint. --If you see this message and the Azure file share currently is not in use by a cloud endpoint, complete the following steps to clear the Azure File Sync metadata on the Azure file share: --> [!Warning] -> Deleting the metadata on an Azure file share that is currently in use by a cloud endpoint causes Azure File Sync operations to fail. If you then use this file share for sync in a different sync group, data loss for files in the old sync group is almost certain. --1. In the Azure portal, go to your Azure file share.   -2. Right-click the Azure file share, and then select **Edit metadata**. -3. Right-click **SyncService**, and then select **Delete**. --## Server endpoint creation and deletion errors --<a id="-2134375898"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375898 or 0x80c80226)** -This error occurs if the server endpoint path is on the system volume and cloud tiering is enabled. Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint. --<a id="-2147024894"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2147024894 or 0x80070002)** -This error occurs if the server endpoint path specified is not valid. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path. --<a id="-2134375640"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134375640 or 0x80c80328)** -This error occurs if the server endpoint path specified is not an NTFS volume. Verify the server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path. --<a id="-2134347507"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134347507 or 0x80c8710d)** -This error occurs because Azure File Sync does not support server endpoints on volumes, which have a compressed System Volume Information folder. To resolve this issue, decompress the System Volume Information folder. If the System Volume Information folder is the only folder compressed on the volume, perform the following steps: --1. Download [PsExec](/sysinternals/downloads/psexec) tool. -2. Run the following command from an elevated command prompt to launch a command prompt running under the system account: **PsExec.exe -i -s -d cmd** -3. From the command prompt running under the system account, type the following commands and hit enter: - **cd /d "drive letter:\System Volume Information"** - **compact /u /s** --<a id="-2134376345"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376345 or 0x80C80067)** -This error occurs if the limit of server endpoints per server is reached. Azure File Sync currently supports up to 30 server endpoints per server. 
For more information, see -[Azure File Sync scale targets](../files/storage-files-scale-targets.md?toc=/azure/storage/filesync/toc.json#azure-file-sync-scale-targets). --<a id="-2134376427"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2134376427 or 0x80c80015)** -This error occurs if another server endpoint is already syncing the server endpoint path specified. Azure File Sync does not support multiple server endpoints syncing the same directory or volume. --<a id="-2160590967"></a>**Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2160590967 or 0x80c80077)** -This error occurs if the server endpoint path contains orphaned tiered files. If a server endpoint was recently removed, wait until the orphaned tiered files cleanup has completed. An Event ID 6662 is logged to the Telemetry event log once the orphaned tiered files cleanup has started. An Event ID 6661 is logged once the orphaned tiered files cleanup has completed and a server endpoint can be recreated using the path. If the server endpoint creation fails after the tiered files cleanup has completed, or if Event ID 6661 cannot be found in the Telemetry event log due to event log rollover, remove the orphaned tiered files by performing the steps documented in [Tiered files are not accessible on the server after deleting a server endpoint](file-sync-troubleshoot-cloud-tiering.md#tiered-files-are-not-accessible-on-the-server-after-deleting-a-server-endpoint). --<a id="-2134347757"></a>**Server endpoint deletion fails, with this error: "MgmtServerJobExpired" (Error code: -2134347757 or 0x80c87013)** -This error occurs if the server is offline or doesn't have network connectivity. If the server is no longer available, unregister the server in the portal, which will delete the server endpoints. To delete the server endpoints, follow the steps that are described in [Unregister a server with Azure File Sync](file-sync-server-registration.md#unregister-the-server-with-storage-sync-service). --## Server endpoint health --<a id="server-endpoint-provisioningfailed"></a>**Unable to open server endpoint properties page or update cloud tiering policy** -This issue can occur if a management operation on the server endpoint fails. If the server endpoint properties page does not open in the Azure portal, updating the server endpoint using PowerShell commands from the server may fix this issue. --```powershell -# Get the server endpoint id based on the server endpoint DisplayName property -Get-AzStorageSyncServerEndpoint ` - -ResourceGroupName myrgname ` - -StorageSyncServiceName storagesvcname ` - -SyncGroupName mysyncgroup | ` -Tee-Object -Variable serverEndpoint --# Update the free space percent policy for the server endpoint -Set-AzStorageSyncServerEndpoint ` - -InputObject $serverEndpoint ` - -CloudTiering ` - -VolumeFreeSpacePercent 60 -``` -<a id="server-endpoint-noactivity"></a>**Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"** --This issue can occur if the Storage Sync Monitor process (AzureStorageSyncMonitor.exe) is not running or the server is unable to access the Azure File Sync service. --On the server that is showing as "Appears offline" in the portal, look at Event ID 9301 in the Telemetry event log (located under Applications and Services\Microsoft\FileSync\Agent in Event Viewer) to determine why the server is unable to access the Azure File Sync service.
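You can also pull that event from PowerShell instead of opening Event Viewer. The following is a minimal sketch; the channel name shown is the agent's Telemetry log and might differ on older agent versions.

```powershell
# Retrieve the most recent Event ID 9301 from the Azure File Sync Telemetry log.
# If the log name doesn't resolve, list the available channels with: Get-WinEvent -ListLog *FileSync*
Get-WinEvent -FilterHashtable @{ LogName = "Microsoft-FileSync-Agent/Telemetry"; Id = 9301 } -MaxEvents 1 |
    Select-Object TimeCreated, Message
```

The message text contains the GetNextJob status referenced in the list that follows.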
--- If **GetNextJob completed with status: 0** is logged, the server can communicate with the Azure File Sync service. - - Open Task Manager on the server and verify the Storage Sync Monitor (AzureStorageSyncMonitor.exe) process is running. If the process is not running, first try restarting the server. If restarting the server does not resolve the issue, upgrade to the latest Azure File Sync [agent version](file-sync-release-notes.md). --- If **GetNextJob completed with status: -2134347756** is logged, the server is unable to communicate with the Azure File Sync service due to a firewall, proxy, or TLS cipher suite order configuration. - - If the server is behind a firewall, verify port 443 outbound is allowed. If the firewall restricts traffic to specific domains, confirm the domains listed in the Firewall [documentation](file-sync-firewall-and-proxy.md#firewall) are accessible. - - If the server is behind a proxy, configure the machine-wide or app-specific proxy settings by following the steps in the Proxy [documentation](file-sync-firewall-and-proxy.md#proxy). - - Use the Test-StorageSyncNetworkConnectivity cmdlet to check network connectivity to the service endpoints. To learn more, see [Test network connectivity to service endpoints](file-sync-firewall-and-proxy.md#test-network-connectivity-to-service-endpoints). - - If the TLS cipher suite order is configured on the server, you can use group policy or TLS cmdlets to add cipher suites: - - To use group policy, see [Configuring TLS Cipher Suite Order by using Group Policy](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-group-policy). - - To use TLS cmdlets, see [Configuring TLS Cipher Suite Order by using TLS PowerShell Cmdlets](/windows-server/security/tls/manage-tls#configuring-tls-cipher-suite-order-by-using-tls-powershell-cmdlets). - - Azure File Sync currently supports the following cipher suites for TLS 1.2 protocol: - - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 - - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 - - TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 - - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 --> [!Note] -> Different Windows versions support different TLS cipher suites and priority order. See [TLS Cipher Suites in Windows](/windows/win32/secauthn/cipher-suites-in-schannel) for the corresponding Windows version and the supported cipher suites and default order in which they are chosen by the Microsoft Schannel Provider. --- If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate. - - Run the following PowerShell command on the server to reset the certificate used for authentication: - ```powershell - Reset-AzStorageSyncServerCertificate -ResourceGroupName <string> -StorageSyncServiceName <string> - ``` -<a id="endpoint-noactivity-sync"></a>**Server endpoint has a health status of "No Activity" and the server state on the registered servers blade is "Online"** --A server endpoint health status of "No Activity" means the server endpoint has not logged sync activity in the past two hours. --To check current sync activity on a server, see [How do I monitor the progress of a current sync session?](file-sync-troubleshoot-sync-errors.md#how-do-i-monitor-the-progress-of-a-current-sync-session) --A server endpoint may not log sync activity for several hours due to a bug or insufficient system resources. Verify the latest Azure File Sync [agent version](file-sync-release-notes.md) is installed. 
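One way to check the installed agent version from the server is to read the product version on the agent binary. This is a sketch that assumes the default install path; adjust it if you installed the agent elsewhere.

```powershell
# Default install location for the Azure File Sync agent; change the path if needed.
$agentExe = "C:\Program Files\Azure\StorageSyncAgent\FileSyncSvc.exe"
(Get-Item $agentExe).VersionInfo.ProductVersion
```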
If the issue persists, open a support request. --> [!Note] -> If the server state on the registered servers blade is "Appears Offline," perform the steps documented in the [Server endpoint has a health status of "No Activity" or "Pending" and the server state on the registered servers blade is "Appears offline"](#server-endpoint-noactivity) section. --## See also -- [Troubleshoot Azure File Sync sync errors](file-sync-troubleshoot-sync-errors.md)-- [Troubleshoot Azure File Sync agent installation and server registration](file-sync-troubleshoot-installation.md)-- [Troubleshoot Azure File Sync cloud tiering](file-sync-troubleshoot-cloud-tiering.md)-- [Monitor Azure File Sync](file-sync-monitoring.md)-- [Troubleshoot Azure Files problems](../files/files-troubleshoot.md) |
storage | File Sync Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md | - Title: Troubleshoot Azure File Sync -description: Troubleshoot common issues that you might encounter with Azure File Sync, which you can use to transform Windows Server into a quick cache of your Azure file share. --- Previously updated : 8/08/2022-----# Troubleshoot Azure File Sync -You can use Azure File Sync to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of an on-premises file server. This article is designed to help you troubleshoot and resolve issues that you might encounter with your Azure File Sync deployment. We also describe how to collect important logs from the system if a deeper investigation of the issue is required. If you don't see the answer to your question, you can contact us through the following channels (in escalating order): --- [Microsoft Q&A question page for Azure Files](/answers/products/azure?product=storage).-- [Azure Community Feedback](https://feedback.azure.com/d365community/forum/a8bb4a47-3525-ec11-b6e6-000d3a4f0f84?c=c860fa6b-3525-ec11-b6e6-000d3a4f0f84).-- Microsoft Support. To create a new support request, in the Azure portal, on the **Help** tab, select the **Help + support** button, and then select **New support request**.--## I'm having an issue with Azure File Sync on my server (sync, cloud tiering, etc.). Should I remove and recreate my server endpoint? --## General troubleshooting first steps -If you encounter issues with Azure File Sync on a server, start by completing the following steps: -1. In Event Viewer, review the telemetry, operational and diagnostic event logs. - - Sync, tiering, and recall issues are logged in the telemetry, diagnostic and operational event logs under Applications and Services\Microsoft\FileSync\Agent. - - Issues related to managing a server (for example, configuration settings) are logged in the operational and diagnostic event logs under Applications and Services\Microsoft\FileSync\Management. -2. Verify the Azure File Sync service is running on the server: - - Open the Services MMC snap-in and verify that the Storage Sync Agent service (FileSyncSvc) is running. -3. Verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are running: - - At an elevated command prompt, run `fltmc`. Verify that the StorageSync.sys and StorageSyncGuard.sys file system filter drivers are listed. --If the issue is not resolved, run the AFSDiag tool and send its .zip file output to the support engineer assigned to your case for further diagnosis. --To run AFSDiag, perform the steps below: --1. Open an elevated PowerShell window, and then run the following commands (press Enter after each command): -- > [!NOTE] - >AFSDiag will create the output directory and a temp folder within it prior to collecting logs and will delete the temp folder after execution. Specify an output location which does not contain data. - - ```powershell - cd "c:\Program Files\Azure\StorageSyncAgent" - Import-Module .\afsdiag.ps1 - Debug-AFS -OutputDirectory C:\output -KernelModeTraceLevel Verbose -UserModeTraceLevel Verbose - ``` --2. Reproduce the issue. When you're finished, enter **D**. -3. A .zip file that contains logs and trace files is saved to the output directory that you specified. 
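Steps 2 and 3 of the general troubleshooting checklist above can also be run as a quick script. The following is a minimal sketch; run it from an elevated PowerShell session because `fltmc` requires elevation.

```powershell
# Step 2: verify the Storage Sync Agent service (FileSyncSvc) is running.
Get-Service -Name FileSyncSvc | Select-Object Name, Status

# Step 3: verify the Azure File Sync filter drivers (StorageSync.sys and StorageSyncGuard.sys) are loaded.
fltmc filters | Select-String -Pattern "StorageSync"
```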
--## Common troubleshooting subject areas --For more detailed information, choose the subject area that you'd like to troubleshoot. --- [Agent installation and server registration issues](file-sync-troubleshoot-installation.md)-- [Sync group management (including cloud endpoint and server endpoint creation)](file-sync-troubleshoot-sync-group-management.md)-- [Sync errors](file-sync-troubleshoot-sync-errors.md)-- [Cloud tiering issues](file-sync-troubleshoot-cloud-tiering.md)--Some issues can be related to more than one subject area. --## See also -- [Monitor Azure File Sync](file-sync-monitoring.md)-- [Troubleshoot Azure Files](../files/files-troubleshoot.md)-- [Troubleshoot Azure Files performance issues](../files/files-troubleshoot-performance.md) |
storage | Files Nfs Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md | Azure Files offers two industry-standard file system protocols for mounting Azur This article covers NFS Azure file shares. For information about SMB Azure file shares, see [SMB file shares in Azure Files](files-smb-protocol.md). > [!IMPORTANT]-> NFS Azure file shares aren't supported for Windows. Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](files-troubleshoot-linux-nfs.md) for a list of known issues. NFS access control lists (ACLs) aren't supported. +> NFS Azure file shares aren't supported for Windows. Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json) for a list of known issues. NFS access control lists (ACLs) aren't supported. ## Common scenarios NFS file shares are often used in the following scenarios: NFS Azure file shares are only offered on premium file shares, which store data ## Workloads > [!IMPORTANT]-> Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](files-troubleshoot-linux-nfs.md) for a list of known issues. +> Before using NFS Azure file shares in production, see [Troubleshoot NFS Azure file shares](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json) for a list of known issues. NFS has been validated to work well with workloads such as SAP application layer, database backups, database replication, messaging queues, home directories for general purpose file servers, and content repositories for application workloads. |
storage | Files Remove Smb1 Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md | See these links for more information about Azure Files: - [Planning for an Azure Files deployment](storage-files-planning.md) - [Use Azure Files with Linux](storage-how-to-use-files-linux.md)-- [Troubleshoot SMB issues on Linux](files-troubleshoot-linux-smb.md)-- [Troubleshoot NFS issues on Linux](files-troubleshoot-linux-nfs.md)+- [Troubleshoot SMB issues on Linux](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-smb?toc=/azure/storage/files/toc.json) +- [Troubleshoot NFS issues on Linux](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json) |
storage | Files Troubleshoot Create Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-create-alerts.md | - Title: Azure Files performance troubleshooting - creating alerts -description: Troubleshoot performance issues with SMB Azure file shares by receiving alerts if a share is being throttled or is about to be throttled. --- Previously updated : 02/23/2023---#Customer intent: As a system admin, I want to troubleshoot performance issues with Azure file shares to improve performance for applications and users. --# Troubleshoot Azure Files by creating alerts --This article explains how to create and receive alerts if an Azure file share is being throttled or is about to be throttled. Requests are throttled when the I/O operations per second (IOPS), ingress, or egress limits for a file share are reached. --> [!IMPORTANT] -> For standard storage accounts with large file shares (LFS) enabled, throttling occurs at the account level. For premium files shares and standard file shares without LFS enabled, throttling occurs at the share level. --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | --## Create an alert if a file share is being throttled --1. Go to your **storage account** in the **Azure portal**. -2. In the **Monitoring** section, click **Alerts**, and then click **+ New alert rule**. -3. Click **Edit resource**, select the **File resource type** for the storage account and then click **Done**. For example, if the storage account name is `contoso`, select the `contoso/file` resource. -4. Click **Add condition** to add a condition. -5. You'll see a list of signals supported for the storage account, select the **Transactions** metric. -6. On the **Configure signal logic** blade, click the **Dimension name** drop-down and select **Response type**. -7. Click the **Dimension values** drop-down and select the appropriate response types for your file share. -- For standard file shares that don't have large file shares enabled, select the following response types (requests are throttled at the share level): -- - SuccessWithThrottling - - SuccessWithShareIopsThrottling - - ClientShareIopsThrottlingError -- For standard file shares that have large file shares enabled, select the following response types (requests are throttled at the storage account level): -- - ClientAccountRequestThrottlingError - - ClientAccountBandwidthThrottlingError -- For premium file shares, select the following response types (requests are throttled at the share level): -- - SuccessWithShareEgressThrottling - - SuccessWithShareIngressThrottling - - SuccessWithShareIopsThrottling - - ClientShareEgressThrottlingError - - ClientShareIngressThrottlingError - - ClientShareIopsThrottlingError -- > [!NOTE] - > If the response types aren't listed in the **Dimension values** drop-down, this means the resource hasn't been throttled. To add the dimension values, next to the **Dimension values** drop-down list, select **Add custom value**, enter the response type (for example, **SuccessWithThrottling**), select **OK**, and then repeat these steps to add all applicable response types for your file share. --8. For **premium file shares**, click the **Dimension name** drop-down and select **File Share**. For **standard file shares**, skip to **step #10**. 
-- > [!NOTE] - > If the file share is a standard file share, the **File Share** dimension won't list the file share(s) because per-share metrics aren't available for standard file shares. Throttling alerts for standard file shares will be triggered if any file share within the storage account is throttled, and the alert won't identify which file share was throttled. Because per-share metrics aren't available for standard file shares, we recommend having only one file share per storage account. --9. Select the **Dimension values** drop-down and select the file share(s) that you want to alert on. -10. Define the **alert parameters** (threshold value, operator, aggregation granularity and frequency of evaluation) and select **Done**. -- > [!TIP] - > If you're using a static threshold, the metric chart can help determine a reasonable threshold value if the file share is currently being throttled. If you're using a dynamic threshold, the metric chart will display the calculated thresholds based on recent data. --11. Select **Add action groups** to add an **action group** (email, SMS, etc.) to the alert either by selecting an existing action group or creating a new action group. -12. Fill in the **Alert details** like **Alert rule name**, **Description**, and **Severity**. -13. Select **Create alert rule** to create the alert. --## Create alert if a premium file share is close to being throttled --1. In the Azure portal, go to your storage account. -2. In the **Monitoring** section, select **Alerts**, and then select **New alert rule**. -3. Select **Edit resource**, select the **File resource type** for the storage account, and then select **Done**. For example, if the storage account name is *contoso*, select the contoso/file resource. -4. Select **Select Condition** to add a condition. -5. In the list of signals that are supported for the storage account, select the **Egress** metric. -- > [!NOTE] - > You have to create three separate alerts to be alerted when the ingress, egress, or transaction values exceed the thresholds you set. This is because an alert is triggered only when all conditions are met. For example, if you put all the conditions in one alert, you would be alerted only if ingress, egress, and transactions exceed their threshold amounts. --6. Scroll down. In the **Dimension name** drop-down list, select **File Share**. -7. In the **Dimension values** drop-down list, select the file share or shares that you want to alert on. -8. Define the alert parameters by selecting values in the **Operator**, **Threshold value**, **Aggregation granularity**, and **Frequency of evaluation** drop-down lists, and then select **Done**. -- Egress, ingress, and transactions metrics are expressed per minute, though you're provisioned egress, ingress, and I/O per second. Therefore, for example, if your provisioned egress is 90 MiB/s and you want your threshold to be 80 percent of provisioned egress, select the following alert parameters: - - For **Threshold value**: *75497472* - - For **Operator**: *greater than or equal to* - - For **Aggregation type**: *average* - - Depending on how noisy you want your alert to be, you can also select values for **Aggregation granularity** and **Frequency of evaluation**. For example, if you want your alert to look at the average ingress over the time period of 1 hour, and you want your alert rule to be run every hour, select the following: - - For **Aggregation granularity**: *1 hour* - - For **Frequency of evaluation**: *1 hour* --9. 
Select **Add action groups**, and then add an action group (for example, email or SMS) to the alert either by selecting an existing action group or by creating a new one. -10. Enter the alert details, such as **Alert rule name**, **Description**, and **Severity**. -11. Select **Create alert rule** to create the alert. -- > [!NOTE] - > - To be notified that your premium file share is close to being throttled *because of provisioned ingress*, follow the preceding instructions, but with the following change: - > - In step 5, select the **Ingress** metric instead of **Egress**. - > - > - To be notified that your premium file share is close to being throttled *because of provisioned IOPS*, follow the preceding instructions, but with the following changes: - > - In step 5, select the **Transactions** metric instead of **Egress**. - > - In step 10, the only option for **Aggregation type** is *Total*. Therefore, the threshold value depends on your selected aggregation granularity. For example, if you want your threshold to be 80 percent of provisioned baseline IOPS and you select *1 hour* for **Aggregation granularity**, your **Threshold value** would be your baseline IOPS (in bytes) × 0.8 × 3600. --## See also -- [Troubleshoot Azure Files](files-troubleshoot.md)-- [Troubleshoot Azure Files Performance](files-troubleshoot-performance.md)-- [Understand Azure Files performance](understand-performance.md)-- [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md)- |
storage | Files Troubleshoot Linux Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-linux-nfs.md | - Title: Troubleshoot NFS file shares - Azure Files -description: Troubleshoot issues with NFS Azure file shares. --- Previously updated : 02/21/2023------# Troubleshoot NFS Azure file shares --This article lists common issues related to NFS Azure file shares and provides potential causes and workarounds. --> [!IMPORTANT] -> The content of this article only applies to NFS shares. To troubleshoot SMB issues in Linux, see [Troubleshoot Azure Files problems in Linux (SMB)](files-troubleshoot-linux-smb.md). NFS Azure file shares aren't supported for Windows. --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | --## chgrp "filename" failed: Invalid argument (22) --### Cause 1: idmapping isn't disabled -Because Azure Files disallows alphanumeric UID/GID, you must disable idmapping. --### Cause 2: idmapping was disabled, but got re-enabled after encountering a bad file/dir name -Even if you correctly disable idmapping, it can be automatically re-enabled in some cases. For example, when Azure Files encounters a bad file name, it sends back an error. Upon seeing this error code, an NFS 4.1 Linux client decides to re-enable idmapping, and sends future requests with alphanumeric UID/GID. For a list of unsupported characters on Azure Files, see this [article](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata). Colon is one of the unsupported characters. --### Workaround -Make sure you've disabled idmapping and that nothing is re-enabling it. Then perform the following steps: --- Unmount the share-- Disable idmapping with -```bash -echo Y | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping -``` -- Mount the share back-- If running rsync, run rsync with the "--numeric-ids" argument from a directory that doesn't have a bad dir/file name.--## Unable to create an NFS share --### Cause 1: Unsupported storage account settings --NFS is only available on storage accounts with the following configuration: --- Tier - Premium-- Account Kind - FileStorage-- Regions - [List of supported regions](storage-files-how-to-create-nfs-shares.md?tabs=azure-portal#regional-availability)--#### Solution --Follow the instructions in [How to create an NFS share](storage-files-how-to-create-nfs-shares.md). --## Can't connect to or mount an NFS Azure file share --### Cause 1: Request originates from a client in an untrusted network/untrusted IP --Unlike SMB, NFS doesn't have user-based authentication. The authentication for a share is based on your network security rule configuration. To ensure that clients only establish secure connections to your NFS share, you must use either the service endpoint or private endpoints. To access shares from on-premises in addition to private endpoints, you must set up a VPN or ExpressRoute connection. IPs added to the storage account's allowlist for the firewall are ignored. You must use one of the following methods to set up access to an NFS share: ---- [Service endpoint](storage-files-networking-endpoints.md#restrict-public-endpoint-access)- - Accessed by the public endpoint. - - Only available in the same region. - - You can't use VNet peering for share access. - - You must add each virtual network or subnet individually to the allowlist.
- - For on-premises access, you can use service endpoints with ExpressRoute, point-to-site, and site-to-site VPNs. We recommend using a private endpoint because it's more secure. --The following diagram depicts connectivity using public endpoints. ---- [Private endpoint](storage-files-networking-endpoints.md#create-a-private-endpoint)- - Access is more secure than the service endpoint. - - Access to NFS share via private link is available from within and outside the storage account's Azure region (cross-region, on-premises). - - Virtual network peering with virtual networks hosted in the private endpoint give the NFS share access to the clients in peered virtual networks. - - You can use private endpoints with ExpressRoute, point-to-site VPNs, and site-to-site VPNs. ---### Cause 2: Secure transfer required is enabled --NFS Azure file shares don't currently support double encryption. Azure provides a layer of encryption for all data in transit between Azure datacenters using MACSec. You can only access NFS shares from trusted virtual networks and over VPN tunnels. No extra transport layer encryption is available on NFS shares. --#### Solution --Disable **secure transfer required** in your storage account's configuration blade. ---### Cause 3: nfs-utils, nfs-client or nfs-common package isn't installed -Before running the `mount` command, install the nfs-utils, nfs-client or the nfs-common package. --To check if the NFS package is installed, run: --# [RHEL](#tab/RHEL) --Same commands on this section apply for CentOS and Oracle Linux. --```bash -sudo rpm -qa | grep nfs-utils -``` -# [SLES](#tab/SLES) --```bash -sudo rpm -qa | grep nfs-client -``` -# [Ubuntu](#tab/Ubuntu) --Same commands on this section apply for Debian. - -```bash -sudo dpkg -l | grep nfs-common -``` ---#### Solution --If the package isn't installed, install the package using your distro-specific command. --# [RHEL](#tab/RHEL) --Same commands on this section apply for CentOS and Oracle Linux. --Os Version 7.X --```bash -sudo yum install nfs-utils -``` -OS Version 8.X or 9.X --```bash -sudo dnf install nfs-utils -``` --# [SLES](#tab/SLES) --```bash -sudo zypper install nfs-client -``` --# [Ubuntu](#tab/Ubuntu) --Same commands on this section apply for Debian. --```bash -sudo apt update -sudo apt install nfs-common -``` ---### Cause 4: Firewall blocking port 2049 --The NFS protocol communicates to its server over port 2049. Make sure that this port is open to the storage account (the NFS server). --#### Solution --Verify that port 2049 is open on your client by running the following command. If the port isn't open, open it. --```bash -sudo nc -zv <storageaccountnamehere>.file.core.windows.net 2049 -``` --## ls hangs for large directory enumeration on some kernels --### Cause: A bug was introduced in Linux kernel v5.11 and was fixed in v5.12.5. -Some kernel versions have a bug that causes directory listings to result in an endless READDIR sequence. Small directories where all entries can be shipped in one call don't have this problem. -The bug was introduced in Linux kernel v5.11 and was fixed in v5.12.5. So anything in between has the bug. RHEL 8.4 has this kernel version. --#### Workaround: Downgrade or upgrade the kernel -Downgrading or upgrading the kernel to anything outside the affected kernel should resolve the issue. --## Need help? -If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. 
--## See also -- [Troubleshoot Azure Files](files-troubleshoot.md)-- [Troubleshoot Azure Files performance](files-troubleshoot-performance.md)-- [Troubleshoot Azure Files connectivity (SMB)](files-troubleshoot-smb-connectivity.md)-- [Troubleshoot Azure Files authentication and authorization (SMB)](files-troubleshoot-smb-authentication.md)-- [Troubleshoot Azure Files general SMB issues on Linux](files-troubleshoot-linux-smb.md) |
storage | Files Troubleshoot Linux Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-linux-smb.md | - Title: Troubleshoot Azure Files issues in Linux (SMB) -description: Troubleshooting Azure Files issues in Linux. See general issues related to SMB Azure file shares when you connect from Linux clients, and see possible resolutions. --- Previously updated : 02/21/2023----# Troubleshoot Azure Files issues in Linux (SMB) --This article lists common issues that can occur when using SMB Azure file shares with Linux clients. It also provides possible causes and resolutions for these problems. --You can use [AzFileDiagnostics](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Linux) to automate symptom detection and ensure that the Linux client has the correct prerequisites. It helps set up your environment to get optimal performance. You can also find this information in the [Azure file shares troubleshooter](https://support.microsoft.com/help/4022301/troubleshooter-for-azure-files-shares). --> [!IMPORTANT] -> This article only applies to SMB shares. For details on NFS shares, see [Troubleshoot NFS Azure file shares](files-troubleshoot-linux-nfs.md). --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | ---<a id="timestampslost"></a> -## Time stamps were lost in copying files from Windows to Linux --On Linux/Unix platforms, the **cp -p** command fails if different users own file 1 and file 2. --### Cause --The force flag **f** in COPYFILE results in executing **cp -p -f** on Unix. This command also fails to preserve the time stamp of the file that you don't own. --### Workaround --Use the storage account user for copying the files: --- `str_acc_name=[storage account name]`-- `sudo useradd $str_acc_name`-- `sudo passwd $str_acc_name`-- `su $str_acc_name`-- `cp -p filename.txt /share`--## ls: cannot access '<path>': Input/output error --When you try to list files in an Azure file share by using the ls command, the command hangs when listing files. You get the following error: --**ls: cannot access'<path>': Input/output error** ---### Solution -Upgrade the Linux kernel to the following versions that have a fix for this problem: --- 4.4.87+-- 4.9.48+-- 4.12.11+-- All versions that are greater than or equal to 4.13--## Can't create symbolic links - ln: failed to create symbolic link 't': Operation not supported --### Cause -By default, mounting Azure file shares on Linux by using SMB doesn't enable support for symbolic links (symlinks). You might see an error like this: --```bash -sudo ln -s linked -n t -``` -```output -ln: failed to create symbolic link 't': Operation not supported -``` --### Solution -The Linux SMB client doesn't support creating Windows-style symbolic links over the SMB 2 or 3 protocol. Currently, the Linux client supports another style of symbolic links called [Minshall+French symlinks](https://wiki.samba.org/index.php/UNIX_Extensions#Minshall.2BFrench_symlinks) for both create and follow operations. Customers who need symbolic links can use the "mfsymlinks" mount option. We recommend "mfsymlinks" because it's also the format that Macs use. 
--To use symlinks, add the following to the end of your SMB mount command: --```bash -,mfsymlinks -``` --So the command looks something like: --```bash -sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<share-name> <mount-point> -o vers=<smb-version>,username=<storage-account-name>,password=<storage-account-key>,dir_mode=0777,file_mode=0777,serverino,mfsymlinks -``` --You can then create symlinks as suggested on the [wiki](https://wiki.samba.org/index.php/UNIX_Extensions#Storing_symlinks_on_Windows_servers). --## Unable to access folders or files whose name has a space or a dot at the end --You can't access folders or files from the Azure file share while mounted on Linux. Commands like du and ls and/or third-party applications might fail with a "No such file or directory" error while accessing the share; however, you're able to upload files to these folders via the Azure portal. --### Cause --The folders or files were uploaded from a system that encodes the characters at the end of the name to a different character. Files uploaded from a Macintosh computer may have a "0xF028" or "0xF029" character instead of 0x20 (space) or 0x2E (dot). --### Solution --Use the mapchars option on the share when mounting the share on Linux: --instead of: --```bash -sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino -``` --use: --```bash -sudo mount -t cifs $smbPath $mntPath -o vers=3.0,username=$storageAccountName,password=$storageAccountKey,serverino,mapchars -``` --<a id="dns-account-migration"></a> -## DNS issues with live migration of Azure storage accounts --File I/Os on the mounted filesystem start giving "Host is down" or "Permission denied" errors. Linux dmesg logs on the client show repeated errors like: --```output -Status code returned 0xc000006d STATUS_LOGON_FAILURE -cifs_setup_session: 2 callbacks suppressed -CIFS VFS: \\contoso.file.core.windows.net Send error in SessSetup = -13 -``` - -You'll also see that the server FQDN now resolves to a different IP address than what it's currently connected to. --### Cause --For capacity load balancing purposes, storage accounts are sometimes live-migrated from one storage cluster to another. Account migration triggers Azure Files traffic to be redirected from the source cluster to the destination cluster by updating the DNS mappings to point to the destination cluster. This blocks all traffic to the source cluster from that account. It's expected that the SMB client picks up the DNS updates and redirects further traffic to the destination cluster. However, due to a bug in the Linux SMB kernel client, this redirection doesn't take effect. As a result, the data traffic keeps going to the source cluster, which has stopped serving this account post migration. --### Workaround --You can mitigate this issue by rebooting the client OS, but you might run into the issue again if you don't upgrade your client OS to a Linux distro version with account migration support. Note that umount and remount of the share may appear to fix the issue temporarily. --### Solution --For a permanent fix, upgrade your client OS to a Linux distro version with account migration support. Several fixes for the Linux SMB kernel client have been submitted to the mainline Linux kernel. Kernel version 5.15+ and Keyutils-1.6.2+ have the fixes.
Some distros have backported these fixes, and you can check if the following fixes exist in the distro version you're using: --[cifs: On cifs_reconnect, resolve the hostname again](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4e456b30f78c429b183db420e23b26cde7e03a78) --[cifs: use the expiry output of dns_query to schedule next resolution](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=506c1da44fee32ba1d3a70413289ad58c772bba6) --[cifs: set a minimum of 120s for next dns resolution](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4ac0536f8874a903a72bddc57eb88db774261e3a) --[cifs: To match file servers, make sure the server hostname matches](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=7be3248f313930ff3d3436d4e9ddbe9fccc1f541) --[cifs: fix memory leak of smb3_fs_context_dup::server_hostname](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=869da64d071142d4ed562a3e909deb18e4e72c4e) --[dns: Apply a default TTL to records obtained from getaddrinfo()](https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/keyutils.git/commit/?id=75e7568dc516db698093b33ea273e1b4a30b70be) --## Need help? --If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. --## See also -- [Troubleshoot Azure Files](files-troubleshoot.md)-- [Troubleshoot Azure Files performance](files-troubleshoot-performance.md)-- [Troubleshoot Azure Files connectivity (SMB)](files-troubleshoot-smb-connectivity.md)-- [Troubleshoot Azure Files authentication and authorization (SMB)](files-troubleshoot-smb-authentication.md)-- [Troubleshoot Azure Files general NFS issues on Linux](files-troubleshoot-linux-nfs.md)-- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md) |
storage | Files Troubleshoot Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-performance.md | - Title: Azure Files performance troubleshooting guide -description: Troubleshoot performance issues with Azure file shares and discover potential causes and associated workarounds for these problems. --- Previously updated : 02/21/2023---#Customer intent: As a system admin, I want to troubleshoot performance issues with Azure file shares to improve performance for applications and users. --# Troubleshoot Azure Files performance issues --This article lists common problems related to Azure file share performance, and provides potential causes and workarounds. To get the most value from this troubleshooting guide, we recommend first reading [Understand Azure Files performance](understand-performance.md). --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | --## General performance troubleshooting --First, rule out some common reasons why you might be having performance problems. --### You're running an old operating system --If your client virtual machine (VM) is running Windows 8.1 or Windows Server 2012 R2, or an older Linux distro or kernel, you might experience performance issues when accessing Azure file shares. Either upgrade your client OS or apply the fixes below. --# [Windows](#tab/windows) --### Considerations for Windows 8.1 and Windows Server 2012 R2 --Clients that are running Windows 8.1 or Windows Server 2012 R2 might see higher than expected latency when accessing Azure file shares for I/O-intensive workloads. Make sure that the [KB3114025](https://support.microsoft.com/help/3114025) hotfix is installed. This hotfix improves the performance of create and close handles. --You can run the following script to check whether the hotfix has been installed: --`reg query HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\Policies` --If the hotfix is installed, the following output is displayed: --`HKEY_Local_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\Policies {96c345ef-3cac-477b-8fcd-bea1a564241c} REG_DWORD 0x1` --> [!Note] -> Windows Server 2012 R2 images in Azure Marketplace have hotfix KB3114025 installed by default, starting in December 2015. ---# [Linux](#tab/linux) --### Low IOPS on CentOS Linux or RHEL --#### Cause --An I/O depth of greater than 1 isn't supported on older versions of CentOS Linux or RHEL. --#### Workaround --- Upgrade to CentOS Linux 8.6+ or RHEL 8.6+.-- Change to Ubuntu.-- For other Linux VMs, upgrade the kernel to 5.0 or later.-----### Your workload is being throttled --Requests are throttled when the I/O operations per second (IOPS), ingress, or egress limits for a file share are reached. For example, if the client exceeds baseline IOPS, it will get throttled by the Azure Files service. Throttling can result in the client experiencing poor performance. --To understand the limits for standard and premium file shares, see [File share and file scale targets](storage-files-scale-targets.md#azure-file-share-scale-targets). Depending on your workload, throttling can often be avoided by moving from standard to premium Azure file shares. 
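If you prefer to check for throttling from the command line rather than the portal steps that follow, the following Azure PowerShell sketch queries the Transactions metric for one of the throttling response types. It assumes the Az.Monitor module is installed and uses placeholder values for the subscription, resource group, and storage account; the full list of response types is shown under Cause 1 below.

```powershell
# Placeholder resource ID for the file service of the storage account you want to inspect.
$fileServiceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>/fileServices/default"

# Filter the Transactions metric on a throttling response type (see Cause 1 below for the full list).
$filter = New-AzMetricFilter -Dimension ResponseType -Operator eq -Value "SuccessWithShareIopsThrottling"

Get-AzMetric -ResourceId $fileServiceId -MetricName "Transactions" `
    -StartTime (Get-Date).AddDays(-1) -EndTime (Get-Date) `
    -TimeGrain 01:00:00 -MetricFilter $filter -AggregationType Total
```

A nonzero total for any throttling response type indicates that requests in that interval were throttled.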
--To learn more about how throttling at the share level or storage account level can cause high latency, low throughput, and general performance issues, see [Share or storage account is being throttled](#cause-1-share-or-storage-account-is-being-throttled). ---## High latency, low throughput, or low IOPS --### Cause 1: Share or storage account is being throttled --To confirm whether your share or storage account is being throttled, you can access and use Azure metrics in the portal. You can also create alerts that will notify you if a share is being throttled or is about to be throttled. See [Troubleshoot Azure Files by creating alerts](files-troubleshoot-create-alerts.md). --> [!IMPORTANT] -> For standard storage accounts with large file shares (LFS) enabled, throttling occurs at the account level. For premium files shares and standard file shares without LFS enabled, throttling occurs at the share level. --1. In the Azure portal, go to your storage account. --1. On the left pane, under **Monitoring**, select **Metrics**. --1. Select **File** as the metric namespace for your storage account scope. --1. Select **Transactions** as the metric. --1. Add a filter for **Response type**, and then check to see whether any requests have been throttled. -- For standard file shares that don't have large file shares enabled, the following response types are logged if a request is throttled at the share level: -- - SuccessWithThrottling - - SuccessWithShareIopsThrottling - - ClientShareIopsThrottlingError -- For standard file shares that have large file shares enabled, the following response types are logged if a request is throttled at the client account level: -- - ClientAccountRequestThrottlingError - - ClientAccountBandwidthThrottlingError -- For premium file shares, the following response types are logged if a request is throttled at the share level: -- - SuccessWithShareEgressThrottling - - SuccessWithShareIngressThrottling - - SuccessWithShareIopsThrottling - - ClientShareEgressThrottlingError - - ClientShareIngressThrottlingError - - ClientShareIopsThrottlingError -- If a throttled request was authenticated with Kerberos, you might see a prefix indicating the authentication protocol, such as: -- - KerberosSuccessWithShareEgressThrottling - - KerberosSuccessWithShareIngressThrottling -- To learn more about each response type, see [Metric dimensions](storage-files-monitoring-reference.md#metrics-dimensions). --  --#### Solution --- If you're using a standard file share, [enable large file shares](storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account) on your storage account and [increase the size of file share quota to take advantage of the large file share support](storage-how-to-create-file-share.md#expand-existing-file-shares). Large file shares support great IOPS and bandwidth limits; see [Azure Files scalability and performance targets](storage-files-scale-targets.md) for details.-- If you're using a premium file share, increase the provisioned file share size to increase the IOPS limit. To learn more, see the [Understanding provisioning for premium file shares](understanding-billing.md#provisioned-model).--### Cause 2: Metadata or namespace heavy workload --If the majority of your requests are metadata-centric (such as `createfile`, `openfile`, `closefile`, `queryinfo`, or `querydirectory`), the latency will be worse than that of read/write operations. 
--To determine whether most of your requests are metadata-centric, start by following steps 1-4 as previously outlined in Cause 1. For step 5, instead of adding a filter for **Response type**, add a property filter for **API name**. -- --#### Workarounds --- Check to see whether the application can be modified to reduce the number of metadata operations.-- Separate the file share into multiple file shares within the same storage account.-- Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers and no writers. Because the file system is owned by the client rather than Azure Files, this allows metadata operations to be local. The setup offers performance similar to that of local directly attached storage. However, because the data is in a VHD, it can't be accessed via any other means other than the SMB mount, such as REST API or through the Azure portal.- 1. From the machine which needs to access the Azure file share, mount the file share using the storage account key and map it to an available network drive (for example, Z:). - 1. Go to **Disk Management** and select **Action > Create VHD**. - 1. Set **Location** to the network drive that the Azure file share is mapped to, set **Virtual hard disk size** as needed, and select **Fixed size**. - 1. Select **OK**. Once the VHD creation is complete, it will automatically mount, and a new unallocated disk will appear. - 1. Right-click the new unknown disk and select **Initialize Disk**. - 1. Right-click the unallocated area and create a **New Simple Volume**. - 1. You should see a new drive letter appear in **Disk Management** representing this VHD with read/write access (for example, E:). In **File Explorer**, you should see the new VHD on the mapped Azure file share's network drive (Z: in this example). To be clear, there should be two drive letters present: the standard Azure file share network mapping on Z:, and the VHD mapping on the E: drive. - 1. There should be much better performance on heavy metadata operations against files on the VHD mapped drive (E:) versus the Azure file share mapped drive (Z:). If desired, it should be possible to disconnect the mapped network drive (Z:) and still access the mounted VHD drive (E:). -- - To mount a VHD on a Windows client, you can also use the [`Mount-DiskImage`](/powershell/module/storage/mount-diskimage) PowerShell cmdlet. - - To mount a VHD on Linux, consult the documentation for your Linux distribution. [Here's an example](https://man7.org/linux/man-pages/man5/nfs.5.html). --### Cause 3: Single-threaded application --If the application that you're using is single-threaded, this setup can result in significantly lower IOPS throughput than the maximum possible throughput, depending on your provisioned share size. --#### Solution --- Increase application parallelism by increasing the number of threads.-- Switch to applications where parallelism is possible. For example, for copy operations, you could use AzCopy or RoboCopy from Windows clients or the **parallel** command from Linux clients.--### Cause 4: Number of SMB channels exceeds four --If you're using SMB MultiChannel and the number of channels you have exceeds four, this will result in poor performance. To determine if your connection count exceeds four, use the PowerShell cmdlet `get-SmbClientConfiguration` to view the current connection count settings. 
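For example, a minimal check (a sketch) from a PowerShell session on the client:

```powershell
# Inspect the SMB client multichannel settings; the Solution below caps the per-NIC
# connection count so that the total number of channels stays at four or fewer.
Get-SmbClientConfiguration | Select-Object EnableMultiChannel, ConnectionCountPerRssNetworkInterface
```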
--#### Solution --Set the Windows per NIC setting for SMB so that the total channels don't exceed four. For example, if you have two NICs, you can set the maximum per NIC to two using the following PowerShell cmdlet: `Set-SmbClientConfiguration -ConnectionCountPerRssNetworkInterface 2`. ---## Very high latency for requests --### Cause --The client VM could be located in a different region than the file share. Other reason for high latency could be due to the latency caused by the client or the network. --### Solution --- Run the application from a VM that's located in the same region as the file share.-- For your storage account, review transaction metrics **SuccessE2ELatency** and **SuccessServerLatency** via **Azure Monitor** in Azure portal. A high difference between SuccessE2ELatency and SuccessServerLatency metrics values is an indication of latency that is likely caused by the network or the client. See [Transaction metrics](storage-files-monitoring-reference.md#transaction-metrics) in Azure Files Monitoring data reference.--## Client unable to achieve maximum throughput supported by the network --### Cause -One potential cause is a lack of SMB multi-channel support for standard file shares. Currently, Azure Files supports only single channel for standard file shares, so there's only one connection from the client VM to the server. This single connection is pegged to a single core on the client VM, so the maximum throughput achievable from a VM is bound by a single core. --### Workaround --- For premium file shares, [Enable SMB Multichannel](files-smb-protocol.md#smb-multichannel).-- Obtaining a VM with a bigger core might help improve throughput.-- Running the client application from multiple VMs will increase throughput.-- Use REST APIs where possible.-- For NFS Azure file shares, `nconnect` is available. See [Improve NFS Azure file share performance with nconnect](nfs-nconnect-performance.md).---<a id="slowperformance"></a> -## Slow performance on an Azure file share mounted on a Linux VM --### Cause 1: Caching --One possible cause of slow performance is disabled caching. Caching can be useful if you are accessing a file repeatedly, otherwise, it can be an overhead. Check if you are using the cache before disabling it. --### Solution for cause 1 --To check whether caching is disabled, look for the **cache=** entry. --**Cache=none** indicates that caching is disabled. Remount the share by using the default mount command or by explicitly adding the **cache=strict** option to the mount command to ensure that default caching or "strict" caching mode is enabled. --In some scenarios, the **serverino** mount option can cause the **ls** command to run stat against every directory entry. This behavior results in performance degradation when you're listing a large directory. You can check the mount options in your **/etc/fstab** entry: --`//azureuser.file.core.windows.net/cifs /cifs cifs vers=2.1,serverino,username=xxx,password=xxx,dir_mode=0777,file_mode=0777` --You can also check whether the correct options are being used by running the **sudo mount | grep cifs** command and checking its output. 
The following is example output: --``` -//azureuser.file.core.windows.net/cifs on /cifs type cifs (rw,relatime,vers=2.1,sec=ntlmssp,cache=strict,username=xxx,domain=X,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.10.1,file_mode=0777, dir_mode=0777,persistenthandles,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,actimeo=1) -``` --If the **cache=strict** or **serverino** option is not present, unmount and mount Azure Files again by running the mount command from the [documentation](./storage-how-to-use-files-linux.md). Then, recheck that the **/etc/fstab** entry has the correct options. --### Cause 2: Throttling --It's possible you're experiencing throttling and your requests are being sent to a queue. You can verify this by leveraging [Azure Storage metrics in Azure Monitor](../blobs/monitor-blob-storage.md). You can also create alerts that will notify you if a share is being throttled or is about to be throttled. See [Troubleshoot Azure Files by creating alerts](files-troubleshoot-create-alerts.md). --### Solution for cause 2 --Ensure your app is within the [Azure Files scale targets](storage-files-scale-targets.md#azure-files-scale-targets). If you're using standard Azure file shares, consider switching to premium. ---## Throughput on Linux clients is lower than that of Windows clients --### Cause --This is a known issue with the implementation of the SMB client on Linux. --### Workaround --- Spread the load across multiple VMs.-- On the same VM, use multiple mount points with a `nosharesock` option, and spread the load across these mount points.-- On Linux, try mounting with a `nostrictsync` option to avoid forcing an SMB flush on every `fsync` call. For Azure Files, this option doesn't interfere with data consistency, but it might result in stale file metadata on directory listings (`ls -l` command). Directly querying file metadata by using the `stat` command will return the most up-to-date file metadata.--## High latencies for metadata-heavy workloads involving extensive open/close operations --### Cause --Lack of support for directory leases. --### Workaround --- If possible, avoid using an excessive opening/closing handle on the same directory within a short period of time.-- For Linux VMs, increase the directory entry cache timeout by specifying `actimeo=<sec>` as a mount option. By default, the timeout is 1 second, so a larger value, such as 30 seconds, might help.-- For CentOS Linux or Red Hat Enterprise Linux (RHEL) VMs, upgrade the system to CentOS Linux 8.2 or RHEL 8.2. For other Linux distros, upgrade the kernel to 5.0 or later.---## Slow enumeration of files and folders --### Cause --This problem can occur if there isn't enough cache on the client machine for large directories. --### Solution --To resolve this problem, adjust the **DirectoryCacheEntrySizeMax** registry value to allow caching of larger directory listings in the client machine: --- Location: `HKLM\System\CCS\Services\Lanmanworkstation\Parameters`-- Value name: `DirectoryCacheEntrySizeMax` -- Value type: `DWORD` - -For example, you can set it to `0x100000` and see if performance improves. ---## Slow file copying to and from Azure file shares --You might see slow performance when you try to transfer files to the Azure Files service. If you don't have a specific minimum I/O size requirement, we recommend that you use 1 MiB as the I/O size for optimal performance. 
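The OS-specific guidance below recommends AzCopy or Robocopy on Windows. As an illustrative sketch only (the paths, drive letter, and thread count are placeholders), a multi-threaded Robocopy run against a mounted Azure file share might look like this:

```powershell
# Copy C:\data to an Azure file share mounted as Z:, using 16 threads to parallelize the copy.
# /MIR mirrors the tree (it also deletes destination files that no longer exist at the source);
# /R and /W keep retries short so transient errors don't stall the job.
robocopy "C:\data" "Z:\data" /MIR /MT:16 /R:2 /W:1
```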
--# [Windows](#tab/windows) --### Slow file copying to and from Azure Files in Windows --- If you know the final size of a file that you are extending with writes, and your software doesn't have compatibility problems when the unwritten tail on the file contains zeros, then set the file size in advance instead of making every write an extending write.-- Use the right copy method:- - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json) for any transfer between two file shares. - - Use [Robocopy](storage-how-to-create-file-share.md) between file shares on an on-premises computer. --# [Linux](#tab/linux) --<a id="slowfilecopying"></a> -### Slow file copying to and from Azure Files in Linux --- Use the right copy method:- - Use [AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json) for any transfer between two file shares. - - Using cp or dd with parallel can improve copy speed; the number of threads depends on your use case and workload. The following examples use six: - - cp example (cp will use the default block size of the file system as the chunk size): `find * -type f | parallel --will-cite -j 6 cp {} /mntpremium/ &`. - - dd example (this command explicitly sets chunk size to 1 MiB): `find * -type f | parallel --will-cite -j 6 dd if={} of=/mnt/share/{} bs=1M` - - Open source third-party tools such as: - - [GNU Parallel](https://www.gnu.org/software/parallel/). - - [Fpart](https://github.com/martymac/fpart) - Sorts files and packs them into partitions. - - [Fpsync](https://github.com/martymac/fpart/blob/master/tools/fpsync) - Uses Fpart and a copy tool to spawn multiple instances to migrate data from src_dir to dst_url. - - [Multi](https://github.com/pkolano/mutil) - Multi-threaded cp and md5sum based on GNU coreutils. -- Setting the file size in advance, instead of making every write an extending write, helps improve copy speed in scenarios where the file size is known. If extending writes need to be avoided, you can set a destination file size with the `truncate --size <size> <file>` command. After that, the `dd if=<source> of=<target> bs=1M conv=notrunc` command will copy a source file without having to repeatedly update the size of the target file. For example, you can set the destination file size for every file you want to copy (assume a share is mounted under /mnt/share):- - `for i in `` find * -type f``; do truncate --size ``stat -c%s $i`` /mnt/share/$i; done` - - and then copy files without extending writes in parallel: `find * -type f | parallel -j6 dd if={} of=/mnt/share/{} bs=1M conv=notrunc` -----## Excessive DirectoryOpen/DirectoryClose calls --### Cause --If the number of **DirectoryOpen/DirectoryClose** calls is among the top API calls and you don't expect the client to make that many calls, the issue might be caused by the antivirus software that's installed on the Azure client VM. --### Workaround --- A fix for this issue is available in the [April Platform Update for Windows](https://support.microsoft.com/help/4052623/update-for-windows-defender-antimalware-platform).---## SMB Multichannel isn't being triggered --### Cause --Recent changes to SMB Multichannel config settings without a remount.
--### Solution - -- After any changes to Windows SMB client or account SMB Multichannel configuration settings, you have to unmount the share, wait for 60 seconds, and remount the share to trigger multichannel.-- For Windows client OS, generate I/O load with a high queue depth, such as QD=8 (for example, by copying a file), to trigger SMB Multichannel. For server OS, SMB Multichannel is triggered with QD=1, which means it's triggered as soon as you start any I/O to the share.--## Slow performance when unzipping files in SMB file shares -Depending on the exact compression method and unzip operation used, decompression operations may perform more slowly on an Azure file share than on your local disk. This is often because unzipping tools perform a large number of metadata operations while decompressing a compressed archive. For the best performance, we recommend copying the compressed archive from the Azure file share to your local disk, unzipping there, and then using a copy tool such as Robocopy (or AzCopy) to copy back to the Azure file share. Using a copy tool like Robocopy can compensate for the decreased performance of metadata operations in Azure Files relative to your local disk by using multiple threads to copy data in parallel. --## High latency on web sites hosted on file shares --### Cause --A high number of file change notifications on file shares can result in high latencies. This typically occurs with web sites hosted on file shares with a deeply nested directory structure. A typical scenario is an IIS-hosted web application where file change notification is set up for each directory in the default configuration. Each change ([ReadDirectoryChangesW](/windows/win32/api/winbase/nf-winbase-readdirectorychangesw)) on the share that the client is registered for pushes a change notification from the file service to the client, which takes system resources, and the issue worsens with the number of changes. This can cause share throttling and thus result in higher client-side latency. --To confirm, you can use Azure Metrics in the portal. --1. In the Azure portal, go to your storage account. -1. In the left menu, under Monitoring, select Metrics. -1. Select File as the metric namespace for your storage account scope. -1. Select Transactions as the metric. -1. Add a filter for ResponseType and check to see if any requests have a response code of SuccessWithThrottling (for SMB or NFS) or ClientThrottlingError (for REST). --### Solution --- If file change notification isn't used, disable file change notification (preferred).- - [Disable file change notification](https://support.microsoft.com/help/911272/fix-asp-net-2-0-connected-applications-on-a-web-site-may-appear-to-sto) by updating FCNMode. - - Update the IIS Worker Process (W3WP) polling interval to 0 by setting `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\W3SVC\Parameters\ConfigPollMilliSeconds` in your registry and restart the W3WP process. To learn about this setting, see [Common registry keys that are used by many parts of IIS](/troubleshoot/iis/use-registry-keys#registry-keys-that-apply-to-iis-worker-process-w3wp). -- Increase the file change notification polling interval to reduce the notification volume.- - Update the W3WP worker process polling interval to a higher value (for example, 10 or 30 minutes) based on your requirement.
Set `HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\W3SVC\Parameters\ConfigPollMilliSeconds` [in your registry](/troubleshoot/iis/use-registry-keys#registry-keys-that-apply-to-iis-worker-process-w3wp) and restart the W3WP process. -- If your web site's mapped physical directory has a nested directory structure, you can try to limit the scope of file change notification to reduce the notification volume. By default, IIS uses configuration from Web.config files in the physical directory to which the virtual directory is mapped, as well as in any child directories in that physical directory. If you don't want to use Web.config files in child directories, specify false for the allowSubDirConfig attribute on the virtual directory. More details can be found [here](/iis/get-started/planning-your-iis-architecture/understanding-sites-applications-and-virtual-directories-on-iis#virtual-directories). - - Set the IIS virtual directory "allowSubDirConfig" setting in Web.Config to *false* to exclude mapped physical child directories from the scope. ---## See also -- [Troubleshoot Azure Files](files-troubleshoot.md)-- [Troubleshoot Azure Files by creating alerts](files-troubleshoot-create-alerts.md)-- [Understand Azure Files performance](understand-performance.md)-- [Overview of alerts in Microsoft Azure](../../azure-monitor/alerts/alerts-overview.md)-- [Azure Files FAQ](storage-files-faq.md) |
storage | Files Troubleshoot Smb Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-authentication.md | - Title: Troubleshoot Azure Files identity-based authentication and authorization issues (SMB) -description: Troubleshoot problems using identity-based authentication to connect to SMB Azure file shares, and see possible resolutions. --- Previously updated : 05/15/2023-----# Troubleshoot Azure Files identity-based authentication and authorization issues (SMB) --This article lists common problems when using SMB Azure file shares with identity-based authentication. It also provides possible causes and resolutions for these problems. Identity-based authentication isn't currently supported for NFS Azure file shares. --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | --## Error 5 when mounting an Azure file share --When you try to mount a file share, you might receive the following error: --- System error 5 has occurred. Access is denied.--### Cause: Share-level permissions are incorrect --If end users are accessing the Azure file share using Active Directory Domain Services (AD DS) or Azure Active Directory Domain Services (Azure AD DS) authentication, access to the file share fails with "Access is denied" error if share-level permissions are incorrect. --> [!NOTE] -> This error might be caused by issues other than incorrect share-level permissions. For information on other possible causes and solutions, see [Troubleshoot Azure Files connectivity and access issues](files-troubleshoot-smb-connectivity.md#error-5-when-you-mount-an-azure-file-share). --### Solution --Validate that permissions are configured correctly: --- **Active Directory Domain Services (AD DS)** see [Assign share-level permissions](storage-files-identity-ad-ds-assign-permissions.md).-- Share-level permission assignments are supported for groups and users that have been synced from AD DS to Azure Active Directory (Azure AD) using Azure AD Connect sync or Azure AD Connect cloud sync. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups. -- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions](storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions).--## Error AadDsTenantNotFound in enabling Azure AD DS authentication for Azure Files "Unable to locate active tenants with tenant ID aad-tenant-id" --### Cause --Error AadDsTenantNotFound happens when you try to [enable Azure AD DS authentication on Azure Files](storage-files-identity-auth-active-directory-domain-service-enable.md) on a storage account where Azure AD DS isn't created on the Azure AD tenant of the associated subscription. --### Solution --Enable Azure AD DS on the Azure AD tenant of the subscription that your storage account is deployed to. You need administrator privileges of the Azure AD tenant to create a managed domain. If you aren't the administrator of the Azure AD tenant, contact the administrator and follow the step-by-step guidance to [create and configure an Azure AD DS managed domain](../../active-directory-domain-services/tutorial-create-instance.md). 
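Relating to the share-level permission checks earlier in this article, a hedged Azure PowerShell sketch of assigning a built-in share-level role to a user synced from AD DS might look like the following. The sign-in name, role, and scope values are placeholders, not values from this article.

```azurepowershell
# Example only: assign a built-in share-level role to a synced AD DS user.
# Replace the placeholder subscription, resource group, storage account, share, and user values.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Storage File Data SMB Share Contributor" -Scope $scope
```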
--## Unable to mount Azure file shares with AD credentials --### Self diagnostics steps -First, make sure that you've followed the steps to [enable Azure Files AD DS Authentication](./storage-files-identity-auth-active-directory-enable.md). --Second, try [mounting Azure file share with storage account key](storage-how-to-use-files-windows.md). If the share fails to mount, download [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) to help you validate the client running environment. AzFileDiagnostics can detect incompatible client configurations that might cause access failure for Azure Files, give prescriptive guidance on self-fix, and collect the diagnostics traces. --Third, you can run the `Debug-AzStorageAccountAuth` cmdlet to conduct a set of basic checks on your AD configuration with the logged on AD user. This cmdlet is supported on [AzFilesHybrid v0.1.2+ version](https://github.com/Azure-Samples/azure-files-samples/releases). You need to run this cmdlet with an AD user that has owner permission on the target storage account. -```PowerShell -$ResourceGroupName = "<resource-group-name-here>" -$StorageAccountName = "<storage-account-name-here>" --Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose -``` -The cmdlet performs these checks in sequence and provides guidance for failures: -1. CheckADObjectPasswordIsCorrect: Ensure that the password configured on the AD identity that represents the storage account is matching that of the storage account kerb1 or kerb2 key. If the password is incorrect, you can run [Update-AzStorageAccountADObjectPassword](./storage-files-identity-ad-ds-update-password.md) to reset the password. -2. CheckADObject: Confirm that there is an object in the Active Directory that represents the storage account and has the correct SPN (service principal name). If the SPN isn't correctly set up, run the `Set-AD` cmdlet returned in the debug cmdlet to configure the SPN. -3. CheckDomainJoined: Validate that the client machine is domain joined to AD. If your machine isn't domain joined to AD, refer to this [article](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain) for domain join instruction. -4. CheckPort445Connectivity: Check that port 445 is opened for SMB connection. If port 445 isn't open, refer to the troubleshooting tool [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) for connectivity issues with Azure Files. -5. CheckSidHasAadUser: Check that the logged on AD user is synced to Azure AD. If you want to look up whether a specific AD user is synchronized to Azure AD, you can specify the -UserName and -Domain in the input parameters. -6. CheckGetKerberosTicket: Attempt to get a Kerberos ticket to connect to the storage account. If there isn't a valid Kerberos token, run the `klist get cifs/storage-account-name.file.core.windows.net` cmdlet and examine the error code to root-cause the ticket retrieval failure. -7. CheckStorageAccountDomainJoined: Check if the AD authentication has been enabled and the account's AD properties are populated. If not, refer to the instructions [here](./storage-files-identity-ad-ds-enable.md) to enable AD DS authentication on Azure Files. -8. CheckUserRbacAssignment: Check if the AD identity has the proper RBAC role assignment to provide share level permission to access Azure Files. 
If not, refer to the instructions [here](storage-files-identity-ad-ds-assign-permissions.md) to configure the share level permission. (Supported on AzFilesHybrid v0.2.3+ version) -9. CheckUserFileAccess: Check if the AD identity has the proper directory/file permission (Windows ACLs) to access Azure Files. If not, refer to the instructions [here](storage-files-identity-ad-ds-configure-permissions.md) to configure the directory/file level permission. (Supported on AzFilesHybrid v0.2.3+ version) --## Unable to configure directory/file level permissions (Windows ACLs) with Windows File Explorer --### Symptom --You may experience one of the symptoms described below when trying to configure Windows ACLs with File Explorer on a mounted file share: -- After you click on **Edit permission** under the Security tab, the Permission wizard doesn't load. -- When you try to select a new user or group, the domain location doesn't display the right AD DS domain. -- You're using multiple AD forests and get the following error message: "The Active Directory domain controllers required to find the selected objects in the following domains are not available. Ensure the Active Directory domain controllers are available, and try to select the objects again."--### Solution --We recommend that you [configure directory/file level permissions using icacls](storage-files-identity-ad-ds-configure-permissions.md#configure-windows-acls-with-icacls) instead of using Windows File Explorer. --## Errors when running Join-AzStorageAccountForAuth cmdlet --### Error: "The directory service was unable to allocate a relative identifier" --This error might occur if a domain controller that holds the RID Master FSMO role is unavailable or was removed from the domain and restored from backup. Confirm that all Domain Controllers are running and available. --### Error: "Cannot bind positional parameters because no names were given" --This error is most likely triggered by a syntax error in the `Join-AzStorageAccountforAuth` command. Check the command for misspellings or syntax errors and verify that the latest version of the **AzFilesHybrid** module (https://github.com/Azure-Samples/azure-files-samples/releases) is installed. --## Azure Files on-premises AD DS Authentication support for AES-256 Kerberos encryption --Azure Files supports AES-256 Kerberos encryption for AD DS authentication beginning with the AzFilesHybrid module v0.2.2. AES-256 is the recommended encryption method, and it's the default encryption method beginning in AzFilesHybrid module v0.2.5. If you've enabled AD DS authentication with a module version lower than v0.2.2, you'll need to [download the latest AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases) and run the PowerShell below. If you haven't enabled AD DS authentication on your storage account yet, follow this [guidance](./storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module). --> [!IMPORTANT] -> If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256. 
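For reference, once the AES-256 update shown below has completed, the client-side cleanup mentioned in the note might look like the following sketch. The drive letter and share name are placeholders.

```PowerShell
# Example only: purge cached Kerberos tickets and remount the share after the AES-256 update.
klist purge
# If the share is already mapped, remove the stale mapping first, then remount.
net use Z: /delete
net use Z: \\<storage-account>.file.core.windows.net\<share-name>
```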
--```PowerShell -$ResourceGroupName = "<resource-group-name-here>" -$StorageAccountName = "<storage-account-name-here>" --Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName -``` --## User identity formerly having the Owner or Contributor role assignment still has storage account key access -The storage account Owner and Contributor roles grant the ability to list the storage account keys. The storage account key enables full access to the storage account's data including file shares, blob containers, tables, and queues, and limited access to the Azure Files management operations via the legacy management APIs exposed through the FileREST API. If you're changing role assignments, you should consider that the users being removed from the Owner or Contributor roles may continue to maintain access to the storage account through saved storage account keys. --### Solution 1 -You can remedy this issue easily by rotating the storage account keys. We recommend rotating the keys one at a time, switching access from one to the other as they are rotated. There are two types of shared keys the storage account provides: the storage account keys, which provide super-administrator access to the storage account's data, and the Kerberos keys, which function as a shared secret between the storage account and the Windows Server Active Directory domain controller for Windows Server Active Directory scenarios. --To rotate the Kerberos keys of a storage account, see [Update the password of your storage account identity in AD DS](./storage-files-identity-ad-ds-update-password.md). --# [Portal](#tab/azure-portal) -Navigate to the desired storage account in the Azure portal. In the table of contents for the desired storage account, select **Access keys** under the **Security + networking** heading. In the **Access key** pane, select **Rotate key** above the desired key. -- --# [PowerShell](#tab/azure-powershell) -The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you'll need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment. --```PowerShell -$resourceGroupName = "<resource-group>" -$storageAccountName = "<storage-account>" --# Rotate primary key (key 1). You should switch to key 2 before rotating key 1. -New-AzStorageAccountKey ` - -ResourceGroupName $resourceGroupName ` - -Name $storageAccountName ` - -KeyName "key1" --# Rotate secondary key (key 2). You should switch to the new key 1 before rotating key 2. -New-AzStorageAccountKey ` - -ResourceGroupName $resourceGroupName ` - -Name $storageAccountName ` - -KeyName "key2" -``` --# [Azure CLI](#tab/azure-cli) -The following script will rotate both keys for the storage account. If you desire to swap out keys during rotation, you'll need to provide additional logic in your script to handle this scenario. Remember to replace `<resource-group>` and `<storage-account>` with the appropriate values for your environment. --```bash -RESOURCE_GROUP_NAME="<resource-group>" -STORAGE_ACCOUNT_NAME="<storage-account>" --# Rotate primary key (key 1). You should switch to key 2 before rotating key 1. -az storage account keys renew \ - --resource-group $RESOURCE_GROUP_NAME \ - --account-name $STORAGE_ACCOUNT_NAME \ - --key "primary" --# Rotate secondary key (key 2). You should switch to the new key 1 before rotating key 2. 
-az storage account keys renew \ - --resource-group $RESOURCE_GROUP_NAME \ - --account-name $STORAGE_ACCOUNT_NAME \ - --key "secondary" -``` ----## Set the API permissions on a newly created application --After enabling Azure AD Kerberos authentication, you'll need to explicitly grant admin consent to the new Azure AD application registered in your Azure AD tenant to complete your configuration. You can configure the API permissions from the [Azure portal](https://portal.azure.com) by following these steps. --1. Open **Azure Active Directory**. -2. Select **App registrations** in the left pane. -3. Select **All Applications** in the right pane. -- :::image type="content" source="media/files-troubleshoot-smb-authentication/azure-portal-azure-ad-app-registrations.png" alt-text="Screenshot of the Azure portal. Azure Active Directory is open. App registrations is selected in the left pane. All applications is highlighted in the right pane." lightbox="media/files-troubleshoot-smb-authentication/azure-portal-azure-ad-app-registrations.png"::: --4. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**. -5. Select **API permissions** in the left pane. -6. Select **Add permissions** at the bottom of the page. -7. Select **Grant admin consent for "DirectoryName"**. --## Potential errors when enabling Azure AD Kerberos authentication for hybrid users --You might encounter the following errors when enabling Azure AD Kerberos authentication for hybrid user accounts. --### Error - Grant admin consent disabled --In some cases, Azure AD admin may disable the ability to grant admin consent to Azure AD applications. Below is the screenshot of what this may look like in the Azure portal. -- :::image type="content" source="media/files-troubleshoot-smb-authentication/grant-admin-consent-disabled.png" alt-text="Screenshot of the Azure portal configured permissions blade displaying a warning that some actions may be disabled due to your permissions." lightbox="media/files-troubleshoot-smb-authentication/grant-admin-consent-disabled.png"::: --If this is the case, ask your Azure AD admin to grant admin consent to the new Azure AD application. To find and view your administrators, select **roles and administrators**, then select **Cloud application administrator**. --### Error - "The request to AAD Graph failed with code BadRequest" --#### Cause 1: an application management policy is preventing credentials from being created --When enabling Azure AD Kerberos authentication, you might encounter this error if the following conditions are met: --1. You're using the beta/preview feature of [application management policies](/graph/api/resources/applicationauthenticationmethodpolicy). -2. You (or your administrator) have set a [tenant-wide policy](/graph/api/resources/tenantappmanagementpolicy) that: - - Has no start date, or has a start date before 2019-01-01 - - Sets a restriction on service principal passwords, which either disallows custom passwords or sets a maximum password lifetime of less than 365.5 days --There is currently no workaround for this error. --#### Cause 2: an application already exists for the storage account --You might also encounter this error if you previously enabled Azure AD Kerberos authentication through manual limited preview steps. To delete the existing application, the customer or their IT admin can run the following script. 
Running this script will remove the old manually created application and allow the new experience to auto-create and manage the newly created application. --> [!IMPORTANT] -> This script must be run in PowerShell 5 because the AzureAD module doesn't work in PowerShell 7. This PowerShell snippet uses Azure AD Graph. --```powershell -$storageAccount = "exampleStorageAccountName" -$tenantId = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" -Import-Module AzureAD -Connect-AzureAD -TenantId $tenantId --$application = Get-AzureADApplication -Filter "DisplayName eq '${storageAccount}'" -if ($null -ne $application) { - Remove-AzureADApplication -ObjectId $application.ObjectId -} -``` --### Error - Service principal password has expired in Azure AD --If you've previously enabled Azure AD Kerberos authentication through manual limited preview steps, the password for the storage account's service principal is set to expire every six months. Once the password expires, users won't be able to get Kerberos tickets to the file share. --To mitigate this, you have two options: either rotate the service principal password in Azure AD every six months, or disable Azure AD Kerberos, delete the existing application, and reconfigure Azure AD Kerberos. --#### Option 1: Update the service principal password using PowerShell --1. Install the latest Az.Storage and AzureAD modules. Use PowerShell 5.1, because currently the AzureAD module doesn't work in PowerShell 7. Azure Cloud Shell won't work in this scenario. For more information about installing PowerShell, see [Install Azure PowerShell on Windows with PowerShellGet](/powershell/azure/install-azure-powershell). --To install the modules, open PowerShell with elevated privileges and run the following commands: --```azurepowershell -Install-Module -Name Az.Storage -Install-Module -Name AzureAD -``` --2. Set the required variables for your tenant, subscription, storage account name, and resource group name by running the following cmdlets, replacing the values with the ones relevant to your environment. --```azurepowershell -$tenantId = "<MyTenantId>" -$subscriptionId = "<MySubscriptionId>" -$resourceGroupName = "<MyResourceGroup>" -$storageAccountName = "<MyStorageAccount>" -``` --3. Generate a new kerb1 key and password for the service principal. --```azurepowershell -Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId -$kerbKeys = New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName "kerb1" -ErrorAction Stop | Select-Object -ExpandProperty Keys -$kerbKey = $kerbKeys | Where-Object { $_.KeyName -eq "kerb1" } | Select-Object -ExpandProperty Value -$azureAdPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey), 32); -$password = "kk:" + [System.Convert]::ToBase64String($azureAdPasswordBuffer); -``` --4. Connect to Azure AD and retrieve the tenant information, application, and service principal. 
--```azurepowershell -Connect-AzureAD -$azureAdTenantDetail = Get-AzureADTenantDetail; -$azureAdTenantId = $azureAdTenantDetail.ObjectId -$azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name -$application = Get-AzureADApplication -Filter "DisplayName eq '$($storageAccountName)'" -ErrorAction Stop; -$servicePrincipal = Get-AzureADServicePrincipal -Filter "AppId eq '$($application.AppId)'" -if ($servicePrincipal -eq $null) { - Write-Host "Could not find service principal corresponding to application with app id $($application.AppId)" - Write-Error -Message "Make sure that both service principal and application exist and are correctly configured" -ErrorAction Stop -} -``` --5. Set the password for the storage account's service principal. --```azurepowershell -$Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken; -$Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId) -$json = @' -{ - "passwordCredentials": [ - { - "customKeyIdentifier": null, - "endDate": "<STORAGEACCOUNTENDDATE>", - "value": "<STORAGEACCOUNTPASSWORD>", - "startDate": "<STORAGEACCOUNTSTARTDATE>" - }] -} -'@ - -$now = [DateTime]::UtcNow -$json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddHours(-12).ToString("s") - $json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s") -$json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password - -$Headers = @{'authorization' = "Bearer $($Token)"} - -try { - Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json - Write-Host "Success: Password is set for $storageAccountName" -} catch { - Write-Host $_.Exception.ToString() - Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value - Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription -} -``` --#### Option 2: Disable Azure AD Kerberos, delete the existing application, and reconfigure --If you don't want to rotate the service principal password every six months, you can follow these steps. Be sure to save domain properties (domainName and domainGUID) before disabling Azure AD Kerberos, as you'll need them during reconfiguration if you want to configure directory and file-level permissions using Windows File Explorer. If you didn't save domain properties, you can still [configure directory/file-level permissions using icacls](storage-files-identity-ad-ds-configure-permissions.md#configure-windows-acls-with-icacls) as a workaround. --1. [Disable Azure AD Kerberos](storage-files-identity-auth-azure-active-directory-enable.md#disable-azure-ad-authentication-on-your-storage-account) -1. [Delete the existing application](#cause-2-an-application-already-exists-for-the-storage-account) -1. [Reconfigure Azure AD Kerberos via the Azure portal](storage-files-identity-auth-azure-active-directory-enable.md#enable-azure-ad-kerberos-authentication-for-hybrid-user-accounts) --Once you've reconfigured Azure AD Kerberos, the new experience will auto-create and manage the newly created application. --### Error 1326 - The username or password is incorrect when using private link --If you're connecting to a storage account via a private endpoint/private link using Azure AD Kerberos authentication, when attempting to mount a file share via `net use` or other method, the client is prompted for credentials. 
The user will likely type their credentials in, but the credentials are rejected. --#### Cause --This is because the SMB client has tried to use Kerberos but failed, so it falls back to using NTLM authentication, and Azure Files doesn't support using NTLM authentication for domain credentials. The client can't get a Kerberos ticket to the storage account because the private link FQDN isn't registered to any existing Azure AD application. --#### Solution --The solution is to add the privateLink FQDN to the storage account's Azure AD application before you mount the file share. You can add the required identifierUris to the application object using the [Azure portal](https://portal.azure.com) by following these steps. --1. Open **Azure Active Directory**. -1. Select **App registrations** in the left pane. -1. Select **All Applications**. -1. Select the application with the name matching **[Storage Account] $storageAccountName.file.core.windows.net**. -1. Select **Manifest** in the left pane. -1. Copy and paste the existing content so you have a duplicate copy. Replace all instances of `<storageaccount>.file.core.windows.net` with `<storageaccount>.privatelink.file.core.windows.net`. -1. Review the content and select **Save** to update the application object with the new identifierUris. -1. Update any internal DNS references to point to the private link. -1. Retry mounting the share. --## Need help? -If you still need help, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to get your problem resolved quickly. --## See also -- [Troubleshoot Azure Files](files-troubleshoot.md)-- [Troubleshoot Azure Files performance](files-troubleshoot-performance.md)-- [Troubleshoot Azure Files connectivity (SMB)](files-troubleshoot-smb-connectivity.md)-- [Troubleshoot Azure Files general SMB issues on Linux](files-troubleshoot-linux-smb.md)-- [Troubleshoot Azure Files general NFS issues on Linux](files-troubleshoot-linux-nfs.md)-- [Troubleshoot Azure File Sync issues](../file-sync/file-sync-troubleshoot.md) |
storage | Files Troubleshoot Smb Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-troubleshoot-smb-connectivity.md | - Title: Troubleshoot Azure Files SMB connectivity and access issues -description: Troubleshoot problems connecting to and accessing SMB Azure file shares from Windows and Linux clients, and see possible resolutions. --- Previously updated : 02/21/2023----# Troubleshoot Azure Files connectivity and access issues (SMB) --This article lists common problems that might occur when you try to connect to and access SMB Azure file shares from Windows or Linux clients. It also provides possible causes and resolutions for these problems. --> [!IMPORTANT] -> This article only applies to SMB shares. For details on NFS shares, see [Troubleshoot Azure NFS file shares](files-troubleshoot-linux-nfs.md). --## Applies to -| File share type | SMB | NFS | -|-|:-:|:-:| -| Standard file shares (GPv2), LRS/ZRS |  |  | -| Standard file shares (GPv2), GRS/GZRS |  |  | -| Premium file shares (FileStorage), LRS/ZRS |  |  | --## Can't connect to or mount an Azure file share --Select the Windows or Linux tab depending on the client operating system you're using to access Azure file shares. --# [Windows](#tab/windows) --When trying to connect to an Azure file share on Windows, you might see the following errors. --<a id="error5"></a> -### Error 5 when you mount an Azure file share --- System error 5 has occurred. Access is denied.--#### Cause 1: Unencrypted communication channel --For security reasons, connections to Azure file shares are blocked if the communication channel isn't encrypted and if the connection attempt isn't made from the same datacenter where the Azure file shares reside. If the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is enabled on the storage account, unencrypted connections within the same datacenter are also blocked. An encrypted communication channel is provided only if the end-user's client OS supports SMB encryption. --Windows 8, Windows Server 2012, and later versions of each system negotiate requests that include SMB 3.x, which supports encryption. --#### Solution for cause 1 --1. Connect from a client that supports SMB encryption (Windows 8/Windows Server 2012 or later). -2. Connect from a virtual machine (VM) in the same datacenter as the Azure storage account that is used for the Azure file share. -3. Verify the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account if the client doesn't support SMB encryption. --#### Cause 2: Virtual network or firewall rules are enabled on the storage account -Network traffic is denied if virtual network (VNET) and firewall rules are configured on the storage account, unless the client IP address or virtual network is allow-listed. --#### Solution for cause 2 --Verify that virtual network and firewall rules are configured properly on the storage account. To test if virtual network or firewall rules is causing the issue, temporarily change the setting on the storage account to **Allow access from all networks**. To learn more, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md). 
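To quickly rule network rules in or out, a hedged Azure PowerShell sketch of temporarily switching the default network action might look like this (testing only; restore the original setting afterward, and replace the placeholder names):

```azurepowershell
# Example only, for testing: temporarily allow access from all networks.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" -Name "<storage-account>" -DefaultAction Allow

# After testing, restore the restrictive default action.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" -Name "<storage-account>" -DefaultAction Deny
```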
--#### Cause 3: Share-level permissions are incorrect when using identity-based authentication --If end users are accessing the Azure file share using Active Directory (AD) or Azure Active Directory Domain Services (Azure AD DS) authentication, access to the file share fails with an "Access is denied" error if share-level permissions are incorrect. --#### Solution for cause 3 --Validate that permissions are configured correctly: --- **Active Directory Domain Services (AD DS)** see [Assign share-level permissions](storage-files-identity-ad-ds-assign-permissions.md).-- Share-level permission assignments are supported for groups and users that have been synced from AD DS to Azure Active Directory (Azure AD) using Azure AD Connect sync or Azure AD Connect cloud sync. Confirm that groups and users being assigned share-level permissions are not unsupported "cloud-only" groups. -- **Azure Active Directory Domain Services (Azure AD DS)** see [Assign share-level permissions](storage-files-identity-auth-active-directory-domain-service-enable.md?tabs=azure-portal#assign-share-level-permissions).--<a id="error53-67-87"></a> -### Error 53, Error 67, or Error 87 when you mount or unmount an Azure file share --When you try to mount a file share from on-premises or from a different datacenter, you might receive the following errors: --- System error 53 has occurred. The network path was not found.-- System error 67 has occurred. The network name cannot be found.-- System error 87 has occurred. The parameter is incorrect.--#### Cause 1: Port 445 is blocked --System error 53 or system error 67 can occur if port 445 outbound communication to an Azure Files datacenter is blocked. To see the summary of ISPs that allow or disallow access from port 445, go to [TechNet](https://social.technet.microsoft.com/wiki/contents/articles/32346.azure-summary-of-isps-that-allow-disallow-access-from-port-445.aspx). --To check if your firewall or ISP is blocking port 445, use the [`AzFileDiagnostics`](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Windows) tool or the `Test-NetConnection` cmdlet. --To use the `Test-NetConnection` cmdlet, the Azure PowerShell module must be installed. See [Install Azure PowerShell module](/powershell/azure/install-azure-powershell) for more information. Remember to replace `<your-storage-account-name>` and `<your-resource-group-name>` with the relevant names for your storage account. -- -```azurepowershell -$resourceGroupName = "<your-resource-group-name>" -$storageAccountName = "<your-storage-account-name>" --# This command requires you to be logged into your Azure account and set the subscription your storage account is under, run: -# Connect-AzAccount -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -# if you haven't already logged in. -$storageAccount = Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName --# The ComputerName, or host, is <storage-account>.file.core.windows.net for Azure Public Regions. -# $storageAccount.Context.FileEndpoint is used because non-Public Azure regions, such as sovereign clouds -# or Azure Stack deployments, will have different hosts for Azure file shares (and other storage resources).
-Test-NetConnection -ComputerName ([System.Uri]::new($storageAccount.Context.FileEndPoint).Host) -Port 445 -``` - -If the connection was successful, you should see the following output: - - -```azurepowershell -ComputerName : <your-storage-account-name> -RemoteAddress : <storage-account-ip-address> -RemotePort : 445 -InterfaceAlias : <your-network-interface> -SourceAddress : <your-ip-address> -TcpTestSucceeded : True -``` - -> [!Note] -> The above command returns the current IP address of the storage account. This IP address is not guaranteed to remain the same, and may change at any time. Don't hardcode this IP address into any scripts, or into a firewall configuration. --#### Solutions for cause 1 --**Solution 1: Use Azure File Sync as a QUIC endpoint** -You can use Azure File Sync as a workaround to access Azure Files from clients that have port 445 blocked. Although Azure Files doesn't directly support SMB over QUIC, Windows Server 2022 Azure Edition does support the QUIC protocol. You can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. This configuration uses port 443, which is widely open outbound to support HTTPS, instead of port 445. To learn more about this option, see [SMB over QUIC with Azure File Sync](storage-files-networking-overview.md#smb-over-quic). --**Solution 2: Use VPN or ExpressRoute** -By setting up a VPN or ExpressRoute from on-premises to your Azure storage account, with Azure Files exposed on your internal network using private endpoints, the traffic will go through a secure tunnel as opposed to over the internet. Follow the [instructions to set up a VPN](storage-files-configure-p2s-vpn-windows.md) to access Azure Files from Windows. --**Solution 3: Unblock port 445 with help from your ISP/IT admin** -Work with your IT department or ISP to open port 445 outbound to [Azure IP ranges](https://www.microsoft.com/download/details.aspx?id=56519). --**Solution 4: Use REST API-based tools like Storage Explorer/PowerShell** -Azure Files also supports REST in addition to SMB. REST access works over port 443 (standard TCP). There are various tools written using the REST API that enable a rich UI experience. [Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md?tabs=windows) is one of them. [Download and Install Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) and connect to your file share backed by Azure Files. You can also use [PowerShell](./storage-how-to-use-files-portal.md), which also uses the REST API. --#### Cause 2: NTLMv1 is enabled --System error 53 or system error 87 can occur if NTLMv1 communication is enabled on the client. Azure Files supports only NTLMv2 authentication. Having NTLMv1 enabled creates a less-secure client. Therefore, communication is blocked for Azure Files. --To determine whether this is the cause of the error, verify that the following registry subkey isn't set to a value less than 3: --**HKLM\SYSTEM\CurrentControlSet\Control\Lsa > LmCompatibilityLevel** --For more information, see the [LmCompatibilityLevel](/previous-versions/windows/it-pro/windows-2000-server/cc960646(v=technet.10)) topic on TechNet.
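A quick way to check the current value is shown in the following sketch; if the value isn't present, the OS default applies.

```PowerShell
# Example only: inspect the current LmCompatibilityLevel value, if it has been set explicitly.
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name "LmCompatibilityLevel" -ErrorAction SilentlyContinue
```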
--#### Solution for cause 2 --Revert the **LmCompatibilityLevel** value to the default value of 3 in the following registry subkey: -- **HKLM\SYSTEM\CurrentControlSet\Control\Lsa** --<a id="cannotaccess"></a> -### Application or service cannot access a mounted Azure Files drive --#### Cause --Drives are mounted per user. If your application or service is running under a different user account than the one that mounted the drive, the application won't see the drive. --#### Solution --Use one of the following solutions: --- Mount the drive from the same user account that contains the application. You can use a tool such as PsExec.-- Pass the storage account name and key in the user name and password parameters of the `net use` command.-- Use the `cmdkey` command to add the credentials into Credential Manager. Perform this action from a command line under the service account context, either through an interactive login or by using `runas`.- - `cmdkey /add:<storage-account-name>.file.core.windows.net /user:AZURE\<storage-account-name> /pass:<storage-account-key>` -- Map the share directly without using a mapped drive letter. Some applications might not reconnect to the drive letter properly, so using the full UNC path might be more reliable. -- `net use * \\storage-account-name.file.core.windows.net\share` --After you follow these instructions, you might receive the following error message when you run `net use` for the system/network service account: "System error 1312 has occurred. A specified logon session does not exist. It may already have been terminated." If this error appears, make sure that the username that's passed to `net use` includes domain information (for example: "[storage account name].file.core.windows.net"). --<a id="shareismissing"></a> -### No folder with a drive letter in "My Computer" or "This PC" --If you map an Azure file share as an administrator by using the `net use` command, the share appears to be missing. --#### Cause --By default, Windows File Explorer doesn't run as an administrator. If you run `net use` from an administrative command prompt, you map the network drive as an administrator. Because mapped drives are user-centric, the user account that is logged in doesn't display the drives if they're mounted under a different user account. --#### Solution -Mount the share from a non-administrator command line. Alternatively, you can follow [this TechNet topic](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee844140(v=ws.10)) to configure the **EnableLinkedConnections** registry value. --<a id="netuse"></a> -### Net use command fails if the storage account contains a forward slash --#### Cause --The `net use` command interprets a forward slash (/) as a command-line option. If your user account name starts with a forward slash, the drive mapping fails. --#### Solution --You can use either of the following steps to work around the problem: --- Run the following PowerShell command:-- `New-SmbMapping -LocalPath y: -RemotePath \\server\share -UserName accountName -Password "password can contain / and \ etc"` -- From a batch file, you can run the command this way: -- `Echo new-smbMapping ... | powershell -command -` --- Put double quotation marks around the key to work around this problem, unless the forward slash is the first character.
If it is, either use the interactive mode and enter your password separately or regenerate your keys to get a key that doesn't start with a forward slash.---# [Linux](#tab/linux) --Common causes for this problem are: --- You're using a Linux distribution with an outdated SMB client. See [Use Azure Files with Linux](storage-how-to-use-files-linux.md) for more information on common Linux distributions available in Azure that have compatible clients.-- SMB utilities (cifs-utils) aren't installed on the client.-- The minimum SMB version, 2.1, isn't available on the client.-- SMB 3.x encryption isn't supported on the client. The preceding table provides a list of Linux distributions that support mounting from on-premises and cross-region using encryption. Other distributions require kernel 4.11 and later versions.-- You're trying to connect to an Azure file share from an Azure VM, and the VM isn't in the same region as the storage account.-- If the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is enabled on the storage account, Azure Files will allow only connections that use SMB 3.x with encryption.--### Solution --To resolve the problem, use the [troubleshooting tool for Azure Files mounting errors on Linux](https://github.com/Azure-Samples/azure-files-samples/tree/master/AzFileDiagnostics/Linux). This tool: --* Helps you to validate the client running environment. -* Detects the incompatible client configuration that would cause access failure for Azure Files. -* Gives prescriptive guidance on self-fixing. -* Collects the diagnostics traces. --<a id="mounterror13"></a> -## "Mount error(13): Permission denied" when you mount an Azure file share --### Cause 1: Unencrypted communication channel --For security reasons, connections to Azure file shares are blocked if the communication channel isn't encrypted and if the connection attempt isn't made from the same datacenter where the Azure file shares reside. Unencrypted connections within the same datacenter can also be blocked if the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is enabled on the storage account. An encrypted communication channel is provided only if the user's client OS supports SMB encryption. --To learn more, see [Prerequisites for mounting an Azure file share with Linux and the cifs-utils package](storage-how-to-use-files-linux.md#prerequisites). --### Solution for cause 1 --1. Connect from a client that supports SMB encryption or connect from a virtual machine in the same datacenter as the Azure storage account that is used for the Azure file share. -2. Verify the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account if the client does not support SMB encryption. --### Cause 2: Virtual network or firewall rules are enabled on the storage account --If virtual network (VNET) and firewall rules are configured on the storage account, network traffic will be denied access unless the client IP address or virtual network is allowed access. --### Solution for cause 2 --Verify virtual network and firewall rules are configured properly on the storage account. To test if virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account to **Allow access from all networks**. To learn more, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md). 
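If you want to inspect the rules that are currently configured before changing anything, a minimal Azure PowerShell sketch (placeholder names) might look like this:

```azurepowershell
# Example only: list the default action, IP rules, and virtual network rules on the storage account.
Get-AzStorageAccountNetworkRuleSet -ResourceGroupName "<resource-group>" -Name "<storage-account>"
```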
--<a id="error115"></a> -## "Mount error(115): Operation now in progress" when you mount Azure Files by using SMB 3.x --### Cause --Some Linux distributions don't yet support encryption features in SMB 3.x. Users might receive a "115" error message if they try to mount Azure Files by using SMB 3.x because of a missing feature. SMB 3.x with full encryption is supported only on latest version of a Linux Distro. --### Solution --The encryption feature for SMB 3.x for Linux was introduced in the 4.11 kernel. This feature enables mounting of an Azure file share from on-premises or from a different Azure region. Some Linux distributions may have backported changes from the 4.11 kernel to older versions of the Linux kernel that they maintain. To help determine if your version of Linux supports SMB 3.x with encryption, consult with [Use Azure Files with Linux](storage-how-to-use-files-linux.md). --If your Linux SMB client doesn't support encryption, mount Azure Files using SMB 2.1 from a Linux VM that's in the same Azure datacenter as the file share. Verify that the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is disabled on the storage account. --<a id="error112"></a> -## "Mount error(112): Host is down" because of a reconnection time-out --A "112" mount error occurs on the Linux client when the client has been idle for a long time. After an extended idle time, the client disconnects and the connection times out. --### Cause --The connection can be idle for the following reasons: --- Network communication failures that prevent re-establishing a TCP connection to the server when the default "soft" mount option is used-- Recent reconnection fixes that are not present in older kernels--### Solution --This reconnection problem in the Linux kernel is now fixed as part of the following changes: --- [Fix reconnect to not defer smb3 session reconnect long after socket reconnect](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/fs/cifs?id=4fcd1813e6404dd4420c7d12fb483f9320f0bf93)-- [Call echo service immediately after socket reconnect](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=b8c600120fc87d53642476f48c8055b38d6e14c7)-- [CIFS: Fix a possible memory corruption during reconnect](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=53e0e11efe9289535b060a51d4cf37c25e0d0f2b)-- [CIFS: Fix a possible double locking of mutex during reconnect (for kernel v4.9 and later)](https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=96a988ffeb90dba33a71c3826086fe67c897a183)--However, these changes might not be ported yet to all the Linux distributions. If you're using a popular Linux distribution, you can check on the [Use Azure Files with Linux](storage-how-to-use-files-linux.md) to see which version of your distribution has the necessary kernel changes. --### Workaround --You can work around this problem by specifying a hard mount. A hard mount forces the client to wait until a connection is established or until it's explicitly interrupted. You can use it to prevent errors because of network time-outs. However, this workaround might cause indefinite waits. Be prepared to stop connections as necessary. --If you can't upgrade to the latest kernel versions, you can work around this problem by keeping a file in the Azure file share that you write to every 30 seconds or less. This must be a write operation, such as rewriting the created or modified date on the file. 
Otherwise, you might get cached results, and your operation might not trigger the reconnection. -----## Unable to access, modify, or delete an Azure file share (or share snapshot) --<a id="noaaccessfailureportal"></a> -### Error "No access" when you try to access or delete an Azure file share -When you try to access or delete an Azure file share using the Azure portal, you might receive the following error: --No access -Error code: 403 --#### Cause 1: Virtual network or firewall rules are enabled on the storage account --#### Solution for cause 1 --Verify that virtual network and firewall rules are configured properly on the storage account. To test if virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account to **Allow access from all networks**. To learn more, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md). --#### Cause 2: Your user account doesn't have access to the storage account --#### Solution for cause 2 --Browse to the storage account in which the Azure file share is located, select **Access control (IAM)**, and verify that your user account has access to the storage account. To learn more, see [How to secure your storage account with Azure role-based access control (Azure RBAC)](../blobs/security-recommendations.md#data-protection). --### File locks and leases -If you can't modify or delete an Azure file share or snapshot, it might be due to file locks or leases. Azure Files provides two ways to prevent accidental modification or deletion of Azure file shares and share snapshots: --- **Storage account resource locks**: All Azure resources, including the storage account, support [resource locks](../../azure-resource-manager/management/lock-resources.md). Locks might be put on the storage account by an administrator, or by services such as Azure Backup. Two variations of resource locks exist: **modify**, which prevents all modifications to the storage account and its resources, and **delete**, which only prevents deletes of the storage account and its resources. When modifying or deleting shares through the `Microsoft.Storage` resource provider, resource locks are enforced on Azure file shares and share snapshots. Most portal operations, Azure PowerShell cmdlets for Azure Files with `Rm` in the name (for example, `Get-AzRmStorageShare`), and Az |