Updates from: 04/06/2022 01:11:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| client_id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com). | | response_type |Required |The response type, which must include `code` for the authorization code flow. | | redirect_uri |Required |The redirect URI of your app, where authentication responses are sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
-| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application will need a *refresh token* for extended access to resources. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
+| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application will need a *refresh token* for extended access to resources. The `client_id` indicates that the tokens issued are intended for use by the Azure AD B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. | | state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. | | prompt |Optional |The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on will not take effect. |
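To see how these parameters fit together, here's a minimal sketch of an authorization request; the tenant name (`contoso`), user flow name (`b2c_1_sign_in`), and `https://jwt.ms` redirect URI are placeholder values for illustration only:

```
GET https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/authorize?
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code
&redirect_uri=https%3A%2F%2Fjwt.ms
&response_mode=query
&scope=openid%20offline_access
&state=arbitrary_data_you_can_receive_in_the_response
```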
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
To enable sign-in for users with a Google account in Azure Active Directory B2C
1. In the upper-left corner of the page, select the project list, and then select **New Project**. 1. Enter a **Project Name**, and then select **Create**. 1. Make sure you are using the new project by selecting the project drop-down in the top-left of the screen. Select your project by name, then select **Open**.
-1. Select **OAuth consent screen** in the left menu, select **External**, and then select **Create**.
-Enter a **Name** for your application. Enter *b2clogin.com* in the **Authorized domains** section and select **Save**.
+1. In the left menu, select **OAuth consent screen**, select **External**, and then select **Create**.
+ 1. Enter a **Name** for your application.
+ 1. Select a **User support email**.
+ 1. In the **Authorized domains** section, enter *b2clogin.com*.
+ 1. In the **Developer contact information** section, enter comma-separated email addresses that Google can use to notify you about any changes to your project.
+ 1. Select **Save**.
1. Select **Credentials** in the left menu, and then select **Create credentials** > **Oauth client ID**. 1. Under **Application type**, select **Web application**. 1. Enter a **Name** for your application.
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **CryptographicKeys** element contains the following attributes:
| SamlAssertionDecryption |No | The X509 certificate (RSA key set). A SAML identity provider uses the public portion of the certificate to encrypt the assertion of the SAML response. Azure AD B2C uses the private portion of the certificate to decrypt the assertion. | | MetadataSigning |No | The X509 certificate (RSA key set) to use to sign SAML metadata. Azure AD B2C uses this key to sign the metadata. |
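To make the table concrete, here's a minimal sketch of how these keys might appear inside a SAML identity provider technical profile; the `StorageReferenceId` values are hypothetical policy key names, not defaults:

```xml
<CryptographicKeys>
  <!-- Private key Azure AD B2C uses to decrypt the assertion of the SAML response -->
  <Key Id="SamlAssertionDecryption" StorageReferenceId="B2C_1A_SamlIdpDecryptionCert" />
  <!-- Key Azure AD B2C uses to sign the SAML metadata it publishes -->
  <Key Id="MetadataSigning" StorageReferenceId="B2C_1A_SamlMetadataCert" />
</CryptographicKeys>
```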
-## SAML entityID customization
-
-If you have multiple SAML applications that depend on different entityID values, you can override the `issueruri` value in your relying party file. To do this, copy the technical profile with the "Saml2AssertionIssuer" ID from the base file and override the `issueruri` value.
-
-> [!TIP]
-> Copy the `<ClaimsProviders>` section from the base and preserve these elements within the claims provider: `<DisplayName>Token Issuer</DisplayName>`, `<TechnicalProfile Id="Saml2AssertionIssuer">`, and `<DisplayName>Token Issuer</DisplayName>`.
-
-Example:
-
-```xml
- <ClaimsProviders>
- <ClaimsProvider>
- <DisplayName>Token Issuer</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="Saml2AssertionIssuer">
- <DisplayName>Token Issuer</DisplayName>
- <Metadata>
- <Item Key="IssuerUri">customURI</Item>
- </Metadata>
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
- </ClaimsProviders>
- <RelyingParty>
- <DefaultUserJourney ReferenceId="SignUpInSAML" />
- <TechnicalProfile Id="PolicyProfile">
- <DisplayName>PolicyProfile</DisplayName>
- <Protocol Name="SAML2" />
- <Metadata>
- …
-```
- ## Next steps See the following articles for examples of working with SAML identity providers in Azure AD B2C:
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
Previously updated : 10/25/2021 Last updated : 04/05/2022 zone_pivot_groups: b2c-policy-type
After you add the above snippets, your technical profile should look like the fo
</ClaimsProvider> ```
+### Call the REST technical profile
+
+To call the `REST-GetProfile` technical profile, you first need to acquire an Azure AD access token using the `REST-AcquireAccessToken` technical profile. The following example shows how to call the `REST-GetProfile` technical profile from a [validation technical profile](validation-technical-profile.md):
+
+```xml
+<ValidationTechnicalProfiles>
+ <ValidationTechnicalProfile ReferenceId="REST-AcquireAccessToken" />
+ <ValidationTechnicalProfile ReferenceId="REST-GetProfile" />
+</ValidationTechnicalProfiles>
+```
+
+The following example shows how to call the `REST-GetProfile` technical profile from a [user journey](userjourneys.md), or a [sub journey](subjourneys.md):
+
+```xml
+<OrchestrationSteps>
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="REST-AcquireAccessTokens" TechnicalProfileReferenceId="REST-AcquireAccessToken" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="3" Type="ClaimsExchange">
+ <ClaimsExchanges>
+ <ClaimsExchange Id="REST-GetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
+</OrchestrationSteps>
+```
+ ## Using a static OAuth2 bearer ### Add the OAuth2 bearer token policy key
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
To register a web application in your Azure AD B2C tenant, you can use our new u
The following restrictions apply to redirect URIs:
- * The reply URL must begin with the scheme `https`.
+ * The reply URL must begin with the scheme `https`, unless you use a localhost redirect URL.
* The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the reply URL. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
+ * The reply URL should include or exclude the trailing forward slash as your application expects it. For example, `https://contoso.com/auth-response` and `https://contoso.com/auth-response/` might be treated as nonmatching URLs in your application.
1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box. 1. Select **Register**.
To register a web application in your Azure AD B2C tenant, you can use our new u
* The reply URL must begin with the scheme `https`, unless using `localhost`. * The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the reply URL. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
+ * The reply URL should include or exclude the trailing forward slash as your application expects it. For example, `https://contoso.com/auth-response` and `https://contoso.com/auth-response/` might be treated as nonmatching URLs in your application.
1. Select **Create** to complete the application registration.
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Previously updated : 11/29/2021 Last updated : 04/04/2022
Since ECMA Connector Host currently only supports the USER object type, the OBJE
You can define one or more matching attribute(s) and prioritize them based on the precedence. Should you want to change the matching attribute you can also do so. [![Matching attribute](.\media\on-premises-application-provisioning-architecture\match-1.png)](.\media\on-premises-application-provisioning-architecture\match-1.png#lightbox)
-2. ECMA Connector Host receives the GET request and queries its internal cache to see if the user exists and has based imported. This is done using the **query attribute**. The query attribute is defined in the object types page.
- [![Query attribute](.\media\on-premises-application-provisioning-architecture\match-2.png)](.\media\on-premises-application-provisioning-architecture\match-2.png#lightbox)
-
+2. ECMA Connector Host receives the GET request and queries its internal cache to see if the user exists and has been imported. This is done using the matching attribute(s) above. If you define multiple matching attributes, the Azure AD provisioning service sends a GET request for each attribute, and the ECMA host checks its cache for a match until it finds one.
3. If the user does not exist, Azure AD will make a POST request to create the user. The ECMA Connector Host will respond back to Azure AD with the HTTP 201 and provide an ID for the user. This ID is derived from the anchor value defined in the object types page. This anchor will be used by Azure AD to query the ECMA Connector Host for future and subsequent requests. 4. If a change happens to the user in Azure AD, then Azure AD will make a GET request to retrieve the user using the anchor from the previous step, rather than the matching attribute in step 1. This allows, for example, the UPN to change without breaking the link between the user in Azure AD and in the app.
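As a schematic illustration of this exchange (the attribute name, filter, and anchor value below are hypothetical and depend on your matching attribute and anchor configuration), the SCIM-style traffic between the provisioning service and the ECMA Connector Host looks roughly like this:

```
GET /Users?filter=userName eq "bsimon@contoso.com"     <- lookup by matching attribute against the cache
  200 OK, no matching resource found

POST /Users { "userName": "bsimon@contoso.com", ... }  <- user doesn't exist, so create it
  201 Created, { "id": "<anchor value>" }              <- anchor ID returned to and stored by Azure AD

GET /Users/<anchor value>                              <- later requests use the anchor, not the matching attribute
  200 OK
```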
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Previously updated : 02/03/2022 Last updated : 04/04/2022
The file location for wizard logging is C:\Program Files\Microsoft ECMA2Host\Wiz
<listeners> <add initializeData="ECMA2Host" type="System.Diagnostics.EventLogTraceListener, System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ECMA2HostListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" /> ```
+## Query the ECMA Host Cache
+The ECMA Host has a cache of users in your application that is updated according to the schedule you specify in the properties page of the ECMA Host wizard. In order to query the cache, perform the steps below:
+1. Set the Debug flag to `true`.
+2. Restart the ECMA Host service.
+3. Query this endpoint from the server the ECMA Host is installed on, replacing `{connectorName}` with the name of your connector, as specified in the properties page of the ECMA Host: `https://localhost:8585/ecma2host_{connectorName}/scim/cache`
+
+Be aware that setting the debug flag to `true` disables authentication on the ECMA Host. Set it back to `false` and restart the ECMA Host service once you're done querying the cache.
+
+The file location for verbose service logging is C:\Program Files\Microsoft ECMA2Host\Service\Microsoft.ECMA2Host.Service.exe.config.
+ ```xml
+ <?xml version="1.0" encoding="utf-8"?>
+ <configuration>
+   <startup>
+     <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
+   </startup>
+   <appSettings>
+     <add key="Debug" value="true" />
+   </appSettings>
+ </configuration>
+ ```
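After setting the `Debug` key, a quick way to perform steps 2 and 3 from PowerShell 7 or later on the ECMA Host server might look like the following sketch; the connector name `CORPDB1` is hypothetical, and the service display name may differ on your installation:

```powershell
# Restart the ECMA Host service so the Debug setting takes effect
# (assumes the display name starts with "Microsoft ECMA2Host")
Restart-Service -DisplayName 'Microsoft ECMA2Host*'

# Query the local cache endpoint; -SkipCertificateCheck accepts the host's self-signed certificate
Invoke-RestMethod -Uri 'https://localhost:8585/ecma2host_CORPDB1/scim/cache' -SkipCertificateCheck
```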
## Target attribute is missing The provisioning service automatically discovers attributes in your target application. If you see that a target attribute is missing in the target attribute list in the Azure portal, perform the following troubleshooting step:
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Previously updated : 02/17/2021 Last updated : 04/04/2022 - # Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory
Allow access to the following URLs:
| `*.msappproxy.net` <br> `*.servicebus.windows.net` | 443/HTTPS | Communication between the connector and the Application Proxy cloud service | | `crl3.digicert.com` <br> `crl4.digicert.com` <br> `ocsp.digicert.com` <br> `crl.microsoft.com` <br> `oneocsp.microsoft.com` <br> `ocsp.msocsp.com`<br> | 80/HTTP | The connector uses these URLs to verify certificates. | | `login.windows.net` <br> `secure.aadcdn.microsoftonline-p.com` <br> `*.microsoftonline.com` <br> `*.microsoftonline-p.com` <br> `*.msauth.net` <br> `*.msauthimages.net` <br> `*.msecnd.net` <br> `*.msftauth.net` <br> `*.msftauthimages.net` <br> `*.phonefactor.net` <br> `enterpriseregistration.windows.net` <br> `management.azure.com` <br> `policykeyservice.dc.ad.msft.net` <br> `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 443/HTTPS | The connector uses these URLs during the registration process. |
-| `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 80/HTTP | The connector uses this URL during the registration process. |
+| `ctldl.windowsupdate.com` <br> `www.microsoft.com/pkiops` | 80/HTTP | The connector uses these URLs during the registration process. |
You can allow connections to `*.msappproxy.net`, `*.servicebus.windows.net`, and other URLs above if your firewall or proxy lets you configure access rules based on domain suffixes. If not, you need to allow access to the [Azure IP ranges and Service Tags - Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). The IP ranges are updated each week.
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
Previously updated : 04/27/2021 Last updated : 04/04/2022
Allow access to the following URLs:
| &ast;.msappproxy.net<br>&ast;.servicebus.windows.net | 443/HTTPS | Communication between the connector and the Application Proxy cloud service | | crl3.digicert.com<br>crl4.digicert.com<br>ocsp.digicert.com<br>crl.microsoft.com<br>oneocsp.microsoft.com<br>ocsp.msocsp.com<br> | 80/HTTP | The connector uses these URLs to verify certificates. | | login.windows.net<br>secure.aadcdn.microsoftonline-p.com<br>&ast;.microsoftonline.com<br>&ast;.microsoftonline-p.com<br>&ast;.msauth.net<br>&ast;.msauthimages.net<br>&ast;.msecnd.net<br>&ast;.msftauth.net<br>&ast;.msftauthimages.net<br>&ast;.phonefactor.net<br>enterpriseregistration.windows.net<br>management.azure.com<br>policykeyservice.dc.ad.msft.net<br>ctldl.windowsupdate.com | 443/HTTPS | The connector uses these URLs during the registration process. |
-| ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 80/HTTP | The connector uses this URL during the registration process. |
+| ctldl.windowsupdate.com<br>www.microsoft.com/pkiops | 80/HTTP | The connector uses these URLs during the registration process. |
If your firewall or proxy allows you to configure DNS allow lists, you can allow connections to \*.msappproxy.net and \*.servicebus.windows.net.
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
When you install the extension, you need the *Tenant ID* and admin credentials f
The NPS server must be able to communicate with the following URLs over ports 80 and 443: * *https:\//strongauthenticationservice.auth.microsoft.com*
+* *https:\//strongauthenticationservice.auth.microsoft.us*
+* *https:\//strongauthenticationservice.auth.microsoft.cn*
* *https:\//adnotifications.windowsazure.com* * *https:\//login.microsoftonline.com* * *https:\//credentials.azure.com*
For customers that use the Azure Government or Azure China 21Vianet clouds, the
| Registry key | Value | |--|--|
- | AZURE_MFA_HOSTNAME | adnotifications.windowsazure.us |
+ | AZURE_MFA_HOSTNAME | strongauthenticationservice.auth.microsoft.us |
| STS_URL | https://login.microsoftonline.us/ | 1. For Azure China 21Vianet customers, set the following key values: | Registry key | Value | |--|--|
- | AZURE_MFA_HOSTNAME | adnotifications.windowsazure.cn |
+ | AZURE_MFA_HOSTNAME | strongauthenticationservice.auth.microsoft.cn |
| STS_URL | https://login.chinacloudapi.cn/ | 1. Repeat the previous two steps to set the registry key values for each NPS server.
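As a convenience, here's a sketch of setting these values from an elevated PowerShell prompt on an Azure Government NPS server; it assumes the extension stores its settings under `HKLM:\SOFTWARE\Microsoft\AzureMfa`, so verify the path on your installation (use the Azure China 21Vianet values where appropriate):

```powershell
# Point the NPS extension at the Azure Government MFA and STS endpoints
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMfa' -Name 'AZURE_MFA_HOSTNAME' -Value 'strongauthenticationservice.auth.microsoft.us'
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureMfa' -Name 'STS_URL' -Value 'https://login.microsoftonline.us/'
```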
Verify that AD Connect is running, and that the user is present in both the on-p
### Why do I see HTTP connect errors in logs with all my authentications failing?
-Verify that https://adnotifications.windowsazure.com is reachable from the server running the NPS extension.
+Verify that https://adnotifications.windowsazure.com and https://strongauthenticationservice.auth.microsoft.com are reachable from the server running the NPS extension.
### Why is authentication not working, despite a valid certificate being present?
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
This setting works with all browsers. However, to satisfy a device policy, like
| Windows 10 + | Microsoft Edge, [Chrome](#chrome-support), [Firefox 91+](https://support.mozilla.org/kb/windows-sso) | | Windows Server 2022 | Microsoft Edge, [Chrome](#chrome-support) | | Windows Server 2019 | Microsoft Edge, [Chrome](#chrome-support) |
-| iOS | Microsoft Edge, Safari |
+| iOS | Microsoft Edge, Safari (see the notes) |
| Android | Microsoft Edge, Chrome | | macOS | Microsoft Edge, Chrome, Safari | These browsers support device authentication, allowing the device to be identified and validated against a policy. The device check fails if the browser is running in private mode or if cookies are disabled. > [!NOTE]
-> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
+> Edge 85+ requires the user to be signed in to the browser to properly pass device identity. Otherwise, it behaves like Chrome without the accounts extension. This sign-in might not occur automatically in a Hybrid Azure AD Join scenario.
+>
> Safari is supported for device-based Conditional Access, but it can not satisfy the **Require approved client app** or **Require app protection policy** conditions. A managed browser like Microsoft Edge will satisfy approved client app and app protection policy requirements.
+> On iOS with a third-party MDM solution, only the Microsoft Edge browser supports device policy.
+>
> [Firefox 91+](https://support.mozilla.org/kb/windows-sso) is supported for device-based Conditional Access, but "Allow Windows single sign-on for Microsoft, work, and school accounts" needs to be enabled. #### Why do I see a certificate prompt in the browser
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
+
+ Title: Delegated administration in Azure Active Directory
+description: The relationship between older delegated admin permissions and new granular delegated admin permissions in Azure Active Directory
+keywords:
++++ Last updated : 03/24/2022+++++++
+#Customer intent: As a new Azure AD identity administrator, access management requires me to understand the permissions of partners who have access to our resources.
++
+# What is delegated administration?
+
+Managing permissions for external partners is a key part of your security posture. We've added capabilities to the Azure Active Directory (Azure AD) admin portal experience so that an administrator can see the relationships that their Azure AD tenant has with Microsoft Cloud Service Providers (CSP) who can manage the tenant. This permissions model is called delegated administration. This article introduces the Azure AD administrator to the relationship between the old Delegated Admin Permissions (DAP) permission model and the new Granular Delegated Admin Permissions (GDAP) permission model.
+
+## Delegated administration relationships
+
+Delegated administration relationships enable technicians at a Microsoft CSP to administer Microsoft services such as Microsoft 365, Dynamics 365, and Azure on behalf of your organization. These technicians administer these services for you using the same roles and permissions as administrators in your organization. These roles are assigned to security groups in the CSP's Azure AD tenant, which is why CSP technicians don't need user accounts in your tenant in order to administer services for you.
+
+There are two types of delegated administration relationships that are visible in the Azure AD admin portal experience. The newer type of delegated admin relationship is known as Granular Delegated Admin Permission. The older type of relationship is known as Delegated Admin Permission. You can see both types of relationship if you sign in to the Azure AD admin portal and then select **Delegated administration**.
+
+## Granular delegated admin permission
+
+When a Microsoft CSP creates a GDAP relationship request for your tenant, a GDAP relationship is created in the tenant when a global administrator approves the request. The GDAP relationship request specifies:
+
+* The CSP partner tenant
+* The roles that the partner needs to delegate to their technicians
+* The expiration date
+
+If you have any GDAP relationships in your tenant, you will see a notification banner on the **Delegated Administration** page in the Azure AD admin portal. Select the notification banner to see and manage GDAP relationships in the **Partners** page in Microsoft Admin Center.
+
+## Delegated admin permission
+
+When a Microsoft CSP creates a DAP relationship request for your tenant, a DAP relationship is created in the tenant when a global administrator approves the request. All DAP relationships enable the CSP to delegate Global administrator and Helpdesk administrator roles to their technicians. Unlike a GDAP relationship, a DAP relationship persists until it is revoked either by you or by your CSP.
+
+If you have any DAP relationships in your tenant, you will see them in the list on the Delegated Administration page in the Azure AD admin portal. To remove a DAP relationship for a CSP, follow the link to the Partners page in the Microsoft Admin Center.
+
+## Next steps
+
+If you're a beginning Azure AD administrator, get the basics down in [Azure Active Directory Fundamentals](../fundamentals/index.yml).
active-directory Entitlement Management Access Package First https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md
For a step-by-step demonstration of the process of deploying Azure Active Direct
>[!VIDEO https://www.youtube.com/embed/zaaKvaaYwI4]
-This rest of this article uses the Azure portal to configure and demonstrate Azure AD entitlement management. You can also follow a tutorial to [manage access to resources via Microsoft Graph](/graph/tutorial-access-package-api?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json) or [via PowerShell](/powershell/microsoftgraph/tutorial-entitlement-management?view=graph-powershell-beta).
+The rest of this article uses the Azure portal to configure and demonstrate Azure AD entitlement management.
## Prerequisites
active-directory Entitlement Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-overview.md
Here are some example license scenarios to help you determine the number of lice
- If you are interested in using the Azure portal to manage access to resources, see [Tutorial: Manage access to resources - Azure portal](entitlement-management-access-package-first.md). - if you are interested in using Microsoft Graph to manage access to resources, see [Tutorial: manage access to resources - Microsoft Graph](/graph/tutorial-access-package-api?toc=/azure/active-directory/governance/toc.json&bc=/azure/active-directory/governance/breadcrumb/toc.json)-- If you are interested in using Microsoft PowerShell to manage access to resources, see [Tutorial: manage access to resources - PowerShell](/powershell/microsoftgraph/tutorial-entitlement-management?view=graph-powershell-beta) - [Common scenarios](entitlement-management-scenarios.md)
active-directory Identity Governance Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-automation.md
There are two places where you can see the expiration date in the Azure portal.
## Next steps - [Create an Automation account using the Azure portal](../../automation/quickstarts/create-account-portal.md)-- [Manage access to resources in Active Directory entitlement management using Microsoft Graph PowerShell](/powershell/microsoftgraph/tutorial-entitlement-management?view=graph-powershell-beta)
active-directory Plan Connect Design Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-connect-design-concepts.md
When you are selecting the attribute for providing the value of UPN to be used i
In express settings, the assumed choice for the attribute is userPrincipalName. If the userPrincipalName attribute does not contain the value you want your users to sign in to Azure, then you must choose **Custom Installation**.
+>[!NOTE]
+>As a best practice, it's recommended that the UPN prefix contain more than one character.
+ ### Custom domain state and UPN It is important to ensure that there is a verified domain for the UPN suffix.
active-directory Delegate By Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/delegate-by-task.md
You can further restrict permissions by assigning roles at smaller scopes or by
> [!div class="mx-tableFixed"] > | Task | Least privileged role | Additional roles | > | - | | - |
-> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator) and [Groups Administrator](../roles/permissions-reference.md#groups-administrator)|[Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) |
+> | Create Azure AD Domain Services instance | [Application Administrator](../roles/permissions-reference.md#application-administrator)<br>[Groups Administrator](../roles/permissions-reference.md#groups-administrator)<br> [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor)| |
> | Perform all Azure AD Domain Services tasks | [AAD DC Administrators group](../../active-directory-domain-services/tutorial-create-management-vm.md#administrative-tasks-you-can-perform-on-a-managed-domain) | | > | Read all configuration | Reader on Azure subscription containing AD DS service | |
active-directory Code42 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/code42-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Code42 SSO
-To configure single sign-on on **Code42** side, you need to send the **App Federation Metadata Url** to [Code42 support team](mailto:idpsupport@code42.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Code42** side, you need to send the **App Federation Metadata Url** to [Code42 support team](http://gethelp.code42.com/). They set this setting to have the SAML SSO connection set properly on both sides.
### Create Code42 test user
-In this section, you create a user called B.Simon in Code42. Work with [Code42 support team](mailto:idpsupport@code42.com) to add the users in the Code42 platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called B.Simon in Code42. Work with [Code42 support team](http://gethelp.code42.com/) to add the users in the Code42 platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to Oracle Enterprise Business Suite
+description: Learn to implement SHA with header-based SSO to Oracle Enterprise Business Suite using F5's BIG-IP Easy Button guided configuration
++++++++ Last updated : 03/28/2022+++
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to Oracle Enterprise Business Suite
+
+In this article, learn to secure Oracle Enterprise Business Suite (EBS) using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure AD provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](../conditional-access/overview.md)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage Identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all the benefits, see the article on [F5 BIG-IP and Azure AD integration](/azure/active-directory/manage-apps/f5-aad-integration) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+This scenario looks at the classic **Oracle EBS application** that uses **HTTP authorization headers** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but it is costly, requires careful planning, and introduces risk of potential downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the app enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The secure hybrid access solution for this scenario is made up of several components including a multi-tiered Oracle architecture:
+
+**Oracle EBS Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required session attributes.
+
+**Oracle Internet Directory (OID):** Hosts the user database. BIG-IP checks via LDAP for authorization attributes.
+
+**Oracle AccessGate:** Validates authorization attributes through a back channel with the OID service, before issuing EBS access cookies.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the Oracle application.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-oracle-ebs/sp-initiated-flow.png)
+
+| Steps| Description |
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected back to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP performs an LDAP query for the user's Unique ID (UID) attribute |
+| 6| BIG-IP injects the returned UID attribute as the user_orclguid header in the EBS session cookie request to Oracle AccessGate |
+| 7| Oracle AccessGate validates the UID against the Oracle Internet Directory (OID) service and issues the EBS access cookie |
+| 8| EBS user headers and cookie are sent to the application, which returns the payload to the user |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or deploy a BIG-IP Virtual Edition (VE) in Azure.
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](/azure/active-directory/manage-apps/f5-bigip-deployment-guide#ssl-profile) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing Oracle EBS suite including Oracle AccessGate and an LDAP-enabled OID (Oracle Internet Directory)
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template. With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. Deployment and policy management are handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](../develop/quickstart-register-app.md).
+
+This first step creates a tenant app registration that will be used to authorize **Easy Button** access to Microsoft Graph. Through these permissions, the BIG-IP will be allowed to push the configurations required to establish a trust between a SAML SP instance for the published application, and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) with Application Administrator rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, F5 BIG-IP Easy Button
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. Go to **Certificates & Secrets**, generate a new **Client secret** and note it down
+
+10. Go to **Overview**, note the **Client ID** and **Tenant ID**
+
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-oracle-ebs/easy-button-template.png)
+
+2. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-oracle-ebs/config-steps.png)
+
+3. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-oracle-ebs/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings, so they can be re-used for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** that enables an admin to easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID**, and **Client Secret** you noted when registering the Easy Button client in your tenant.
+
+4. Before you select **Next**, confirm the BIG-IP can successfully connect to your tenant.
+
+ ![ Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-oracle-ebs/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-oracle-ebs/service-provider-settings.png)
+
+ Next, under the optional **Security Settings**, specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides assurance that the content of tokens can't be intercepted, and personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle-ebs/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle-ebs/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM uses to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP uploads to Azure AD for encrypting the issued SAML assertions.
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-oracle-ebs/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant. Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **Oracle E-Business Suite > Add**.
+
+![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-oracle-ebs/azure-configuration-add-big-ip-application.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users see on the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. In the **Sign On URL (optional)** enter the public FQDN of the EBS application being secured, along with the default path for the Oracle EBS homepage
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-oracle-ebs/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-oracle-ebs/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing; otherwise, all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-oracle-ebs/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims** tab shows the default claims to issue for the new application. It also lets you configure more claims.
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-oracle-ebs/user-attributes-claims.png)
+
+You can include additional Azure AD attributes if necessary, but the Oracle EBS scenario only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+1. Enable the **Advanced Settings** option
+
+2. Check the **LDAP Attributes** check box
+
+3. Select **Create New** in **Choose Authentication Server**
+
+4. Select **Use pool** or **Direct** server connection mode, depending on your setup, and provide the **Server Address** of the target LDAP service. If using a single LDAP server, select **Direct**.
+
+5. Enter **Service Port** as 3060 (Default), 3161 (Secure), or any other port your Oracle LDAP service operates on
+
+6. Enter the **Base Search DN** (distinguished name) from which to search. This search DN is used to search groups across a whole directory.
+
+7. Set the **Admin DN** to the exact distinguished name for the account the APM will use to authenticate for LDAP queries, along with its password
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-oracle-ebs/additional-user-attributes.png)
+
+8. Leave all default **LDAP Schema Attributes**
+
+ ![Screenshot for LDAP schema attributes](./media/f5-big-ip-oracle-ebs/ldap-schema-attributes.png)
+
+9. Under **LDAP Query Properties**, set the **Search Dn** to the base node of the LDAP server from which to search for user objects
+
+10. Add the name of the user object attribute that must be returned from the LDAP directory. For EBS, the default is **orclguid**
+
+ ![Screenshot for LDAP query properties.png](./media/f5-big-ip-oracle-ebs/ldap-query-properties.png)
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all Conditional Access policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+ The selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the policy is not enforced.
+
+ ![Screenshot for CA policies](./media/f5-big-ip-oracle-ebs/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing.
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-oracle-ebs/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing node or specify an IP and port for the servers hosting the Oracle EBS application.
+
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle-ebs/application-pool.png)
+
+4. The **Access Gate Pool** specifies the servers Oracle EBS uses for mapping an SSO authenticated user to an Oracle E-Business Suite session. Update **Pool Servers** with the IP and port of the Oracle application servers hosting the application
+
+ ![Screenshot for AccessGate pool](./media/f5-big-ip-oracle-ebs/access-gate-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO to published applications. As the Oracle EBS application expects headers, enable **HTTP Headers** and enter the following properties.
+
+* **Header Operation:** replace
+* **Header Name:** USER_NAME
+* **Header Value:** %{session.sso.token.last.username}
+
+* **Header Operation:** replace
+* **Header Name:** USER_ORCLGUID
+* **Header Value:** %{session.ldap.last.attr.orclguid}
+
+ ![ Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle-ebs/sso-and-http-headers.png)
+
+>[!NOTE]
+>APM session variables defined within curly brackets are case-sensitive. For example, entering OrclGUID when the attribute name is defined as orclguid will cause an attribute mapping failure
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Refer to [F5's docs](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered here, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users sign off. When the Easy Button instantiates a SAML application in your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Azure AD MyApps portal also terminate the session between the BIG-IP and a client.
+
+Along with this, the SAML federation metadata for the published application is also imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This ensures SP-initiated sign-outs terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out of the application.
+
+If the BIG-IP webtop portal is used to access published applications, a sign-out from there is processed by the APM, which also calls the Azure AD sign-out endpoint. But consider a scenario where the BIG-IP webtop portal isn't used; the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this. For this reason, SP-initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it can redirect your client to either the Azure AD SAML sign-out endpoint or the BIG-IP sign-out endpoint. The URL of the SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If making a change to the app isn't an option, consider having the BIG-IP listen for the application's sign-out call and trigger SLO when it detects the request. Refer to our [Oracle PeopleSoft SLO guidance](/azure/active-directory/manage-apps/f5-big-ip-oracle-peoplesoft-easy-button#peoplesoft-single-logout) for using BIG-IP iRules to achieve this. More details are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications.
+
+## Next steps
+
+From a browser, connect to the **Oracle EBS application's external URL** or select the application's icon in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](/azure/active-directory/manage-apps/f5-big-ip-header-advanced). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+
+![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle-ebs/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI. We therefore recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access a SHA protected application can be due to any number of factors. BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list then **OK**
+
+Reproduce your issue, then inspect the logs, but remember to switch this back when finished as verbose mode generates lots of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In that case, head to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes from Azure AD or another source
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
+
+The following command from a bash shell validates that the APM service account used for LDAP queries can successfully authenticate and query a user object:
+
+```bash
+ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=oraclef5,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
+```
+
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this [F5 knowledge article on LDAP Query](https://techdocs.f5.com/en-us/bigip-16-1-0/big-ip-access-policy-manager-authentication-methods/ldap-query.html).
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
az aks create \
## Update a cluster's API server authorized IP ranges
-To update the API server authorized IP ranges on an existing cluster, use [az aks update][az-aks-update] command and use the *`--api-server-authorized-ip-ranges`*,--load-balancer-outbound-ip-prefixes*, *`--load-balancer-outbound-ips`*, or--load-balancer-outbound-ip-prefixes* parameters.
+To update the API server authorized IP ranges on an existing cluster, use the [az aks update][az-aks-update] command with the *`--api-server-authorized-ip-ranges`*, *`--load-balancer-outbound-ip-prefixes`*, or *`--load-balancer-outbound-ips`* parameters.
The following example updates API server authorized IP ranges on the cluster named *myAKSCluster* in the resource group named *myResourceGroup*. The IP address range to authorize is *73.140.245.0/24*:
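A sketch of that update, using the cluster, resource group, and IP range named in the paragraph above (the article's own example may differ slightly):

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --api-server-authorized-ip-ranges 73.140.245.0/24
```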
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
To update a cluster to use OIDC Issuer:

```azurecli-interactive
az aks update -n aks -g myResourceGroup --enable-oidc-issuer
```
+### Show the OIDC Issuer URL
+
+```azurecli-interactive
+az aks show -n aks -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv
+```
+ ## Next steps - Learn how to [upgrade the node images](node-image-upgrade.md) in your cluster.
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
metadata:
namespace: kube-system data: test.server: | # you may select any name here, but it must end with the .server file extension
- <domain to be rewritten>.com:53 {
- log
- errors
- rewrite stop {
- name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
- answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com
- }
- forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
-}
+ <domain to be rewritten>.com:53 {
+ log
+ errors
+ rewrite stop {
+ name regex (.*)\.<domain to be rewritten>.com {1}.default.svc.cluster.local
+ answer name (.*)\.default\.svc\.cluster\.local {1}.<domain to be rewritten>.com
+ }
+ forward . /etc/resolv.conf # you can redirect this to a specific DNS server such as 10.0.0.10, but that server must be able to resolve the rewritten domain name
+ }
``` > [!IMPORTANT]
aks Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md
Azure Kubernetes Service (AKS) provides additional, supported functionality for
## Add-ons
-Add-ons provide extra capabilities for your AKS cluster and their installation and configuration is managed Azure. Use `az aks addon` to manage all add-ons for your cluster.
+Add-ons provide extra capabilities for your AKS cluster and their installation and configuration is managed by Azure. Use `az aks addon` to manage all add-ons for your cluster.
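For example, to check which add-ons are already enabled on a cluster — a minimal sketch, assuming a hypothetical cluster named *myAKSCluster* in *myResourceGroup*:

```azurecli-interactive
# List the add-ons installed on the cluster
az aks addon list --resource-group myResourceGroup --name myAKSCluster --output table
```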
The following table shows the available add-ons.
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
The following parameters can be leveraged to configure Private DNS Zone.
- "system", which is also the default value. If the --private-dns-zone argument is omitted, AKS will create a Private DNS Zone in the Node Resource Group. - "none", defaults to public DNS which means AKS will not create a Private DNS Zone. -- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for Azure global cloud: `privatelink.<region>.azmk8s.io` or `<subzone>.privatelink.<region>.azmk8s.io`. You will need the Resource ID of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `vnet contributor` roles.
+- "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID", which requires you to create a Private DNS Zone in this format for Azure global cloud: `privatelink.<region>.azmk8s.io` or `<subzone>.privatelink.<region>.azmk8s.io`. You will need the Resource ID of that Private DNS Zone going forward. Additionally, you will need a user assigned identity or service principal with at least the `private dns zone contributor` and `network contributor` roles.
- If the Private DNS Zone is in a different subscription than the AKS cluster, you need to register the Microsoft.ContainerService resource provider in both subscriptions. - "fqdn-subdomain" can be utilized with "CUSTOM_PRIVATE_DNS_ZONE_RESOURCE_ID" only to provide subdomain capabilities to `privatelink.<region>.azmk8s.io` (see the CLI sketch below)
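The following is a minimal sketch of creating a private cluster with a custom Private DNS Zone and subdomain. The resource names are hypothetical, the placeholder IDs must be supplied, and the identity passed to `--assign-identity` needs the roles noted above:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myPrivateAKSCluster \
    --enable-private-cluster \
    --assign-identity <user-assigned-identity-resource-id> \
    --private-dns-zone <custom-private-dns-zone-resource-id> \
    --fqdn-subdomain <subdomain>
```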
Once the A record is created, link the private DNS zone to the virtual network t
[availability-zones]: availability-zones.md [command-invoke]: command-invoke.md [container-registry-private-link]: ../container-registry/container-registry-private-link.md
-[virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server
+[virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
The following limitations apply when you integrate Azure Dedicated Host with Azu
* An existing agent pool can't be converted from non-ADH to ADH or ADH to non-ADH. * It is not supported to update agent pool from host group A to host group B.
-* Fault domain count can only be 1.
+* Using ADH across subscriptions isn't supported.
## Add a Dedicated Host Group to an AKS cluster
az vm host group create \
--name myHostGroup \ -g myDHResourceGroup \ -z 1 \ --platform-fault-domain-count 1
+--platform-fault-domain-count 5
--automatic-placement true ```
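Once the host group exists, a node pool can be placed on it. The following is a minimal sketch rather than the article's exact steps: the cluster and node pool names are hypothetical, and the VM size must be compatible with the host group's SKU:

```azurecli-interactive
# Add a node pool that schedules its nodes onto the dedicated host group
az aks nodepool add \
    --resource-group myDHResourceGroup \
    --cluster-name myAKSCluster \
    --name adhpool \
    --node-count 1 \
    --node-vm-size Standard_D4s_v3 \
    --host-group-id $(az vm host group show --name myHostGroup -g myDHResourceGroup --query id -o tsv)
```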
In this article, you learned how to create an AKS cluster with a Dedicated host,
[aks-faq]: faq.md [azure-cli-install]: /cli/azure/install-azure-cli [dedicated-hosts]: ../virtual-machines/dedicated-hosts.md
-[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
+[az-vm-host-group-create]: /cli/azure/vm/host/group#az_vm_host_group_create
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate App Service Environment v2 to App Se
description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 using the migration feature Previously updated : 2/2/2022 Last updated : 4/5/2022 zone_pivot_groups: app-service-cli-portal
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
From the [Azure portal](https://portal.azure.com), navigate to the **Overview** page for the App Service Environment you'll be migrating. The platform will validate if migration is supported for your App Service Environment. Wait a couple seconds after the page loads for this validation to take place.
-If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the Overview page, a new item in the left-hand side menu called **Migration (preview)**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
+If migration is supported for your App Service Environment, there are three ways to access the migration feature. These methods include a banner at the top of the Overview page, a new item in the left-hand side menu called **Migration**, and an info box on the **Configuration** page. Select any of these methods to move on to the next step in the migration process.
![migration access points](./media/migration/portal-overview.png) ![configuration page view](./media/migration/configuration-migration-support.png)
-If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+If you don't see these elements, your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state (which blocks migration). If your environment [won't be supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
The migration page will guide you through the series of steps to complete the migration.
The migration page will guide you through the series of steps to complete the mi
## 2. Generate IP addresses for your new App Service Environment v3
-Under **Generate new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time. If you may see a message a few minutes after starting this step asking you to refresh the page, select refresh as shown in the sample to allow your new IP addresses to appear.
+Under **Get new IP addresses**, confirm you understand the implications and start the process. This step will take about 15 minutes to complete. You won't be able to scale or make changes to your existing App Service Environment during this time. If after 15 minutes you don't see your new IP addresses, select refresh as shown in the sample to allow your new IP addresses to appear.
![pre-migration request to refresh](./media/migration/pre-migration-refresh.png)
When the previous step finishes, you'll be shown the IP addresses for your new A
## 4. Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You'll need to confirm your subnet is delegated properly and/or update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed.
![ux subnet delegation sample](./media/migration/subnet-delegation-ux.png)
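If you prefer to verify or set the delegation from the CLI instead of the portal link, the following is a minimal sketch with hypothetical resource group, virtual network, and subnet names:

```azurecli-interactive
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myAseVnet \
    --name myAseSubnet \
    --delegations Microsoft.Web/hostingEnvironments
```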
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
description: Overview of the migration feature for migration to App Service Environment v3 Previously updated : 3/29/2022 Last updated : 4/5/2022
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
To configure environment variables for the web app from VS Code, you must have t
|:-|--:| | [!INCLUDE [VS Code connect app to postgres step 1](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-azure-extension.png" alt-text="A screenshot showing how to locate the Azure Tools extension in VS Code." ::: | | [!INCLUDE [VS Code connect app to postgres step 2](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-create-setting.png" alt-text="A screenshot showing how to add a setting to the App Service in VS Code." ::: |
-| [!INCLUDE [VS Code connect app to postgres step 3](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a.png" alt-text="A screenshot showing adding setting name for app service to connect to Postgresql database in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b.png" alt-text="A screenshot showing adding setting value for app service to connect to Postgresql database in VS Code." ::: |
+| [!INCLUDE [VS Code connect app to postgres step 3](<./includes/tutorial-python-postgresql-app/connect-postgres-to-app-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-a.png" alt-text="A screenshot showing adding setting name for app service to connect to PostgreSQL database in VS Code." ::: :::image type="content" source="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b-240px.png" lightbox="./media/tutorial-python-postgresql-app/connect-app-to-database-settings-example-b.png" alt-text="A screenshot showing adding setting value for app service to connect to PostgreSQL database in VS Code." ::: |
### [Azure CLI](#tab/azure-cli)
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Extract text, tables, structure, key-value pairs, and named entities from docume
// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
- const poller = await client.beginAnalyzeDocuments("prebuilt-document", formUrl);
+ const poller = await client.beginAnalyzeDocument("prebuilt-document", formUrl);
const { keyValuePairs,
Extract text, selection marks, text styles, table structures, and bounding regio
async function main() { const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
- const poller = await client.beginAnalyzeDocuments("prebuilt-layout", formUrl);
+ const poller = await client.beginAnalyzeDocument("prebuilt-layout", formUrl);
const { pages,
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
- const poller = await client.beginAnalyzeDocuments(PrebuiltModels.Invoice, invoiceUrl);
+ const poller = await client.beginAnalyzeDocument(PrebuiltModels.Invoice, invoiceUrl);
const { documents: [result]
azure-arc Create Complete Managed Instance Directly Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-complete-managed-instance-directly-connected.md
After the connect command completes successfully, you can view the shadow object
The next step is to create the data controller in directly connected mode via the Azure portal. Use the same subscription and resource group that you used to [create a cluster](#create-a-cluster). 1. In the portal, locate the resource group from the previous step.
-1. Select the **Kubernetes - Azure Arc** object name.
-1. Select **Settings** > **Extensions**. Select **Add**.
-1. Select **Azure Arc data controller**.
-1. Click **Create**.
+1. From the search bar in Azure portal, search for *Azure Arc data controllers*, and select **+ Create**.
+1. Select **Azure Arc-enabled Kubernetes cluster (Direct connectivity mode)**. Select **Next: Data controller details**.
1. Specify a name for the data controller. 1. Specify a custom location (namespace).
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Table below lists API endpoints in Azure vs. Azure Government for accessing and
||Language Understanding|cognitiveservices.azure.com|cognitiveservices.azure.us </br>[Portal](https://luis.azure.us/)|| ||Personalizer|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||QnA Maker|cognitiveservices.azure.com|cognitiveservices.azure.us||
-||Speech service|See [STT API docs](../cognitive-services/speech-service/rest-speech-to-text.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../cognitive-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||
+||Speech service|See [STT API docs](../cognitive-services/speech-service/rest-speech-to-text-short.md#regions-and-endpoints)|[Speech Studio](https://speech.azure.us/)</br></br>See [Speech service endpoints](../cognitive-services/Speech-Service/sovereign-clouds.md)</br></br>**Speech translation endpoints**</br>Virginia: `https://usgovvirginia.s2s.speech.azure.us`</br>Arizona: `https://usgovarizona.s2s.speech.azure.us`</br>||
||Text Analytics|cognitiveservices.azure.com|cognitiveservices.azure.us|| ||Translator|See [Translator API docs](../cognitive-services/translator/reference/v3-0-reference.md#base-urls)|cognitiveservices.azure.us|| |**Analytics**|Azure HDInsight|azurehdinsight.net|azurehdinsight.us||
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Azure Maps uses a key-based authentication scheme. When you create your account,
> [!NOTE] > Azure Maps shares customer-provided address/location queries with third-party TomTom for mapping functionality purposes. These queries aren't linked to any customer or end user when shared with TomTom and can't be used to identify individuals.
-Microsoft is currently in the process of adding TomTom, Moovit, and AccuWeather to the Online Services Subcontractor List.
+Microsoft is currently in the process of adding TomTom and AccuWeather to the Online Services Subcontractor List.
## Supported regions
azure-monitor Alerts Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log.md
You can also [create log alert rules using Azure Resource Manager templates](../
## Enable recommended out-of-the-box alert rules in the Azure portal (preview) > [!NOTE]
-> The alert recommendations feature is currently in preview and is only enabled for VMs.
+> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
-If you don't have any alert rules defined for the selected resource, you can enable our recommended out-of-the-box alert rules.
+If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can enable our recommended out-of-the-box alert rules.
:::image type="content" source="media/alerts-managing-alert-instances/enable-recommended-alert-rules.jpg" alt-text="Screenshot of alerts page with link to recommended alert rules.":::
azure-monitor Alerts Managing Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-managing-alert-instances.md
You can go to the alerts page in any of the following ways:
The **Alerts** page summarizes all your alert instances across Azure. ### Alert Recommendations (preview) > [!NOTE]
-> The alert recommendations feature is currently in preview and is only enabled for VMs.
+> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
-If you don't have any alerts defined for the selected resource, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
+If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
:::image type="content" source="media/alerts-managing-alert-instances/enable-recommended-alert-rules.jpg" alt-text="Screenshot of alerts page with link to recommended alert rules."::: ### Alerts summary pane
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
You can alert on metrics and logs, as described in [monitoring data sources](./.
The Alerts page provides a summary of the alerts created in the last 24 hours. ### Alert Recommendations (preview) > [!NOTE]
-> The alert recommendations feature is currently in preview and is only enabled for VMs.
+> The alert rule recommendations feature is currently in preview and is only enabled for VMs.
-If you don't have any alerts defined for the selected resource, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
+If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or [enable recommended out-of-the-box alert rules in the Azure portal (preview)](alerts-log.md#enable-recommended-out-of-the-box-alert-rules-in-the-azure-portal-preview).
:::image type="content" source="media/alerts-managing-alert-instances/enable-recommended-alert-rules.jpg" alt-text="Screenshot of alerts page with link to recommended alert rules."::: ### Alerts summary pane
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
var appInsights = new ApplicationInsights({
instrumentationKey: 'YOUR_INSTRUMENTATION_KEY_GOES_HERE', enableAutoRouteTracking: true, extensions: [reactPlugin]
- }
} }); appInsights.loadAppInsights();
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
The key value pairs provide an easy way for users to define a prefix suffix comb
## Scenario overview
-Customer scenarios where we visualize this having the most impact:
+Scenarios most affected by this change:
- Firewall exceptions or proxy redirects
Customer scenarios where we visualize this having the most impact:
### Finding my connection string?
-Your connection string is displayed on the Overview blade of your Application Insights resource.
+Your connection string is displayed on the Overview section of your Application Insights resource.
![connection string on overview blade](media/overview-dashboard/overview-connection-string.png)
See also: [Regions that require endpoint modification](./custom-endpoints.md#reg
`InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;`
-In this example, this connection string specifies the endpoint suffix and the SDK will construct service endpoints.
+In this example, the connection string specifies the endpoint suffix and the SDK will construct service endpoints.
- Authorization scheme defaults to "ikey" - Instrumentation Key: 00000000-0000-0000-0000-000000000000
In this example, this connection string specifies the endpoint suffix and the SD
`InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://custom.com:111/;LiveEndpoint=https://custom.com:222/;ProfilerEndpoint=https://custom.com:333/;SnapshotEndpoint=https://custom.com:444/;`
-In this example, this connection string specifies explicit overrides for every service. The SDK will use the exact endpoints provided without modification.
+In this example, the connection string specifies explicit overrides for every service. The SDK will use the exact endpoints provided without modification.
- Authorization scheme defaults to "ikey" - Instrumentation Key: 00000000-0000-0000-0000-000000000000
In this example, this connection string specifies explicit overrides for every s
- Profiler: `https://custom.com:333/` - Debugger: `https://custom.com:444/`
+### Connection string with explicit region
+
+`InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://southcentralus.in.applicationinsights.azure.com/`
+
+In this example, the connection string specifies the South Central US region.
+
+- Authorization scheme defaults to "ikey"
+- Instrumentation Key: 00000000-0000-0000-0000-000000000000
+- The regional service URIs are based on the explicit override values:
+ - Ingestion: `https://southcentralus.in.applicationinsights.azure.com/`
+
+Run the following command in the [Azure Command-Line Interface (CLI)](https://docs.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations) to list available regions.
+
+`az account list-locations -o table`
## How to set a connection string
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId>
```
-To get the value for `clientIdOfSPNOrMsi`, you can run the command `az aks show` as shown in the following example. If the `servicePrincipalProfile` object has a valid `clientid` value, you can use that. Otherwise, if it's set to `msi`, you need to pass in the client ID from `addonProfiles.omsagent.identity.clientId`.
+To get the value for `clientIdOfSPNOrMsi`, you can run the command `az aks show` as shown in the following example. If the `servicePrincipalProfile` object has a valid `objectid` value, you can use that. Otherwise, if it's set to `msi`, you need to pass in the Object ID from `addonProfiles.omsagent.identity.objectId`.
```azurecli az login
azure-monitor Design Logs Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/design-logs-deployment.md
Title: Designing your Azure Monitor Logs deployment | Microsoft Docs description: This article describes the considerations and recommendations for customers preparing to deploy a workspace in Azure Monitor. -- Previously updated : 09/20/2019+++ Last updated : 05/04/2022
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
- [Application Monitoring for Azure App Service and Node.js](./app/azure-web-apps-nodejs.md) - [Enable Snapshot Debugger for .NET apps in Azure App Service](./app/snapshot-debugger-appservice.md) - [Profile live Azure App Service apps with Application Insights](./app/profiler.md)-- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitor/app/change-analysis-visualizations.md)
+- [Visualizations for Application Change Analysis (preview)](/azure/azure-monitor/app/change-analysis-visualizations)
### Autoscale
This article lists significant changes to Azure Monitor documentation.
**Updated articles** - [Troubleshoot VM insights guest health (preview)](vm/vminsights-health-troubleshoot.md)-- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
+- [Create interactive reports VM insights with workbooks](vm/vminsights-workbooks.md)
azure-netapp-files Dynamic Change Volume Service Level https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/dynamic-change-volume-service-level.md
na Previously updated : 03/22/2022 Last updated : 04/05/2022 # Dynamically change the service level of a volume
The capacity pool that you want to move the volume to must already exist. The ca
* After the volume is moved to another capacity pool, you will no longer have access to the previous volume activity logs and volume metrics. The volume will start with new activity logs and metrics under the new capacity pool.
-* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*).
+* If you move a volume to a capacity pool of a higher service level (for example, moving from *Standard* to *Premium* or *Ultra* service level), you must wait at least seven days before you can move that volume *again* to a capacity pool of a lower service level (for example, moving from *Ultra* to *Premium* or *Standard*). You can always change to a higher service level without any wait time (see the CLI sketch that follows).
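The sketch below shows one way to change a volume's service level from the CLI by moving it to a different capacity pool; the account, pool, and volume names are hypothetical:

```azurecli-interactive
# Move myVolume to myPremiumPool, changing its service level to that pool's level
az netappfiles volume pool-change \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myStandardPool \
    --name myVolume \
    --new-pool-resource-id $(az netappfiles pool show \
        --resource-group myResourceGroup \
        --account-name myNetAppAccount \
        --name myPremiumPool \
        --query id --output tsv)
```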
## Register the feature
azure-netapp-files Monitor Volume Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/monitor-volume-capacity.md
na Previously updated : 04/30/2021 Last updated : 04/04/2022 # Monitor the capacity of a volume
The following snapshot shows volume capacity reporting in Linux:
The *available space* is accurate using the `df` command. However, the *consumed/used space* will be an estimate when snapshots are generated on the volume. The [consumed snapshot capacity](azure-netapp-files-cost-model.md#capacity-consumption-of-snapshots) counts towards the total consumed space on the volume. To get the absolute volume consumption, including the capacity used by snapshots, use the [Azure NetApp Metrics](azure-netapp-files-metrics.md#volumes) in the Azure portal.
+> [!NOTE]
+> The `du` command doesn't account for the space used by snapshots generated in the volume. As such, it's not recommended for determining the available capacity in a volume.
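For instance, a quick check of what the volume itself reports — a minimal sketch, assuming a hypothetical mount path:

```bash
# Available space is reported accurately; used space is only an estimate once snapshots exist
df -h /mnt/anfvol
```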
+ ## Using Azure portal Azure NetApp Files leverages the standard [Azure Monitor](../azure-monitor/overview.md) functionality. As such, you can use Azure Monitor to monitor Azure NetApp Files volumes.
The REST API specification and example code for Azure NetApp Files are available
* [Understand volume quota](volume-quota-introduction.md) * [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md) * [Resize the capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
-* [Capacity management FAQs](faq-capacity-management.md)
+* [Capacity management FAQs](faq-capacity-management.md)
azure-percept Azure Percept Dk Datasheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azure-percept-dk-datasheet.md
Last updated 02/16/2021
|Included in Box |1x Azure Percept DK Carrier Board <br> 1x [Azure Percept Vision](./azure-percept-vision-datasheet.md) <br> 1x RGB Sensor (Camera) <br> 1x USB 3.0 Type C Cable <br> 1x DC Power Cable <br> 1x AC/DC Converter <br> 2x Wi-Fi Antennas | |OS  |[CBL-Mariner](https://github.com/microsoft/CBL-Mariner) | |Management Control Plane |Azure Device Update (ADU) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) |
-|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](../iot-dps/index.yml) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
+|Supported Software and Services |Azure Device Update <br> [Azure IoT](https://azure.microsoft.com/overview/iot/) <br> [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) <br> [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/) and [Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/internet-of-things?page=1) <br> [Azure Container Registry](https://azure.microsoft.com/services/container-registry/) <br> [Azure Mariner OS with Connectivity](https://github.com/microsoft/CBL-Mariner) <br> [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) <br> [ONNX Runtime](https://www.onnxruntime.ai/) <br> [TensorFlow](https://www.tensorflow.org/) <br> [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) <br> IoT Plug and Play <br> [Azure Device Provisioning Service (DPS)](../iot-dps/index.yml) <br> [Azure Cognitive Services](https://azure.microsoft.com/services/cognitive-services/) <br> [Power BI](https://powerbi.microsoft.com/) |
|General Processor |NXP iMX8m (Azure Percept DK Carrier Board) | |AI Acceleration |1x Intel Movidius Myriad X Integrated ISP (Azure Percept Vision) | |Sensors and Visual Indicators |Sony IMX219 Camera sensor with 6P Lens<br>Resolution: 8MP at 30FPS, Distance: 50 cm - infinity<br>FoV: 120-degrees diagonal, Color: Wide Dynamic Range, Fixed Focus Rolling Shutter|
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 12/07/2021 Last updated : 04/05/2022
The `bicep install` and `bicep upgrade` commands don't work in an air-gapped env
- **Linux** 1. Download **bicep-linux-x64** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
- 1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine.
+ 1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine, and rename the file to **bicep** (a shell sketch of this step follows the list).
- **macOS** 1. Download **bicep-osx-x64** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
- 1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine.
+ 1. Copy the executable to the **$HOME/.azure/bin** directory on an air-gapped machine, and rename the file to **bicep**.
- **Windows** 1. Download **bicep-win-x64.exe** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment.
- 1. Copy the executable to the **%UserProfile%/.azure/bin** directory on an air-gapped machine.
+ 1. Copy the executable to the **%UserProfile%/.azure/bin** directory on an air-gapped machine, and rename the file to **bicep.exe**.
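A shell sketch of the Linux step above; the macOS and Windows steps follow the same pattern with their own asset names and directories. The downloaded asset is assumed to be in the current directory:

```bash
# Copy the previously downloaded release asset into the Azure CLI bin directory and rename it
mkdir -p "$HOME/.azure/bin"
cp ./bicep-linux-x64 "$HOME/.azure/bin/bicep"
chmod +x "$HOME/.azure/bin/bicep"
```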
## Install the nightly builds
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
Title: Resource naming restrictions description: Shows the rules and restrictions for naming Azure resources. Previously updated : 04/04/2022 Last updated : 04/05/2022 # Naming rules and restrictions for Azure resources
In the following tables, the term alphanumeric refers to:
> | | | | | > | configurationStores | resource group | 5-50 | Alphanumerics, underscores, and hyphens. |
+## Microsoft.AppPlatform
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | spring | resource group | 4-32 | Lowercase letters, numbers, and hyphens. |
+ ## Microsoft.Authorization > [!div class="mx-tableFixed"]
In the following tables, the term alphanumeric refers to:
> | mediaservices / liveEvents / liveOutputs | Live event | 1-256 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. | > | mediaservices / streamingEndpoints | Media service | 1-24 | Alphanumerics and hyphens.<br><br>Start with alphanumeric. |
+## Microsoft.NetApp
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | netAppAccounts | resource group | 1-128 | Alphanumerics, underscores, periods, and hyphens. |
+> | netAppAccounts / capacityPools | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
+> | netAppAccounts / snapshotPolicies | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
+> | netAppAccounts / volumeGroups | NetApp account | 1-64 | Alphanumerics, underscores, periods, and hyphens.<br><br>Start with alphanumeric. |
+ ## Microsoft.Network > [!div class="mx-tableFixed"]
In the following tables, the term alphanumeric refers to:
> | | | | | > | capacities | region | 3-63 | Lowercase letters or numbers<br><br>Start with lowercase letter. |
+## Microsoft.Quantum
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | workspaces | region | 2-54 | Alphanumerics and hyphens.<br><br>Can't start or end with hyphen. |
+ ## Microsoft.RecoveryServices > [!div class="mx-tableFixed"]
azure-signalr Signalr Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-service.md
ms.devlang: azurecli Previously updated : 12/08/2021 Last updated : 03/30/2022
-# Create a SignalR Service
+# Create a SignalR Service
This sample script creates a new Azure SignalR Service resource in a new resource group with a random name.
This sample script creates a new Azure SignalR Service resource in a new resourc
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-This script creates a new SignalR Service resource and a new resource group.
-
-```azurecli-interactive
-#!/bin/bash
-
-# Generate a unique suffix for the service name
-let randomNum=$RANDOM*$RANDOM
-# Generate a unique service and group name with the suffix
-SignalRName=SignalRTestSvc$randomNum
-#resource name must be lowercase
-mySignalRSvcName=${SignalRName}
-myResourceGroupName=$SignalRName"Group"
+### Run the script
-# Create resource group
-az group create --name $myResourceGroupName --location eastus
-# Create the Azure SignalR Service resource
-az signalr create \
- --name $mySignalRSvcName \
- --resource-group $myResourceGroupName \
- --sku Standard_S1 \
- --unit-count 1 \
- --service-mode Default
+## Clean up resources
-# Get the SignalR primary connection string
-primaryConnectionString=$(az signalr key list --name $mySignalRSvcName \
- --resource-group $myResourceGroupName --query primaryConnectionString -o tsv)
-echo "$primaryConnectionString"
+```azurecli
+az group delete --name $resourceGroup
```
-Make a note of the actual name generated for the new resource group. You will use that resource group name when you want to delete all group resources.
--
-## Script explanation
+## Sample reference
Each command in the table links to command specific documentation. This script uses the following commands:
Each command in the table links to command specific documentation. This script u
| [az signalr create](/cli/azure/signalr#az-signalr-create) | Creates an Azure SignalR Service resource. | | [az signalr key list](/cli/azure/signalr/key#az-signalr-key-list) | List the keys, which will be used by your application when pushing real-time content updates with SignalR. | - ## Next steps For more information on the Azure CLI, see [Azure CLI documentation](/cli/azure).
azure-signalr Signalr Cli Create With App Service Github Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service-github-oauth.md
ms.devlang: azurecli Previously updated : 04/22/2018 Last updated : 03/30/2022
This sample script creates a new Azure SignalR Service resource, which is used t
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you choose to install and use the CLI locally, this article requires that you are running the Azure CLI version 2.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI]( /cli/azure/install-azure-cli).
+## Sample scripts
-## Sample script
-This script uses the *signalr* extension for the Azure CLI. Execute the following command to install the *signalr* extension for the Azure CLI before using this sample script:
+### Create the SignalR Service with an App Service
-```azurecli-interactive
-#!/bin/bash
-#========================================================================
-#=== Update these values based on your desired deployment username ===
-#=== and password. ===
-#========================================================================
-deploymentUser=<Replace with your desired username>
-deploymentUserPassword=<Replace with your desired password>
+### Enable GitHub authentication and Git deployment for the web app
-#========================================================================
-#=== Update these values based on your GitHub OAuth App registration. ===
-#========================================================================
-GitHubClientId=<Replace with your GitHub OAuth app Client ID>
-GitHubClientSecret=<Replace with your GitHub OAuth app Client Secret>
+1. Update the values in the following script for the desired deployment username and password.
+ ```azurecli
+ deploymentUser=<Replace with your desired username>
+ deploymentUserPassword=<Replace with your desired password>
+ ```
-# Generate a unique suffix for the service name
-let randomNum=$RANDOM*$RANDOM
+2. Update the values in the following script based on your GitHub OAuth App registration.
-# Generate unique names for the SignalR service, resource group,
-# app service, and app service plan
-SignalRName=SignalRTestSvc$randomNum
-#resource name must be lowercase
-mySignalRSvcName=${SignalRName,,}
-myResourceGroupName=$SignalRName"Group"
-myWebAppName=SignalRTestWebApp$randomNum
-myAppSvcPlanName=$myAppSvcName"Plan"
+ ```azurecli
+ GitHubClientId=<Replace with your GitHub OAuth app Client ID>
+ GitHubClientSecret=<Replace with your GitHub OAuth app Client Secret>
+ ```
-# Create resource group
-az group create --name $myResourceGroupName --location eastus
+3. Add app settings to use with GitHub authentication
-# Create the Azure SignalR Service resource
-az signalr create \
- --name $mySignalRSvcName \
- --resource-group $myResourceGroupName \
- --sku Standard_S1 \
- --unit-count 1 \
- --service-mode Default
+ ```azurecli
+ az webapp config appsettings set --name $webApp --resource-group $resourceGroup --settings "GitHubClientId=$GitHubClientId"
+ az webapp config appsettings set --name $webApp --resource-group $resourceGroup --settings "GitHubClientSecret=$GitHubClientSecret"
+ ```
-# Create an App Service plan.
-az appservice plan create --name $myAppSvcPlanName --resource-group $myResourceGroupName --sku FREE
+4. Update the webapp with the desired deployment user name and password
-# Create the Web App
-az webapp create --name $myWebAppName --resource-group $myResourceGroupName --plan $myAppSvcPlanName
+ ```azurecli
+ az webapp deployment user set --user-name $deploymentUser --password $deploymentUserPassword
+ ```
-# Get the SignalR primary connection string
-primaryConnectionString=$(az signalr key list --name $mySignalRSvcName \
- --resource-group $myResourceGroupName --query primaryConnectionString -o tsv)
+5. Configure Git deployment and return the deployment URL.
-#Add an app setting to the web app for the SignalR connection
-az webapp config appsettings set --name $myWebAppName --resource-group $myResourceGroupName \
- --settings "Azure:SignalR:ConnectionString=$primaryConnectionString"
+ ```azurecli
+ az webapp deployment source config-local-git --name $webAppName --resource-group $resourceGroupName --query [url] -o tsv
+ ```
-#Add app settings to use with GitHub authentication
-az webapp config appsettings set --name $myWebAppName --resource-group $myResourceGroupName \
- --settings "GitHubClientId=$GitHubClientId"
-az webapp config appsettings set --name $myWebAppName --resource-group $myResourceGroupName \
- --settings "GitHubClientSecret=$GitHubClientSecret"
+## Clean up resources
-# Add the desired deployment user name and password
-az webapp deployment user set --user-name $deploymentUser --password $deploymentUserPassword
-# Configure Git deployment and note the deployment URL in the output
-az webapp deployment source config-local-git --name $myWebAppName --resource-group $myResourceGroupName \
- --query [url] -o tsv
+```azurecli
+az group delete --name $resourceGroup
```
-Make a note of the actual name generated for the new resource group. It will be shown in the output. You will use that resource group name when you want to delete all group resources.
--
-## Script explanation
+## Sample reference
Each command in the table links to command specific documentation. This script uses the following commands:
azure-signalr Signalr Cli Create With App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service.md
ms.devlang: azurecli Previously updated : 11/13/2018 Last updated : 03/30/2022
This sample script creates a new Azure SignalR Service resource, which is used t
[!INCLUDE [azure-cli-prepare-your-environment.md](../../../includes/azure-cli-prepare-your-environment.md)] - ## Sample script
-This script uses the *signalr* extension for the Azure CLI. Execute the following command to install the *signalr* extension for the Azure CLI before using this sample script:
-
-```azurecli-interactive
-#!/bin/bash
-
-# Generate a unique suffix for the service name
-let randomNum=$RANDOM*$RANDOM
-
-# Generate unique names for the SignalR service, resource group,
-# app service, and app service plan
-SignalRName=SignalRTestSvc$randomNum
-#resource name must be lowercase
-mySignalRSvcName=${SignalRName,,}
-myResourceGroupName=$SignalRName"Group"
-myWebAppName=SignalRTestWebApp$randomNum
-myAppSvcPlanName=$myAppSvcName"Plan"
-# Create resource group
-az group create --name $myResourceGroupName --location eastus
+### Run the script
-# Create the Azure SignalR Service resource
-az signalr create \
- --name $mySignalRSvcName \
- --resource-group $myResourceGroupName \
- --sku Standard_S1 \
- --unit-count 1 \
- --service-mode Default
-# Create an App Service plan.
-az appservice plan create --name $myAppSvcPlanName --resource-group $myResourceGroupName --sku FREE
+## Clean up resources
-# Create the Web App
-az webapp create --name $myWebAppName --resource-group $myResourceGroupName --plan $myAppSvcPlanName
-# Get the SignalR primary connection string
-primaryConnectionString=$(az signalr key list --name $mySignalRSvcName \
- --resource-group $myResourceGroupName --query primaryConnectionString -o tsv)
-
-#Add an app setting to the web app for the SignalR connection
-az webapp config appsettings set --name $myWebAppName --resource-group $myResourceGroupName \
- --settings "AzureSignalRConnectionString=$primaryConnectionString"
+```azurecli
+az group delete --name $resourceGroup
```
-Make a note of the actual name generated for the new resource group. It will be shown in the output. You will use that resource group name when you want to delete all group resources.
--
-## Script explanation
+## Sample reference
Each command in the table links to command specific documentation. This script uses the following commands:
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes-whats-new.md
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, either manually, or through a built-in image, you can leverage Azure features to improve your experience. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines (VMs)](https://azure.microsoft.com/services/virtual-machines/sql-server/). To learn more about SQL Server on Azure VMs, see the [overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
+## April 2022
++
+| Changes | Details |
+| | |
+| **Ebdsv5-series** | The new [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) provides the highest I/O throughput-to-vCore ratio in Azure along with a memory-to-vCore ratio of 8. This series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads. To learn more, see the updates in [VM sizes](performance-guidelines-best-practices-vm-size.md). |
++ ## March 2022 | Changes | Details |
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
This article provides a quick checklist as a series of best practices and guidelines to optimize performance of your SQL Server on Azure Virtual Machines (VMs).
-For comprehensive details, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [VM size](performance-guidelines-best-practices-vm-size.md), [Storage](performance-guidelines-best-practices-storage.md), [Security](security-considerations-best-practices.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
+For comprehensive details, see the other articles in this series: [VM size](performance-guidelines-best-practices-vm-size.md), [Storage](performance-guidelines-best-practices-storage.md), [Security](security-considerations-best-practices.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
-Enable [SQL Assessment for SQL Server on Azure VMs](sql-assessment-for-sql-vm.md) and your SQL Server will be evaluated against known best practices and results shown on the [SQL VM management page](manage-sql-vm-portal.md) of the Azure portal.
+Enable [SQL Assessment for SQL Server on Azure VMs](sql-assessment-for-sql-vm.md) and your SQL Server will be evaluated against known best practices with results on the [SQL VM management page](manage-sql-vm-portal.md) of the Azure portal.
-For video introductions and the latest features on Azure SQL VM optimization and management automation, review this video series from Data Exposed:
+For videos about the latest features to optimize SQL Server VM performance and automate management, review the following Data Exposed videos:
-- [Azure SQL VM: Caching and Storage Capping (Ep. 1)](/shows/data-exposed/azure-sql-vm-caching-and-storage-capping-ep-1-data-exposed)-- [Azure SQL VM: Automate Management with the SQL Server IaaS Agent extension (Ep. 2)](/shows/data-exposed/azure-sql-vm-automate-management-with-the-sql-server-iaas-agent-extension-ep-2)-- [Azure SQL VM: Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)](/shows/data-exposed/azure-sql-vm-use-azure-monitor-metrics-to-track-vm-cache-health-ep-3)-- [Azure SQL VM: Get the best price-performance for your SQL Server workloads on Azure VM](/shows/data-exposed/azure-sql-vm-get-the-best-price-performance-for-your-sql-server-workloads-on-azure-vm)-- [Azure SQL VM: Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)](/shows/data-exposed/azure-sql-vm-using-perfinsights-to-evaluate-resource-health-and-troubleshoot-ep-5)-- [Azure SQL VM: Best Price-Performance with Ebdsv5 Series (Ep.6)](/shows/data-exposed/azure-sql-vm-best-price-performance-with-ebdsv5-series)-- [Azure SQL VM: Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment (Ep. 7)](/shows/data-exposed/optimally-configure-sql-server-on-azure-virtual-machines-with-sql-assessment)-- [Azure SQL VM: New and Improved SQL on Azure VM deployment and management experience (Ep.8) | Data Exposed](/shows/data-exposed/new-and-improved-sql-on-azure-vm-deployment-and-management-experience)
+- [Caching and Storage Capping (Ep. 1)](/shows/data-exposed/azure-sql-vm-caching-and-storage-capping-ep-1-data-exposed)
+- [Automate Management with the SQL Server IaaS Agent extension (Ep. 2)](/shows/data-exposed/azure-sql-vm-automate-management-with-the-sql-server-iaas-agent-extension-ep-2)
+- [Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)](/shows/data-exposed/azure-sql-vm-use-azure-monitor-metrics-to-track-vm-cache-health-ep-3)
+- [Get the best price-performance for your SQL Server workloads on Azure VM](/shows/data-exposed/azure-sql-vm-get-the-best-price-performance-for-your-sql-server-workloads-on-azure-vm)
+- [Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)](/shows/data-exposed/azure-sql-vm-using-perfinsights-to-evaluate-resource-health-and-troubleshoot-ep-5)
+- [Best Price-Performance with Ebdsv5 Series (Ep.6)](/shows/data-exposed/azure-sql-vm-best-price-performance-with-ebdsv5-series)
+- [Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment (Ep. 7)](/shows/data-exposed/optimally-configure-sql-server-on-azure-virtual-machines-with-sql-assessment)
+- [New and Improved SQL Server on Azure VM deployment and management experience (Ep.8)](/shows/data-exposed/new-and-improved-sql-on-azure-vm-deployment-and-management-experience)
## Overview
There is typically a trade-off between optimizing for costs and optimizing for p
## VM Size
-The following is a quick checklist of VM size best practices for running your SQL Server on Azure VM:
+The following is a quick checklist of VM size best practices for running your SQL Server on Azure VM:
+- The new [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) provides the highest I/O throughput-to-vCore ratio in Azure along with a memory-to-vCore ratio of 8. This series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads.
- Use VM sizes with 4 or more vCPUs like the [E4ds_v5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) or higher. - Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads. - The [Edsv5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads. -- The [Edsv5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads. - The M series VMs offer the highest memory-to-vCore ratio in Azure. Consider these VMs for mission critical and data warehouse workloads. - Leverage Azure Marketplace images to deploy your SQL Server Virtual Machines as the SQL Server settings and storage options are configured for optimal performance. - Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.
azure-sql Performance Guidelines Best Practices Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-vm-size.md
Last updated 12/10/2021
+ # VM size: Performance best practices for SQL Server on Azure VMs
+ [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)]
This article provides VM size guidance as part of a series of best practices and guidelines to optimize performance for your SQL Server on Azure Virtual Machines (VMs). There is typically a trade-off between optimizing for costs and optimizing for performance. This performance best practices series is focused on getting the *best* performance for SQL Server on Azure Virtual Machines. If your workload is less demanding, you might not require every recommended optimization. Consider your performance needs, costs, and workload patterns as you evaluate these recommendations.
-For comprehensive details, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [Storage](performance-guidelines-best-practices-storage.md), [Security](security-considerations-best-practices.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
-
+For comprehensive details, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [Storage](performance-guidelines-best-practices-storage.md), [Security](security-considerations-best-practices.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
## Checklist
-Review the following checklist for a brief overview of the VM size best practices that the rest of the article covers in greater detail:
+Review the following checklist for a brief overview of the VM size best practices that the rest of the article covers in greater detail:
+- The new [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) provides the highest I/O throughput-to-vCore ratio in Azure along with a memory-to-vCore ratio of 8. This series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads.
- Use VM sizes with 4 or more vCPUs like the [E4ds_v5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) or higher.
- Use [memory optimized](../../../virtual-machines/sizes-memory.md) virtual machine sizes for the best performance of SQL Server workloads.
- The [Edsv5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) series, the [M-](../../../virtual-machines/m-series.md), and the [Mv2-](../../../virtual-machines/mv2-series.md) series offer the optimal memory-to-vCore ratio required for OLTP workloads.
-- The [Edsv5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) series offers the best price-performance for SQL Server workloads on Azure VMs. Consider this series first for most SQL Server workloads.
- The M series VMs offer the highest memory-to-vCore ratio in Azure. Consider these VMs for mission critical and data warehouse workloads.
- Leverage Azure Marketplace images to deploy your SQL Server Virtual Machines as the SQL Server settings and storage options are configured for optimal performance.
- Collect the target workload's performance characteristics and use them to determine the appropriate VM size for your business.
- Use the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) [SKU recommendation](/sql/dma/dma-sku-recommend-sql-db) tool to find the right VM size for your existing SQL Server workload.
-To compare the VM size checklist with the others, see the comprehensive [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
+To compare the VM size checklist with the others, see the comprehensive [Performance best practices checklist](performance-guidelines-best-practices-checklist.md).
## Overview
-When you are creating a SQL Server on Azure VM, carefully consider the type of workload necessary. If you are migrating an existing environment, [collect a performance baseline](performance-guidelines-best-practices-collect-baseline.md) to determine your SQL Server on Azure VM requirements. If this is a new VM, then create your new SQL Server VM based on your vendor requirements.
+When you are creating a SQL Server on Azure VM, carefully consider the type of workload necessary. If you are migrating an existing environment, [collect a performance baseline](performance-guidelines-best-practices-collect-baseline.md) to determine your SQL Server on Azure VM requirements. If this is a new VM, then create your new SQL Server VM based on your vendor requirements.
If you are creating a new SQL Server VM with a new application built for the cloud, you can easily size your SQL Server VM as your data and usage requirements evolve.
-Start the development environments with the lower-tier D-Series, B-Series, or Av2-series and grow your environment over time.
+Start the development environments with the lower-tier D-Series, B-Series, or Av2-series and grow your environment over time.
-Use the SQL Server VM Azure Marketplace images with the storage configuration in the portal. This will make it easier to properly create the storage pools necessary to get the size, IOPS, and throughput required for your workloads. It is important to choose SQL Server VMs that support premium storage and premium storage caching. See the [storage](performance-guidelines-best-practices-storage.md) article to learn more.
+Use the SQL Server VM marketplace images with the storage configuration in the portal. This will make it easier to properly create the storage pools necessary to get the size, IOPS, and throughput necessary for your workloads. It is important to choose SQL Server VMs that support premium storage and premium storage caching. See the [storage](performance-guidelines-best-practices-storage.md) article to learn more.
-The recommended minimum for a production OLTP environment is 4 vCore, 32 GB of memory, and a memory-to-vCore ratio of 8. For new environments, start with 4 vCore machines and scale to 8, 16, 32 vCores or more when your data and compute requirements change. For OLTP throughput, target SQL Server VMs that have 5000 IOPS for every vCore.
+Use the SQL Server VM Azure Marketplace images with the storage configuration in the portal. This will make it easier to properly create the storage pools necessary to get the size, IOPS, and throughput required for your workloads. It is important to choose SQL Server VMs that support premium storage and premium storage caching. Currently, the [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) provides the highest I/O throughput-to-vCore ratio available in Azure. If you do not know the I/O requirements for your SQL Server workload, this series is the one most likely to meet your needs. See the [storage](performance-guidelines-best-practices-storage.md) article to learn more.
+
+> [!NOTE]
+> If you are interested in participating in the [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) public preview, please sign up at [https://aka.ms/signupEbsv5Preview](https://aka.ms/signupEbsv5Preview).
SQL Server data warehouse and mission critical environments will often need to scale beyond the 8 memory-to-vCore ratio. For medium environments, you may want to choose a 16 memory-to-vCore ratio, and a 32 memory-to-vCore ratio for larger data warehouse environments.
Use the vCPU and memory configuration from your source machine as a baseline for
## Memory optimized
-The [memory optimized virtual machine sizes](../../../virtual-machines/sizes-memory.md) are a primary target for SQL Server VMs and the recommended choice by Microsoft. The memory optimized virtual machines offer higher memory-to-vCore ratios and medium-to-large cache options.
+The [memory optimized virtual machine sizes](../../../virtual-machines/sizes-memory.md) are a primary target for SQL Server VMs and the recommended choice by Microsoft. The memory optimized virtual machines offer stronger memory-to-CPU ratios and medium-to-large cache options.
+
+### Ebdsv5-series
+
+The [Ebdsv5-series](../../../virtual-machines/ebdsv5-ebsv5-series.md#ebdsv5-series) is a new memory-optimized series of VMs that offer the highest remote storage throughput available in Azure. These VMs have a memory-to-vCore ratio of 8 which, together with the high I/O throughput, makes them ideal for SQL Server workloads. The Ebdsv5-series VMs offer the best price-performance for SQL Server workloads running on Azure virtual machines and we strongly recommend them for most of your production SQL Server workloads.
### Edsv5-series
-The [Edsv5-series](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) is designed for memory-intensive applications and is the VM series that Microsoft recommends for most SQL Server workloads. These VMs have a large local storage SSD capacity, up to 672 GiB of RAM, and the highest local and remote storage throughput currently available in Azure. There is a nearly consistent 8 GiB of memory per vCore across most of these virtual machines, which is ideal for most SQL Server workloads. These VMs offer the best price-performance for SQL Server workloads running on Azure virtual machines.
+The [Edsv5-series](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) is designed for memory-intensive applications and is ideal for SQL Server workloads that do not require as high I/O throughput as the Ebdsv5 series offers. These VMs have a large local storage SSD capacity, up to 672 GiB of RAM, and very high local and remote storage throughput. There is a nearly consistent 8 GiB of memory per vCore across most of these virtual machines, which is ideal for most SQL Server workloads.
The largest virtual machine in this group is the [Standard_E104ids_v5](../../../virtual-machines/edv5-edsv5-series.md#edsv5-series) that offers 104 vCores and 672 GiBs of memory. This virtual machine is notable because it is [isolated](../../../virtual-machines/isolation.md) which means it is guaranteed to be the only virtual machine running on the host, and therefore is isolated from other customer workloads. This has a memory-to-vCore ratio that is lower than what is recommended for SQL Server, so it should only be used if isolation is required.
Some of the features of the M and Mv2-series attractive for SQL Server performan
## General purpose
-The [general purpose virtual machine sizes](../../../virtual-machines/sizes-general.md) are designed to provide balanced memory-to-vCore ratios for smaller entry level workloads such as development and test, web servers, and smaller database servers.
+The [general purpose virtual machine sizes](../../../virtual-machines/sizes-general.md) are designed to provide balanced memory-to-vCore ratios for smaller entry level workloads such as development and test, web servers, and smaller database servers.
-Because of the smaller memory-to-vCore ratios with the general purpose virtual machines, it is important to carefully monitor memory-based performance counters to ensure SQL Server is able to get the buffer cache memory it needs. See [memory performance baseline](performance-guidelines-best-practices-collect-baseline.md#memory) for more information.
+Because of the smaller memory-to-vCore ratios with the general purpose virtual machines, it is important to carefully monitor memory-based performance counters to ensure SQL Server is able to get the buffer cache memory it needs. See [memory performance baseline](performance-guidelines-best-practices-collect-baseline.md#memory) for more information.
Since the starting recommendation for production workloads is a memory-to-vCore ratio of 8, the minimum recommended configuration for a general purpose VM running SQL Server is 4 vCPU and 32 GiB of memory.
The [Ddsv5-series](../../../virtual-machines/ddv5-ddsv5-series.md#ddsv5-series)
The Ddsv5 VMs include lower latency and higher-speed local storage.
-These machines are ideal for side-by-side SQL and app deployments that require fast access to temp storage and departmental relational databases. There is a standard memory-to-vCore ratio of 4 across all of the virtual machines in this series.
+These machines are ideal for side-by-side SQL and app deployments that require fast access to temp storage and departmental relational databases. There is a standard memory-to-vCore ratio of 4 across all of the virtual machines in this series.
For this reason, it is recommended to leverage the D8ds_v5 as the starter virtual machine in this series, which has 8 vCores and 32 GiBs of memory. The largest machine is the D96ds_v5, which has 96 vCores and 256 GiBs of memory.
The [Ddsv5-series](../../../virtual-machines/ddv5-ddsv5-series.md#ddsv5-series)
### B-series
-The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes are ideal for workloads that do not need consistent performance such as proof of concept and very small application and development servers.
+The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes are ideal for workloads that do not need consistent performance such as proof of concept and very small application and development servers.
Most of the [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) virtual machine sizes have a memory-to-vCore ratio of 4. The largest of these machines is the [Standard_B20ms](../../../virtual-machines/sizes-b-series-burstable.md) with 20 vCores and 80 GiB of memory.
-This series is unique as the apps have the ability to **burst** during business hours with burstable credits varying based on machine size.
+This series is unique as the apps have the ability to **burst** during business hours with burstable credits varying based on machine size.
When the credits are exhausted, the VM returns to the baseline machine performance.
The benefit of the B-series is the compute savings you could achieve compared to
This series supports [premium storage](../../../virtual-machines/premium-storage-performance.md), but **does not support** [premium storage caching](../../../virtual-machines/premium-storage-performance.md#disk-caching).
-> [!NOTE]
+> [!NOTE]
> The [burstable B-series](../../../virtual-machines/sizes-b-series-burstable.md) does not have the memory-to-vCore ratio of 8 that is recommended for SQL Server workloads. As such, consider using these virtual machines for smaller applications, web servers, and development workloads only.

### Av2-series
The [Av2-series](../../../virtual-machines/av2-series.md) VMs are best suited fo
Only the [Standard_A2m_v2](../../../virtual-machines/av2-series.md) (2 vCores and 16GiBs of memory), [Standard_A4m_v2](../../../virtual-machines/av2-series.md) (4 vCores and 32GiBs of memory), and the [Standard_A8m_v2](../../../virtual-machines/av2-series.md) (8 vCores and 64GiBs of memory) have a good memory-to-vCore ratio of 8 for these top three virtual machines.
-These virtual machines are both good options for smaller development and test SQL Server machines.
+These virtual machines are both good options for smaller development and test SQL Server machines.
The 8 vCore [Standard_A8m_v2](../../../virtual-machines/av2-series.md) may also be a good option for small application and web servers.
-> [!NOTE]
+> [!NOTE]
> The Av2 series does not support premium storage and as such, is not recommended for production SQL Server workloads even with the virtual machines that have a memory-to-vCore ratio of 8.

## Storage optimized
-The [storage optimized VM sizes](../../../virtual-machines/sizes-storage.md) are for specific use cases. These virtual machines are specifically designed with optimized disk throughput and IO.
+The [storage optimized VM sizes](../../../virtual-machines/sizes-storage.md) are for specific use cases. These virtual machines are specifically designed with optimized disk throughput and IO.
### Lsv2-series
-The [Lsv2-series](../../../virtual-machines/lsv2-series.md) features high throughput, low latency, and local NVMe storage. The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable data disks.
+The [Lsv2-series](../../../virtual-machines/lsv2-series.md) features high throughput, low latency, and local NVMe storage. The Lsv2-series VMs are optimized to use the local disk on the node attached directly to the VM rather than using durable data disks.
These virtual machines are strong options for big data, data warehouse, reporting, and ETL workloads. The high throughput and IOPS of the local NVMe storage is a good use case for processing files that will be loaded into your database and other scenarios where the data can be recreated from the source system or other repositories such as Azure Blob storage or Azure Data Lake. [Lsv2-series](../../../virtual-machines/lsv2-series.md) VMs can also burst their disk performance for up to 30 minutes at a time.
These virtual machines size from 8 to 80 vCPU with 8 GiB of memory per vCPU and
The NVMe storage is ephemeral meaning that data will be lost on these disks if you deallocate your virtual machine, or if it's moved to a different host for service healing.
-The Lsv2 and Ls series support [premium storage](../../../virtual-machines/premium-storage-performance.md), but not premium storage caching. The creation of a local cache to increase IOPs is not supported.
+The Lsv2 and Ls series support [premium storage](../../../virtual-machines/premium-storage-performance.md), but not premium storage caching. The creation of a local cache to increase IOPs is not supported.
> [!WARNING]
-> Storing your data files on the ephemeral NVMe storage could result in data loss when the VM is deallocated.
+> Storing your data files on the ephemeral NVMe storage could result in data loss when the VM is deallocated.
## Constrained vCores
-High performing SQL Server workloads often need larger amounts of memory, I/O, and throughput without the higher vCore counts.
+High performing SQL Server workloads often need larger amounts of memory, IOPS, and throughput without the higher vCore counts.
-Most OLTP workloads are application databases driven by large numbers of smaller transactions. With OLTP workloads, only a small amount of the data is read or modified, but the volumes of transactions driven by user counts are much higher. It is important to have the SQL Server memory available to cache plans, store recently accessed data for performance, and ensure physical reads can be read into memory quickly.
+Most OLTP workloads are application databases driven by large numbers of smaller transactions. With OLTP workloads, only a small amount of the data is read or modified, but the volumes of transactions driven by user counts are much higher. It is important to have the SQL Server memory available to cache plans, store recently accessed data for performance, and ensure physical reads can be read into memory quickly.
-These OLTP environments need higher amounts of memory, fast storage, and the I/O bandwidth necessary to perform optimally.
+These OLTP environments need higher amounts of memory, fast storage, and the I/O bandwidth necessary to perform optimally.
-In order to maintain this level of performance without the higher SQL Server licensing costs, Azure offers VM sizes with [constrained vCPU counts](../../../virtual-machines/constrained-vcpu.md).
+In order to maintain this level of performance without the higher SQL Server licensing costs, Azure offers VM sizes with [constrained vCPU counts](../../../virtual-machines/constrained-vcpu.md).
This helps control licensing costs by reducing the available vCores while maintaining the same memory, storage, and I/O bandwidth of the parent virtual machine. The vCPU count can be constrained to one-half to one-quarter of the original VM size. Reducing the vCores available to the virtual machine will achieve higher memory-to-vCore ratios, but the compute cost will remain the same.
-These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier to identify.
+These new VM sizes have a suffix that specifies the number of active vCPUs to make them easier to identify.
For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 32 SQL Server vCores while providing the memory, I/O, and throughput of the [M64ms](../../../virtual-machines/m-series.md), and the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) requires licensing only 16 vCores. While the [M64-16ms](../../../virtual-machines/constrained-vcpu.md) has a quarter of the SQL Server licensing cost of the M64ms, the compute cost of the virtual machine will be the same.
-> [!NOTE]
-> - Medium to large data warehouse workloads may still benefit from [constrained vCore VMs](../../../virtual-machines/constrained-vcpu.md), but data warehouse workloads are commonly characterized by fewer users and processes addressing larger amounts of data through query plans that run in parallel.
-> - The compute cost, which includes operating system licensing, will remain the same as the parent virtual machine.
--
+> [!NOTE]
+>
+> - Medium to large data warehouse workloads may still benefit from [constrained vCore VMs](../../../virtual-machines/constrained-vcpu.md), but data warehouse workloads are commonly characterized by fewer users and processes addressing larger amounts of data through query plans that run in parallel.
+> - The compute cost, which includes operating system licensing, will remain the same as the parent virtual machine.
## Next steps
azure-sql Sql Server On Azure Vm Iaas What Is Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview.md
To get started with SQL Server on Azure VMs, review the following resources:
- **Pricing**: For information about the pricing structure of your SQL Server on Azure VM, review the [Pricing guidance](pricing-guidance.md).
- **Frequently asked questions**: For commonly asked questions and scenarios, review the [FAQ](frequently-asked-questions-faq.yml).
+## Videos
+
+For videos about the latest features to optimize SQL Server VM performance and automate management, review the following Data Exposed videos:
+
+- [Caching and Storage Capping (Ep. 1)](/shows/data-exposed/azure-sql-vm-caching-and-storage-capping-ep-1-data-exposed)
+- [Automate Management with the SQL Server IaaS Agent extension (Ep. 2)](/shows/data-exposed/azure-sql-vm-automate-management-with-the-sql-server-iaas-agent-extension-ep-2)
+- [Use Azure Monitor Metrics to Track VM Cache Health (Ep. 3)](/shows/data-exposed/azure-sql-vm-use-azure-monitor-metrics-to-track-vm-cache-health-ep-3)
+- [Get the best price-performance for your SQL Server workloads on Azure VM](/shows/data-exposed/azure-sql-vm-get-the-best-price-performance-for-your-sql-server-workloads-on-azure-vm)
+- [Using PerfInsights to Evaluate Resource Health and Troubleshoot (Ep. 5)](/shows/data-exposed/azure-sql-vm-using-perfinsights-to-evaluate-resource-health-and-troubleshoot-ep-5)
+- [Best Price-Performance with Ebdsv5 Series (Ep.6)](/shows/data-exposed/azure-sql-vm-best-price-performance-with-ebdsv5-series)
+- [Optimally Configure SQL Server on Azure Virtual Machines with SQL Assessment (Ep. 7)](/shows/data-exposed/optimally-configure-sql-server-on-azure-virtual-machines-with-sql-assessment)
+- [New and Improved SQL Server on Azure VM deployment and management experience (Ep.8)](/shows/data-exposed/new-and-improved-sql-on-azure-vm-deployment-and-management-experience)
+
+
## High availability & disaster recovery
On top of the built-in [high availability provided by Azure virtual machines](../../../virtual-machines/availability.md), you can also leverage the high availability and disaster recovery features provided by SQL Server.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
-description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and vSphere clusters.
+description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters.
Last updated 08/25/2021
Azure VMware Solution delivers VMware-based private clouds in Azure. The private
A private cloud includes clusters with:
- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor
-- vCenter Server for managing ESXi and vSAN
-- VMware NSX-T software-defined networking for vSphere workload VMs
+- VMware vCenter Server for managing ESXi and vSAN
+- VMware NSX-T Data Center software-defined networking for vSphere workload VMs
- VMware vSAN datastore for vSphere workload VMs
- VMware HCX for workload mobility
- Resources in the Azure underlay (required for connectivity and to operate the private cloud)
Azure VMware Solution monitors the following conditions on the host:
- Connection failure

> [!NOTE]
-> Azure VMware Solution tenant admins must not edit or delete the above defined VMware vCenter alarms, as these are managed by the Azure VMware Solution control plane on vCenter. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
+> Azure VMware Solution tenant admins must not edit or delete the above defined VMware vCenter Server alarms, as these are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
## Backup and restoration
-Private cloud vCenter and NSX-T configurations are on an hourly backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
+Private cloud vCenter Server and NSX-T Data Center configurations are on an hourly backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
-Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
+Azure VMware Solution continuously monitors the health of both the physical underlay and the VMware Solution components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
## Next steps
Now that you've covered Azure VMware Solution private cloud concepts, you may wa
[concepts-networking]: ./concepts-networking.md

<!-- LINKS - external-->
-[VCSA versions]: https://kb.vmware.com/s/article/2143838
+[vCSA versions]: https://kb.vmware.com/s/article/2143838
[ESXi versions]: https://kb.vmware.com/s/article/2143832
[vSAN versions]: https://kb.vmware.com/s/article/2150753
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
Azure VMware Solution private clouds provide native, cluster-wide storage with V
## vSAN clusters
-Local storage in each cluster host is used as part of a vSAN datastore. All diskgroups use an NVMe cache tier of 1.6 TB with the raw, per host, SSD-based capacity of 15.4 TB. The size of the raw capacity tier of a cluster is the per host capacity times the number of hosts. For example, a four host cluster provides 61.6-TB raw capacity in the vSAN capacity tier.
+Local storage in each cluster host is claimed as part of a vSAN datastore. All diskgroups use an NVMe cache tier of 1.6 TB with the raw, per host, SSD-based capacity of 15.4 TB. The size of the raw capacity tier of a cluster is the per host capacity times the number of hosts. For example, a four host cluster provides 61.6-TB raw capacity in the vSAN capacity tier.
-Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datastores are created as part of private cloud deployment and are available for use immediately. The **cloudadmin** user and all users assigned to the CloudAdmin role can manage datastores with these vSAN privileges:
+Local storage in cluster hosts is used in the cluster-wide vSAN datastore. All datastores are created as part of private cloud deployment and are available for use immediately. The **cloudadmin** user and all users assigned to the CloudAdmin role can manage datastores with these vSAN privileges:
- Datastore.AllocateSpace
- Datastore.Browse
Local storage in cluster hosts is used in cluster-wide vSAN datastore. All datas
## Storage policies and fault tolerance
-That default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or apply a new policy, the cluster grows with this configuration. To set the storage policy, see [Configure storage policy](configure-storage-policy.md).
+The default storage policy is set to RAID-1 (Mirroring), FTT-1, and thick provisioning. Unless you adjust the storage policy or apply a new policy, the cluster grows with this configuration. To set the storage policy, see [Configure storage policy](configure-storage-policy.md).
-In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an architecture perspective.
+In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft governs failures regularly and replaces the hardware when events are detected from an operations perspective.
:::image type="content" source="media/concepts/vsphere-vm-storage-policies.png" alt-text="Screenshot that shows the vSphere Client VM Storage Policies.":::
In a three-host cluster, FTT-1 accommodates a single host's failure. Microsoft g
|Provisioning type |Description |
|---|---|
|**Thick** | Reserved or pre-allocated storage space. It protects systems by allowing them to function even if the vSAN datastore is full because the space is already reserved. For example, if you create a 10-GB virtual disk with thick provisioning. In that case, the full amount of virtual disk storage capacity is pre-allocated on the physical storage of the virtual disk and consumes all the space allocated to it in the datastore. It won't allow other virtual machines (VMs) to share the space from the datastore. |
-|**Thin** | Consumes the space that it needs initially and grows to the data space demand used in the datastore. Outside the default (thick provision), you can create VMs with FTT-1 thin provisioning. For dedupe setup, use thin provisioning for your VM template. |
+|**Thin** | Consumes the space that it needs initially and grows to the data space demand used in the datastore. Outside the default (thick provision), you can create VMs with FTT-1 thin provisioning. For deduplication setup, use thin provisioning for your VM template. |
>[!TIP]
>If you're unsure whether the cluster will grow to four or more hosts, then deploy using the default policy. If you're sure your cluster will grow, then instead of expanding the cluster after your initial deployment, we recommend deploying the extra hosts during deployment. As the VMs are deployed to the cluster, change the disk's storage policy in the VM settings to either RAID-5 FTT-1 or RAID-6 FTT-2.
>
->:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot showing the RAID-5 FTT-1 and RAID-6 Ftt-2 options highlighed.":::
+>:::image type="content" source="media/concepts/vsphere-vm-storage-policies-2.png" alt-text="Screenshot showing the RAID-5 FTT-1 and RAID-6 FTT-2 options highlighted.":::
## Data-at-rest encryption
-vSAN datastores use data-at-rest encryption by default using keys stored in Azure Key Vault. The encryption solution is KMS-based and supports vCenter operations for key management. When a host is removed from a cluster, data on SSDs is invalidated immediately.
+vSAN datastores use data-at-rest encryption by default using keys stored in Azure Key Vault. The encryption solution is KMS-based and supports vCenter Server operations for key management. When a host is removed from a cluster, all data on SSDs is invalidated immediately.
## Azure storage integration
Now that you've covered Azure VMware Solution storage concepts, you may want to
- [Azure NetApp Files with Azure VMware Solution](netapp-files-with-azure-vmware-solution.md) - You can use Azure NetApp Files to migrate and run the most demanding enterprise file-workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
-- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter and restricted administrator rights for NSX-T Manager.
+- [vSphere role-based access control for Azure VMware Solution](concepts-identity.md) - You use vCenter Server to manage VM workloads and NSX-T Manager to manage and extend the private cloud. Access and identity management use the CloudAdmin role for vCenter Server and restricted administrator rights for NSX-T Manager.
<!-- LINKS - external-->
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/archive-tier-support.md
Archive tier supports the following workloads:
| Workloads | Operations |
| --- | --- |
-| Azure Virtual Machines | Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. <br><br> Age >= 3 months in Vault-atandard tier <br><br> Retention left >= 6 months. <br><br> No active daily and weekly dependencies. |
+| Azure Virtual Machines | Only monthly and yearly recovery points. Daily and weekly recovery points aren't supported. <br><br> Age >= 3 months in Vault-standard tier <br><br> Retention left >= 6 months. <br><br> No active daily and weekly dependencies. |
| SQL Server in Azure Virtual Machines <br><br> SAP HANA in Azure Virtual Machines | Only full recovery points. Logs and differentials aren't supported. <br><br> Age >= 45 days in Vault-standard tier. <br><br> Retention left >= 6 months. <br><br> No dependencies. |

>[!Note]
If the list of recovery points is blank, then all the eligible/recommended recov
## Next steps

- [Use Archive tier](use-archive-tier-support.md)
-- [Azure Backup pricing](azure-backup-pricing.md)
+- [Azure Backup pricing](azure-backup-pricing.md)
backup Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints.md
If you're using your custom DNS servers, you'll need to create the required DNS
`<private ip><space><backup service privatelink FQDN>`

>[!NOTE]
->As shown in the screenshot above, the FQDNs depict `xxxxxxxx.<geo>.backup.windowsazure.com` and not `xxxxxxxx.privatelink.<geo>.backup. windowsazure.com`. In such cases, ensure you include (and if required, add) the `.privatelink.` according to the stated format.
+>As shown in the screenshot above, the FQDNs depict `xxxxxxxx.<geo>.backup.windowsazure.com` and not `xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`. In such cases, ensure you include (and if required, add) the `.privatelink.` according to the stated format.
#### For Blob and Queue services
Once the private endpoints created for the vault in your VNet have been approved
In the VM in the locked down network, ensure the following:
1. The VM should have access to AAD.
-2. Execute **nslookup** on the backup URL (`xxxxxxxx.privatelink.<geo>.backup. windowsazure.com`) from your VM, to ensure connectivity. This should return the private IP assigned in your virtual network.
+2. Execute **nslookup** on the backup URL (`xxxxxxxx.privatelink.<geo>.backup.windowsazure.com`) from your VM, to ensure connectivity. This should return the private IP assigned in your virtual network.
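If you want to script this check from the VM, the following minimal sketch (Python; the FQDN shown is a placeholder that you must replace with your vault's actual private link FQDN) performs the same lookup as **nslookup** and reports whether a private IP was returned:

```python
import ipaddress
import socket

# Placeholder: replace with your vault's actual private link FQDN.
fqdn = "xxxxxxxx.privatelink.<geo>.backup.windowsazure.com"

# Same resolution that nslookup performs.
resolved_ip = socket.gethostbyname(fqdn)
print(f"{fqdn} resolved to {resolved_ip}")

# A correctly configured private endpoint should resolve to a private IP from your virtual network.
if ipaddress.ip_address(resolved_ip).is_private:
    print("Private IP returned - private endpoint DNS looks correct.")
else:
    print("Public IP returned - check your DNS records.")
```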
### Configure backup
After following the process detailed in this article, you don't need to do addit
## Next steps

-- Read about all the [security features in Azure Backup](security-overview.md).
+- Read about all the [security features in Azure Backup](security-overview.md).
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
This guide shows you how to move face data, such as a saved PersonGroup object w
This same migration strategy also applies to LargePersonGroup and LargeFaceList objects. If you aren't familiar with the concepts in this guide, see their definitions in the [Face recognition concepts](../concepts/face-recognition.md) guide. This guide uses the Face .NET client library with C#.
+> [!WARNING]
+> The Snapshot feature might move your data outside the geographic region you originally selected. Data might move to West US, West Europe, and Southeast Asia regions.
+
## Prerequisites

You need the following items:
cognitive-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/batch-transcription.md
You can use batch transcription REST APIs to call the following methods:
| Gets the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id} |
| Gets the result files of the transcription identified by the specified ID. | GET | speechtotext/v3.0/transcriptions/{id}/files |
-You can review and test the detailed API, which is available as a [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0).
+For more information, see the [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation.
Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
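If you call these methods outside the C# sample, a minimal sketch along the following lines (assuming Python with the `requests` package, plus your own region, subscription key, and an existing transcription ID; verify response field names such as `status` and `values` against the v3.0 reference) polls a transcription and lists its result files:

```python
import requests

# Assumptions for illustration: your own region, key, and an existing transcription ID.
region = "westus"
subscription_key = "YOUR_SPEECH_KEY"
transcription_id = "YOUR_TRANSCRIPTION_ID"

base_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0"
headers = {"Ocp-Apim-Subscription-Key": subscription_key}

# Gets the transcription identified by the specified ID.
transcription = requests.get(f"{base_url}/transcriptions/{transcription_id}", headers=headers).json()
print("Status:", transcription.get("status"))

# Gets the result files once the job has succeeded.
if transcription.get("status") == "Succeeded":
    files = requests.get(f"{base_url}/transcriptions/{transcription_id}/files", headers=headers).json()
    for item in files.get("values", []):
        print(item.get("name"), item.get("links", {}).get("contentUrl"))
```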
while (completed < 1)
}
```
-For full details about the preceding calls, see our [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0). For the full sample shown here, go to [GitHub](https://aka.ms/csspeech/samples) in the `samples/batch` subdirectory.
+For full details about the preceding calls, see the [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation. For the full sample shown here, go to [GitHub](https://aka.ms/csspeech/samples) in the `samples/batch` subdirectory.
This sample uses an asynchronous setup to post audio and receive transcription status. The `PostTranscriptions` method sends the audio file details, and the `GetTranscriptions` method receives the states. `PostTranscriptions` returns a handle, and `GetTranscriptions` uses it to create a handle to get the transcription status.
cognitive-services How To Custom Speech Upload Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-upload-data.md
To upload your data:
### Upload data by using Speech-to-text REST API v3.0
-You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset.
+You can use [Speech-to-text REST API v3.0](rest-speech-to-text.md) to automate any operations related to your custom models. In particular, you can use the REST API to upload a dataset.
To create and upload a dataset, use a [Create Dataset](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateDataset) request.
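As a rough illustration of such a request, the sketch below (Python with `requests`; the `displayName`, `locale`, `kind`, `contentUrl`, and optional `project` fields are taken from the Create Dataset reference and should be verified there, and the URLs and key are placeholders) posts a new dataset definition:

```python
import requests

# Placeholders for illustration: your own region, key, and a reachable URL (for example, a SAS URL) for the data file.
region = "westus"
subscription_key = "YOUR_SPEECH_KEY"

body = {
    "displayName": "My acoustic dataset",
    "locale": "en-US",
    "kind": "Acoustic",  # or "Language" for plain-text language data
    "contentUrl": "https://example.com/datasets/audio-and-transcripts.zip",
    # Optional: reference an existing Speech Studio project so the dataset appears there.
    # "project": {"self": f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/projects/YOUR_PROJECT_ID"},
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/datasets",
    headers={"Ocp-Apim-Subscription-Key": subscription_key, "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print("Dataset created:", response.json()["self"])
```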
-A dataset that you create by using the Speech-to-text REST API v3.0 will *not* be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project is *not* required for any model customization operations, if you perform them by using the REST API.
+A dataset that you create by using the Speech-to-text REST API v3.0 won't be connected to any of the Speech Studio projects, unless you specify a special parameter in the request body (see the code block later in this section). Connection with a Speech Studio project isn't required for any model customization operations, if you perform them by using the REST API.
When you log on to Speech Studio, its user interface will notify you when any unconnected object is found (like datasets uploaded through the REST API without any project reference). The interface will also offer to connect such objects to an existing project.
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
The HTTP status code for each response indicates success or common errors.
| 200 | OK | The request was successful. |
| 202 | Accepted | The request has been accepted and is being processed. |
| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
-| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text.md#authentication) is valid and in the correct region. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text-short.md#authentication) is valid and in the correct region. |
| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
cognitive-services How To Get Speech Session Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-get-speech-session-id.md
If you use [Speech-to-text](speech-to-text.md) and need to open a support case,
## Getting Session ID for Online transcription and Translation. (Speech SDK and REST API for short audio).
-[Online transcription](get-started-speech-to-text.md) and [Translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
+[Online transcription](get-started-speech-to-text.md) and [Translation](speech-translation.md) use either the [Speech SDK](speech-sdk.md) or the [REST API for short audio](rest-speech-to-text-short.md).
To get the Session ID, when using SDK you need to:
To get the Session ID, when using SDK you need to:
If you use [Speech CLI](spx-overview.md), you can also get the Session ID interactively. See details [below](#get-session-id-using-speech-cli).
-In case of [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) you need to "inject" the session information in the requests. See details [below](#provide-session-id-using-rest-api-for-short-audio).
+In case of [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) you need to "inject" the session information in the requests. See details [below](#provide-session-id-using-rest-api-for-short-audio).
### Enable logging in the Speech SDK
spx help translate log
### Provide Session ID using REST API for short audio
-Unlike Speech SDK, [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
+Unlike Speech SDK, [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) does not automatically generate a Session ID. You need to generate it yourself and provide it within the REST request.
Generate a GUID inside your code or using any standard tool. Use the GUID value *without dashes or other dividers*. As an example we will use `9f4ffa5113a846eba289aa98b28e766f`.
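A minimal Python sketch that produces such a value:

```python
import uuid

# A GUID without dashes or other dividers, as required above.
session_id = uuid.uuid4().hex  # for example: 9f4ffa5113a846eba289aa98b28e766f
print(session_id)
```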
https://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cogn
## Getting Transcription ID for Batch transcription. (REST API v3.0).
-[Batch transcription](batch-transcription.md) uses [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30).
+[Batch transcription](batch-transcription.md) uses [Speech-to-text REST API v3.0](rest-speech-to-text.md).
The required Transcription ID is the GUID value contained in the main `self` element of the Response body returned by requests, like [Create Transcription](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription).
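For illustration, a minimal sketch (with a made-up `self` URL) that extracts the GUID from an already parsed JSON response:

```python
# Hypothetical response body; only the "self" element matters here.
response_json = {
    "self": "https://centralus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/"
            "9f4ffa51-13a8-46eb-a289-aa98b28e766f"
}

# The Transcription ID is the final path segment of the "self" URL.
transcription_id = response_json["self"].rstrip("/").split("/")[-1]
print(transcription_id)  # 9f4ffa51-13a8-46eb-a289-aa98b28e766f
```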
cognitive-services How To Migrate From Bing Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md
Use this article to migrate your applications from the Bing Speech API to the Speech service.
+> [!IMPORTANT]
+> The Speech service has replaced Bing Speech API. Please migrate your applications to the Speech service.
+ This article outlines the differences between the Bing Speech APIs and the Speech service, and suggests strategies for migrating your applications. Your Bing Speech API subscription key won't work with the Speech service; you'll need a new Speech service subscription. A single Speech service subscription key grants access to the following features. Each is metered separately, so you're charged only for the features you use.
cognitive-services How To Migrate From Translator Speech Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-from-translator-speech-api.md
Use this article to migrate your applications from the Microsoft Translator Speech API to the [Speech service](index.yml). This guide outlines the differences between the Translator Speech API and Speech service, and suggests strategies for migrating your applications.
-> [!NOTE]
+> [!IMPORTANT]
+> The Speech service has replaced Translator Speech. Please migrate your applications to the Speech service.
+>
> Your Translator Speech API subscription key won't be accepted by the Speech service. You'll need to create a new Speech service subscription.

## Comparison of features
cognitive-services Ingestion Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/ingestion-client.md
The [Getting Started Guide for the Ingestion Client](https://github.com/Azure-Sa
> [!IMPORTANT]
> Pricing varies depending on the mode of operation (batch vs real time) as well as the Azure Function SKU selected. By default the tool will create a Premium Azure Function SKU to handle large volume. Visit the [Pricing](https://azure.microsoft.com/pricing/details/functions/) page for more information.
-Both, the Microsoft [Speech SDK](speech-sdk.md) and the [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30), can be used to obtain transcripts. The decision does impact overall costs as it is explained in the guide.
+Both the Microsoft [Speech SDK](speech-sdk.md) and the [Speech-to-text REST API v3.0](rest-speech-to-text.md) can be used to obtain transcripts. The decision affects overall costs, as explained in the guide.
> [!TIP]
> You can use the tool and resulting solution in production to process a high volume of audio.
cognitive-services Migrate V2 To V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md
Compared to v2, the v3 version of the Speech services REST API for speech-to-text is more reliable, easier to use, and more consistent with APIs for similar services. Most teams can migrate from v2 to v3 in a day or two.
+> [!IMPORTANT]
+> The Speech-to-text REST API v2.0 is deprecated. Please migrate your applications to the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+ ## Forward compatibility
-All entities from v2 can also be found in the v3 API under the same identity. Where the schema of a result has changed, (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 are **not** available in responses from v2 APIs.
+All entities from v2 can also be found in the v3 API under the same identity. Where the schema of a result has changed, (for example, transcriptions), the result of a GET in the v3 version of the API uses the v3 schema. The result of a GET in the v2 version of the API uses the same v2 schema. Newly created entities on v3 aren't available in responses from v2 APIs.
## Migration steps
General changes:
### Host name changes
-Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) lists valid regions and paths.
+Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
>[!IMPORTANT]
>Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com` where region is the region of your speech subscription. Also remove `api/` from any path in your client code.
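For illustration only, the following sketch contrasts an assumed v2-style transcriptions URL with its v3.0 equivalent; confirm the exact v2 path against your existing client code before relying on it:

```python
# Assumed v2-style URL: hostname {region}.cris.ai, path starts with api/.
old_url = "https://westus.cris.ai/api/speechtotext/v2.0/transcriptions"

# v3.0 equivalent: api is now part of the hostname, so the path no longer contains api/.
new_url = "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions"
```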
Accuracy tests have been renamed to evaluations because the new name describes b
## Next steps
-Examine all features of these commonly used REST APIs provided by Speech
-
-* [Speech-to-text REST API](rest-speech-to-text.md)
-* [Swagger document](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) for v3 of the REST API
-* For sample code to perform batch transcriptions, view the the [GitHub sample repository](https://aka.ms/csspeech/samples) in the `samples/batch` subdirectory.
+* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
+* [Speech-to-text REST API v3.0 reference](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
The following features are part of the Speech service. Use the links in this tab
| | [Multidevice conversation](multi-device-conversation.md) | Connect multiple devices or clients in a conversation to send speech- or text-based messages, with easy support for transcription and translation.| Yes | No |
| | [Conversation transcription](./conversation-transcription.md) | Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
| | [Create custom speech models](#customize-your-speech-experience) | If you're using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No | [Yes](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) |
-| | [Pronunciation assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text.md#pronunciation-assessment-parameters) |
+| | [Pronunciation assessment](./how-to-pronunciation-assessment.md) | Pronunciation assessment evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of spoken audio. With pronunciation assessment, language learners can practice, get instant feedback, and improve their pronunciation so that they can speak and present with confidence. | [Yes](./how-to-pronunciation-assessment.md) | [Yes](./rest-speech-to-text-short.md#pronunciation-assessment-parameters) |
| [Text-to-speech](text-to-speech.md) | Prebuilt neural voices | Text-to-speech converts input text into humanlike synthesized speech by using the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md). Use neural voices, which are humanlike voices powered by deep neural networks. See [Language support](language-support.md). | [Yes](./speech-sdk.md) | [Yes](#reference-docs) |
| | [Custom neural voices](#customize-your-speech-experience) | Create custom neural voice fonts unique to your brand or product. | No | [Yes](#reference-docs) |
| [Speech translation](speech-translation.md) | Speech translation | Speech translation enables real-time, multilanguage translation of speech to your applications, tools, and devices. Use this feature for speech-to-speech and speech-to-text translation. | [Yes](./speech-sdk.md) | No |
cognitive-services Rest Speech To Text Short https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-short.md
+
+ Title: Speech-to-text REST API for short audio - Speech service
+
+description: Learn how to use Speech-to-text REST API for short audio to convert speech to text.
++++++ Last updated : 01/24/2022+
+ms.devlang: csharp
+++
+# Speech-to-text REST API for short audio
+
+Use cases for the speech-to-text REST API for short audio are limited. Use it only in cases where you can't use the [Speech SDK](speech-sdk.md). For [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md), you should always use the [Speech-to-text REST API v3.0](rest-speech-to-text.md).
+
+Before you use the speech-to-text REST API for short audio, consider the following limitations:
+
+* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio.
+* The REST API for short audio returns only final results. It doesn't provide partial results.
+
+> [!TIP]
+> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
++
+### Regions and endpoints
+
+The endpoint for the REST API for short audio has this format:
+
+```
+https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1
+```
+
+Replace `<REGION_IDENTIFIER>` with the identifier that matches the region of your subscription from this table:
++
+> [!NOTE]
+> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
+
+### Query parameters
+
+These parameters might be included in the query string of the REST request:
+
+| Parameter | Description | Required or optional |
+|--|--|--|
+| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
+| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
+| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
+| `cid` | When you're using the [Custom Speech portal](./custom-speech-overview.md) to create custom models, you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
+
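+For example, here's a minimal C# sketch that assembles the request URI from these query parameters. The region and parameter values are placeholders for illustration:
+
+```csharp
+// A minimal sketch: build the request URI for the REST API for short audio.
+// The region and query parameter values are example placeholders.
+// Requires: using System;
+var region = "westus";
+var queryString = "language=en-US&format=detailed&profanity=masked";
+var requestUri = new Uri($"https://{region}.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?{queryString}");
+Console.WriteLine(requestUri);
+```
+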
+### Request headers
+
+This table lists required and optional headers for speech-to-text requests:
+
+|Header| Description | Required or optional |
+||-||
+| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
+| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
+| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
+| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
+| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Use this header only if you're chunking audio data. | Optional |
+| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. |
+| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
+
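+For example, here's a minimal C# sketch of obtaining a token for the `Authorization` header. It assumes the standard Cognitive Services token endpoint for your region, with placeholder key and region values:
+
+```csharp
+// A minimal sketch: exchange a subscription key for an authorization token.
+// Assumes the standard Cognitive Services token endpoint; region and key are placeholders.
+// Requires: using System.Net.Http; using System.Threading.Tasks;
+static async Task<string> GetTokenAsync(string subscriptionKey, string region)
+{
+    using var client = new HttpClient();
+    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
+    var uri = $"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken";
+    using var response = await client.PostAsync(uri, null);
+    response.EnsureSuccessStatusCode();
+    // The token is returned as plain text; pass it as "Bearer <token>".
+    return await response.Content.ReadAsStringAsync();
+}
+```
+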
+### Audio formats
+
+Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
+
+| Format | Codec | Bit rate | Sample rate |
+|--|-|-|--|
+| WAV | PCM | 256 kbps | 16 kHz, mono |
+| OGG | OPUS | 256 kbps | 16 kHz, mono |
+
+>[!NOTE]
+>The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
+
+### Pronunciation assessment parameters
+
+This table lists required and optional parameters for pronunciation assessment:
+
+| Parameter | Description | Required or optional |
+|--|-||
+| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
+| `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
+| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response parameters](#response-parameters). The default setting is `Basic`. | Optional |
+| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
+| `ScenarioId` | A GUID that indicates a customized point system. | Optional |
+
+Here's example JSON that contains the pronunciation assessment parameters:
+
+```json
+{
+ "ReferenceText": "Good morning.",
+ "GradingSystem": "HundredMark",
+ "Granularity": "FullText",
+ "Dimension": "Comprehensive"
+}
+```
+
+The following sample code shows how to build the pronunciation assessment parameters into the `Pronunciation-Assessment` header:
+
+```csharp
+var pronAssessmentParamsJson = $"{{\"ReferenceText\":\"Good morning.\",\"GradingSystem\":\"HundredMark\",\"Granularity\":\"FullText\",\"Dimension\":\"Comprehensive\"}}";
+var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
+var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
+```
+
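+The resulting Base64 string is then sent as the `Pronunciation-Assessment` header. For example, with an `HttpWebRequest` like the one in the chunked-transfer sample later in this article:
+
+```csharp
+// Attach the Base64-encoded parameters built above as the Pronunciation-Assessment header.
+// `request` is assumed to be an HttpWebRequest like the one in the chunked-transfer sample.
+request.Headers["Pronunciation-Assessment"] = pronAssessmentHeader;
+```
+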
+We strongly recommend streaming (chunked) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
+
+>[!NOTE]
+> For more information, see [Pronunciation assessment](how-to-pronunciation-assessment.md).
+
+### Sample request
+
+The following sample includes the host name and required headers. Note that the service also expects audio data, which isn't included in this sample. As mentioned earlier, chunking is recommended but not required.
+
+```HTTP
+POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
+Accept: application/json;text/xml
+Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
+Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
+Host: westus.stt.speech.microsoft.com
+Transfer-Encoding: chunked
+Expect: 100-continue
+```
+
+To enable pronunciation assessment, you can add the following header. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters).
+
+```HTTP
+Pronunciation-Assessment: eyJSZWZlcm...
+```
+
+### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reasons |
+||-|--|
+| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
+| 200 | OK | The request was successful. The response body is a JSON object. |
+| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
+| 401 | Unauthorized | A subscription key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
+| 403 | Forbidden | A subscription key or authorization token is missing. |
+
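+If you call the endpoint with `HttpClient`, you can map these status codes to the reasons in the table. Here's a minimal sketch that assumes `response` is the `HttpResponseMessage` returned by your request:
+
+```csharp
+// A minimal sketch: handle common status codes from the REST API for short audio.
+// `response` is assumed to be an HttpResponseMessage from an HttpClient call.
+// Requires: using System; using System.Net;
+switch (response.StatusCode)
+{
+    case HttpStatusCode.OK:
+        // Success: the response body is a JSON object.
+        break;
+    case HttpStatusCode.BadRequest:
+        Console.WriteLine("Check the language parameter and the audio format.");
+        break;
+    case HttpStatusCode.Unauthorized:
+        Console.WriteLine("Check the subscription key or token, and the region.");
+        break;
+    case HttpStatusCode.Forbidden:
+        Console.WriteLine("A subscription key or authorization token is missing.");
+        break;
+}
+```
+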
+### Chunked transfer
+
+Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
+
+The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
+
+```csharp
+var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
+request.SendChunked = true;
+request.Accept = @"application/json;text/xml";
+request.Method = "POST";
+request.ProtocolVersion = HttpVersion.Version11;
+request.Host = host;
+request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
+request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_SUBSCRIPTION_KEY";
+request.AllowWriteStreamBuffering = false;
+
+using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
+{
+ // Open a request stream and write 1,024-byte chunks in the stream one at a time.
+ byte[] buffer = null;
+ int bytesRead = 0;
+ using (var requestStream = request.GetRequestStream())
+ {
+ // Read 1,024 raw bytes from the input audio file.
+ buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
+ while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
+ {
+ requestStream.Write(buffer, 0, bytesRead);
+ }
+
+ requestStream.Flush();
+ }
+}
+```
+
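+After the request stream is written, you can read the recognition result from the response. Here's a minimal sketch that continues the preceding sample:
+
+```csharp
+// Read the JSON recognition result returned for the request sent above.
+// Requires: using System; using System.IO; using System.Net;
+using (var response = (HttpWebResponse)request.GetResponse())
+using (var reader = new StreamReader(response.GetResponseStream()))
+{
+    string resultJson = reader.ReadToEnd();
+    Console.WriteLine(resultJson);
+}
+```
+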
+### Response parameters
+
+Results are provided as JSON. The `simple` format includes the following top-level fields:
+
+| Parameter | Description |
+|--|--|
+|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
+|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
+|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
+|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
+
+The `RecognitionStatus` field might contain these values:
+
+| Status | Description |
+|--|-|
+| `Success` | The recognition was successful, and the `DisplayText` field is present. |
+| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
+| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
+| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
+| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
+
+> [!NOTE]
+> If the audio consists only of profanity, and the `profanity` query parameter is set to `removed`, the service does not return a speech result.
+
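+If you parse a `simple` result in code, the top-level fields map onto a small type. Here's a minimal sketch that uses `System.Text.Json` and assumes `json` holds the response body:
+
+```csharp
+// A minimal sketch: deserialize a `simple` recognition result.
+// `json` is assumed to hold the response body returned by the service.
+// Requires: using System; using System.Text.Json;
+var result = JsonSerializer.Deserialize<SimpleResult>(json);
+if (result.RecognitionStatus == "Success")
+{
+    Console.WriteLine(result.DisplayText);
+}
+
+// Property names mirror the fields in the preceding table. Offset and Duration are
+// typed as strings here to match the sample responses shown later in this article.
+public class SimpleResult
+{
+    public string RecognitionStatus { get; set; }
+    public string DisplayText { get; set; }
+    public string Offset { get; set; }
+    public string Duration { get; set; }
+}
+```
+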
+The `detailed` format includes additional forms of recognized results.
+When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
+
+The object in the `NBest` list can include:
+
+| Parameter | Description |
+|--|-|
+| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
+| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
+| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
+| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
+| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
+| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
+| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
+| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
+| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
+
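+When you consume a `detailed` result in code, a common pattern is to pick the highest-confidence entry from the `NBest` list. Here's a minimal sketch that uses `System.Text.Json` and assumes `json` holds a `detailed` response body:
+
+```csharp
+// A minimal sketch: pick the highest-confidence entry from the NBest list.
+// `json` is assumed to hold a `detailed` response body.
+// Requires: using System; using System.Linq; using System.Text.Json;
+using var doc = JsonDocument.Parse(json);
+var best = doc.RootElement.GetProperty("NBest")
+    .EnumerateArray()
+    .OrderByDescending(entry => entry.GetProperty("Confidence").GetDouble())
+    .First();
+Console.WriteLine(best.GetProperty("Display").GetString());
+```
+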
+### Sample responses
+
+Here's a typical response for `simple` recognition:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "DisplayText": "Remind me to buy 5 pencils.",
+ "Offset": "1236645672289",
+ "Duration": "1236645672289"
+}
+```
+
+Here's a typical response for `detailed` recognition:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "Offset": "1236645672289",
+ "Duration": "1236645672289",
+ "NBest": [
+ {
+ "Confidence": 0.9052885,
+ "Display": "What's the weather like?",
+ "ITN": "what's the weather like",
+ "Lexical": "what's the weather like",
+ "MaskedITN": "what's the weather like"
+ },
+ {
+ "Confidence": 0.92459863,
+ "Display": "what is the weather like",
+ "ITN": "what is the weather like",
+ "Lexical": "what is the weather like",
+ "MaskedITN": "what is the weather like"
+ }
+ ]
+}
+```
+
+Here's a typical response for recognition with pronunciation assessment:
+
+```json
+{
+ "RecognitionStatus": "Success",
+ "Offset": "400000",
+ "Duration": "11000000",
+ "NBest": [
+ {
+ "Confidence" : "0.87",
+ "Lexical" : "good morning",
+ "ITN" : "good morning",
+ "MaskedITN" : "good morning",
+ "Display" : "Good morning.",
+ "PronScore" : 84.4,
+ "AccuracyScore" : 100.0,
+ "FluencyScore" : 74.0,
+ "CompletenessScore" : 100.0,
+ "Words": [
+ {
+ "Word" : "Good",
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None",
+ "Offset" : 500000,
+ "Duration" : 2700000
+ },
+ {
+ "Word" : "morning",
+ "AccuracyScore" : 100.0,
+ "ErrorType" : "None",
+ "Offset" : 5300000,
+ "Duration" : 900000
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Next steps
+
+- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/)
+- [Customize acoustic models](./how-to-custom-speech-train-model.md)
+- [Customize language models](./how-to-custom-speech-train-model.md)
+- [Get familiar with batch transcription](batch-transcription.md)
+
cognitive-services Rest Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text.md
Title: Speech-to-text API reference (REST) - Speech service
+ Title: Speech-to-text REST API v3.0 - Speech service
-description: Learn how to use REST APIs to convert speech to text.
+description: Get reference documentation for Speech-to-text REST API v3.0.
Previously updated : 01/24/2022 Last updated : 04/01/2022 ms.devlang: csharp
-# Speech-to-text REST APIs
+# Speech-to-text REST API v3.0
-Speech-to-text has two REST APIs. Each API serves a special purpose and uses its own set of endpoints. In this article, you learn how to use those APIs, including authorization options, query options, how to structure a request, and how to interpret a response.
+Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md).
-## Speech-to-text REST API v3.0
-
-Speech-to-text REST API v3.0 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md). If you need to communicate with the online transcription via REST, use the [speech-to-text REST API for short audio](#speech-to-text-rest-api-for-short-audio).
+> See the [Speech to Text API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/) reference documentation for details.
Use REST API v3.0 to: - Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
Use REST API v3.0 to:
- Get logs for each endpoint if logs have been requested for that endpoint. - Request the manifest of the models that you create, to set up on-premises containers.
+## Features
+ REST API v3.0 includes such features as: - **Webhook notifications**: All running processes of the service support webhook notifications. REST API v3.0 provides the calls to enable you to register your webhooks where notifications are sent. - **Updating models behind endpoints**
For examples of using REST API v3.0 with batch transcription, see [How to use ba
For information about migrating to the latest version of the speech-to-text REST API, see [Migrate code from v2.0 to v3.0 of the REST API](./migrate-v2-to-v3.md).
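
As a quick illustration, here's a minimal sketch that lists batch transcriptions by calling v3.0 directly. The region and subscription key are placeholders; see the reference documentation linked above for the full request and response contracts:

```csharp
// A minimal sketch: list batch transcriptions with Speech-to-text REST API v3.0.
// The region and subscription key are placeholders.
// Requires: using System; using System.Net.Http;
using var client = new HttpClient();
client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_SUBSCRIPTION_KEY");
var response = await client.GetAsync("https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions");
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());
```
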
-You can find the full speech-to-text REST API v3.0 reference on the [Microsoft developer portal](https://centralus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0).
-
-## Speech-to-text REST API for short audio
-
-As an alternative to the [Speech SDK](speech-sdk.md), the Speech service allows you to convert speech to text by using the [REST API for short audio](#speech-to-text-rest-api-for-short-audio).
-This API is very limited. Use it only in cases where you can't use the Speech SDK.
-
-Before you use the speech-to-text REST API for short audio, consider the following limitations:
-
-* Requests that use the REST API for short audio and transmit audio directly can contain no more than 60 seconds of audio.
-* The REST API for short audio returns only final results. It doesn't provide partial results.
-
-If sending longer audio is a requirement for your application, consider using the Speech SDK or [speech-to-text REST API v3.0](#speech-to-text-rest-api-v30).
-
-> [!TIP]
-> For Azure Government and Azure China endpoints, see [this article about sovereign clouds](sovereign-clouds.md).
--
-### Regions and endpoints
-
-The endpoint for the REST API for short audio has this format:
-
-```
-https://<REGION_IDENTIFIER>.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1
-```
-
-Replace `<REGION_IDENTIFIER>` with the identifier that matches the region of your subscription from this table:
--
-> [!NOTE]
-> You must append the language parameter to the URL to avoid receiving a 4xx HTTP error. For example, the language set to US English via the West US endpoint is: `https://westus.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
-### Query parameters
-
-These parameters might be included in the query string of the REST request:
-
-| Parameter | Description | Required or optional |
-|--|-||
-| `language` | Identifies the spoken language that's being recognized. See [Supported languages](language-support.md#speech-to-text). | Required |
-| `format` | Specifies the result format. Accepted values are `simple` and `detailed`. Simple results include `RecognitionStatus`, `DisplayText`, `Offset`, and `Duration`. Detailed responses include four different representations of display text. The default setting is `simple`. | Optional |
-| `profanity` | Specifies how to handle profanity in recognition results. Accepted values are: <br><br>`masked`, which replaces profanity with asterisks. <br>`removed`, which removes all profanity from the result. <br>`raw`, which includes profanity in the result. <br><br>The default setting is `masked`. | Optional |
-| `cid` | When you're using the [Custom Speech portal](./custom-speech-overview.md) to create custom models, you can take advantage of the **Endpoint ID** value from the **Deployment** page. Use the **Endpoint ID** value as the argument to the `cid` query string parameter. | Optional |
-
-### Request headers
-
-This table lists required and optional headers for speech-to-text requests:
-
-|Header| Description | Required or optional |
-||-||
-| `Ocp-Apim-Subscription-Key` | Your subscription key for the Speech service. | Either this header or `Authorization` is required. |
-| `Authorization` | An authorization token preceded by the word `Bearer`. For more information, see [Authentication](#authentication). | Either this header or `Ocp-Apim-Subscription-Key` is required. |
-| `Pronunciation-Assessment` | Specifies the parameters for showing pronunciation scores in recognition results. These scores assess the pronunciation quality of speech input, with indicators like accuracy, fluency, and completeness. <br><br>This parameter is a Base64-encoded JSON that contains multiple detailed parameters. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters). | Optional |
-| `Content-type` | Describes the format and codec of the provided audio data. Accepted values are `audio/wav; codecs=audio/pcm; samplerate=16000` and `audio/ogg; codecs=opus`. | Required |
-| `Transfer-Encoding` | Specifies that chunked audio data is being sent, rather than a single file. Use this header only if you're chunking audio data. | Optional |
-| `Expect` | If you're using chunked transfer, send `Expect: 100-continue`. The Speech service acknowledges the initial request and awaits additional data.| Required if you're sending chunked audio data. |
-| `Accept` | If provided, it must be `application/json`. The Speech service provides results in JSON. Some request frameworks provide an incompatible default value. It's good practice to always include `Accept`. | Optional, but recommended. |
-
-### Audio formats
-
-Audio is sent in the body of the HTTP `POST` request. It must be in one of the formats in this table:
-
-| Format | Codec | Bit rate | Sample rate |
-|--|-|-|--|
-| WAV | PCM | 256 kbps | 16 kHz, mono |
-| OGG | OPUS | 256 kbps | 16 kHz, mono |
-
->[!NOTE]
->The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md).
-
-### Pronunciation assessment parameters
-
-This table lists required and optional parameters for pronunciation assessment:
-
-| Parameter | Description | Required or optional |
-|--|-||
-| `ReferenceText` | The text that the pronunciation will be evaluated against. | Required |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. | Optional |
-| `Granularity` | The evaluation granularity. Accepted values are:<br><br> `Phoneme`, which shows the score on the full-text, word, and phoneme levels.<br>`Word`, which shows the score on the full-text and word levels. <br>`FullText`, which shows the score on the full-text level only.<br><br> The default setting is `Phoneme`. | Optional |
-| `Dimension` | Defines the output criteria. Accepted values are:<br><br> `Basic`, which shows the accuracy score only. <br>`Comprehensive`, which shows scores on more dimensions (for example, fluency score and completeness score on the full-text level, and error type on the word level).<br><br> To see definitions of different score dimensions and word error types, see [Response parameters](#response-parameters). The default setting is `Basic`. | Optional |
-| `EnableMiscue` | Enables miscue calculation. With this parameter enabled, the pronounced words will be compared to the reference text. They'll be marked with omission or insertion based on the comparison. Accepted values are `False` and `True`. The default setting is `False`. | Optional |
-| `ScenarioId` | A GUID that indicates a customized point system. | Optional |
-
-Here's example JSON that contains the pronunciation assessment parameters:
-
-```json
-{
- "ReferenceText": "Good morning.",
- "GradingSystem": "HundredMark",
- "Granularity": "FullText",
- "Dimension": "Comprehensive"
-}
-```
-
-The following sample code shows how to build the pronunciation assessment parameters into the `Pronunciation-Assessment` header:
-
-```csharp
-var pronAssessmentParamsJson = $"{{\"ReferenceText\":\"Good morning.\",\"GradingSystem\":\"HundredMark\",\"Granularity\":\"FullText\",\"Dimension\":\"Comprehensive\"}}";
-var pronAssessmentParamsBytes = Encoding.UTF8.GetBytes(pronAssessmentParamsJson);
-var pronAssessmentHeader = Convert.ToBase64String(pronAssessmentParamsBytes);
-```
-
-We strongly recommend streaming (chunked) uploading while you're posting the audio data, which can significantly reduce the latency. To learn how to enable streaming, see the [sample code in various programming languages](https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/PronunciationAssessment).
-
->[!NOTE]
-> For more information, see [Pronunciation assessment](how-to-pronunciation-assessment.md).
-
-### Sample request
-
-The following sample includes the host name and required headers. It's important to note that the service also expects audio data, which is not included in this sample. As mentioned earlier, chunking is recommended but not required.
-
-```HTTP
-POST speech/recognition/conversation/cognitiveservices/v1?language=en-US&format=detailed HTTP/1.1
-Accept: application/json;text/xml
-Content-Type: audio/wav; codecs=audio/pcm; samplerate=16000
-Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
-Host: westus.stt.speech.microsoft.com
-Transfer-Encoding: chunked
-Expect: 100-continue
-```
-
-To enable pronunciation assessment, you can add the following header. To learn how to build this header, see [Pronunciation assessment parameters](#pronunciation-assessment-parameters).
-
-```HTTP
-Pronunciation-Assessment: eyJSZWZlcm...
-```
-
-### HTTP status codes
-
-The HTTP status code for each response indicates success or common errors.
-
-| HTTP status code | Description | Possible reasons |
-||-|--|
-| 100 | Continue | The initial request has been accepted. Proceed with sending the rest of the data. (This code is used with chunked transfer.) |
-| 200 | OK | The request was successful. The response body is a JSON object. |
-| 400 | Bad request | The language code wasn't provided, the language isn't supported, or the audio file is invalid (for example). |
-| 401 | Unauthorized | A subscription key or an authorization token is invalid in the specified region, or an endpoint is invalid. |
-| 403 | Forbidden | A subscription key or authorization token is missing. |
-
-### Chunked transfer
-
-Chunked transfer (`Transfer-Encoding: chunked`) can help reduce recognition latency. It allows the Speech service to begin processing the audio file while it's transmitted. The REST API for short audio does not provide partial or interim results.
-
-The following code sample shows how to send audio in chunks. Only the first chunk should contain the audio file's header. `request` is an `HttpWebRequest` object that's connected to the appropriate REST endpoint. `audioFile` is the path to an audio file on disk.
-
-```csharp
-var request = (HttpWebRequest)HttpWebRequest.Create(requestUri);
-request.SendChunked = true;
-request.Accept = @"application/json;text/xml";
-request.Method = "POST";
-request.ProtocolVersion = HttpVersion.Version11;
-request.Host = host;
-request.ContentType = @"audio/wav; codecs=audio/pcm; samplerate=16000";
-request.Headers["Ocp-Apim-Subscription-Key"] = "YOUR_SUBSCRIPTION_KEY";
-request.AllowWriteStreamBuffering = false;
-
-using (var fs = new FileStream(audioFile, FileMode.Open, FileAccess.Read))
-{
- // Open a request stream and write 1,024-byte chunks in the stream one at a time.
- byte[] buffer = null;
- int bytesRead = 0;
- using (var requestStream = request.GetRequestStream())
- {
- // Read 1,024 raw bytes from the input audio file.
- buffer = new Byte[checked((uint)Math.Min(1024, (int)fs.Length))];
- while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) != 0)
- {
- requestStream.Write(buffer, 0, bytesRead);
- }
-
- requestStream.Flush();
- }
-}
-```
-
-### Response parameters
-
-Results are provided as JSON. The `simple` format includes the following top-level fields:
-
-| Parameter | Description |
-|--|--|
-|`RecognitionStatus`|Status, such as `Success` for successful recognition. See the next table.|
-|`DisplayText`|The recognized text after capitalization, punctuation, inverse text normalization, and profanity masking. Present only on success. Inverse text normalization is conversion of spoken text to shorter forms, such as 200 for "two hundred" or "Dr. Smith" for "doctor smith."|
-|`Offset`|The time (in 100-nanosecond units) at which the recognized speech begins in the audio stream.|
-|`Duration`|The duration (in 100-nanosecond units) of the recognized speech in the audio stream.|
-
-The `RecognitionStatus` field might contain these values:
-
-| Status | Description |
-|--|-|
-| `Success` | The recognition was successful, and the `DisplayText` field is present. |
-| `NoMatch` | Speech was detected in the audio stream, but no words from the target language were matched. This status usually means that the recognition language is different from the language that the user is speaking. |
-| `InitialSilenceTimeout` | The start of the audio stream contained only silence, and the service timed out while waiting for speech. |
-| `BabbleTimeout` | The start of the audio stream contained only noise, and the service timed out while waiting for speech. |
-| `Error` | The recognition service encountered an internal error and could not continue. Try again if possible. |
-
-> [!NOTE]
-> If the audio consists only of profanity, and the `profanity` query parameter is set to `removed`, the service does not return a speech result.
-
-The `detailed` format includes additional forms of recognized results.
-When you're using the `detailed` format, `DisplayText` is provided as `Display` for each result in the `NBest` list.
-
-The object in the `NBest` list can include:
-
-| Parameter | Description |
-|--|-|
-| `Confidence` | The confidence score of the entry, from 0.0 (no confidence) to 1.0 (full confidence). |
-| `Lexical` | The lexical form of the recognized text: the actual words recognized. |
-| `ITN` | The inverse-text-normalized (ITN) or canonical form of the recognized text, with phone numbers, numbers, abbreviations ("doctor smith" to "dr smith"), and other transformations applied. |
-| `MaskedITN` | The ITN form with profanity masking applied, if requested. |
-| `Display` | The display form of the recognized text, with punctuation and capitalization added. This parameter is the same as what `DisplayText` provides when the format is set to `simple`. |
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. The accuracy score at the word and full-text levels is aggregated from the accuracy score at the phoneme level. |
-| `FluencyScore` | Fluency of the provided speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
-| `CompletenessScore` | Completeness of the speech, determined by calculating the ratio of pronounced words to reference text input. |
-| `PronScore` | Overall score that indicates the pronunciation quality of the provided speech. This score is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | Value that indicates whether a word is omitted, inserted, or badly pronounced, compared to `ReferenceText`. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, and `Mispronunciation`. |
-
-### Sample responses
-
-Here's a typical response for `simple` recognition:
-
-```json
-{
- "RecognitionStatus": "Success",
- "DisplayText": "Remind me to buy 5 pencils.",
- "Offset": "1236645672289",
- "Duration": "1236645672289"
-}
-```
-
-Here's a typical response for `detailed` recognition:
-
-```json
-{
- "RecognitionStatus": "Success",
- "Offset": "1236645672289",
- "Duration": "1236645672289",
- "NBest": [
- {
- "Confidence": 0.9052885,
- "Display": "What's the weather like?",
- "ITN": "what's the weather like",
- "Lexical": "what's the weather like",
- "MaskedITN": "what's the weather like"
- },
- {
- "Confidence": 0.92459863,
- "Display": "what is the weather like",
- "ITN": "what is the weather like",
- "Lexical": "what is the weather like",
- "MaskedITN": "what is the weather like"
- }
- ]
-}
-```
-
-Here's a typical response for recognition with pronunciation assessment:
-
-```json
-{
- "RecognitionStatus": "Success",
- "Offset": "400000",
- "Duration": "11000000",
- "NBest": [
- {
- "Confidence" : "0.87",
- "Lexical" : "good morning",
- "ITN" : "good morning",
- "MaskedITN" : "good morning",
- "Display" : "Good morning.",
- "PronScore" : 84.4,
- "AccuracyScore" : 100.0,
- "FluencyScore" : 74.0,
- "CompletenessScore" : 100.0,
- "Words": [
- {
- "Word" : "Good",
- "AccuracyScore" : 100.0,
- "ErrorType" : "None",
- "Offset" : 500000,
- "Duration" : 2700000
- },
- {
- "Word" : "morning",
- "AccuracyScore" : 100.0,
- "ErrorType" : "None",
- "Offset" : 5300000,
- "Duration" : 900000
- }
- ]
- }
- ]
-}
-```
- ## Next steps -- [Create a free Azure account](https://azure.microsoft.com/free/cognitive-services/) - [Customize acoustic models](./how-to-custom-speech-train-model.md) - [Customize language models](./how-to-custom-speech-train-model.md) - [Get familiar with batch transcription](batch-transcription.md)
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Speech Services REST API endpoints in Azure Government have the following format
| REST API type / operation | Endpoint format | |--|--| | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/sts/v1.0/issueToken`
-| [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
-| [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
+| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.microsoft.us/<URL_PATH>` |
+| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.us/<URL_PATH>` |
| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.us/<URL_PATH>` | Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
Speech Services REST API endpoints in Azure China have the following format:
| REST API type / operation | Endpoint format | |--|--| | Access token | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/sts/v1.0/issueToken`
-| [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
-| [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
+| [Speech-to-text REST API v3.0](rest-speech-to-text.md) | `https://<REGION_IDENTIFIER>.api.cognitive.azure.cn/<URL_PATH>` |
+| [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) | `https://<REGION_IDENTIFIER>.stt.speech.azure.cn/<URL_PATH>` |
| [Text-to-speech REST API](rest-text-to-speech.md) | `https://<REGION_IDENTIFIER>.tts.speech.azure.cn/<URL_PATH>` | Replace `<REGION_IDENTIFIER>` with the identifier matching the region of your subscription from this table:
cognitive-services Speech Services Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-private-link.md
Speech service has REST APIs for [Speech-to-text](rest-speech-to-text.md) and [T
Speech-to-text has two REST APIs. Each API serves a different purpose, uses different endpoints, and requires a different approach when you're using it in the private-endpoint-enabled scenario. The Speech-to-text REST APIs are:-- [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md)-- [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio), which is used for online transcription
+- [Speech-to-text REST API v3.0](rest-speech-to-text.md), which is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). v3.0 is a [successor of v2.0](./migrate-v2-to-v3.md)
+- [Speech-to-text REST API for short audio](rest-speech-to-text-short.md), which is used for online transcription
Usage of the Speech-to-text REST API for short audio and the Text-to-speech REST API in the private endpoint scenario is the same. It's equivalent to the [Speech SDK case](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk) described later in this article.
The next subsections describe both cases.
#### Speech-to-text REST API v3.0
-Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
+Usually, Speech resources use [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the [Speech-to-text REST API v3.0](rest-speech-to-text.md). These resources have the following naming format: <p/>`{region}.api.cognitive.microsoft.com`.
This is a sample request URL:
After you turn on a custom domain name for a Speech resource, you typically repl
#### Speech-to-text REST API for short audio and Text-to-speech REST API
-The [Speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and the [Text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints:
+The [Speech-to-text REST API for short audio](rest-speech-to-text-short.md) and the [Text-to-speech REST API](rest-text-to-speech.md) use two types of endpoints:
- [Cognitive Services regional endpoints](../cognitive-services-custom-subdomains.md#is-there-a-list-of-regional-endpoints) for communicating with the Cognitive Services REST API to obtain an authorization token - Special endpoints for all other operations
The detailed description of the special endpoints and how their URL should be tr
Get familiar with the material in the subsection mentioned in the previous paragraph and see the following example. The example describes the Text-to-speech REST API. Usage of the Speech-to-text REST API for short audio is fully equivalent. > [!NOTE]
-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
+> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in private endpoint scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
> > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
Speech-to-text REST API v3.0 usage is fully equivalent to the case of [private-e
#### Speech-to-text REST API for short audio and Text-to-speech REST API
-In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio) and [Text-to-speech REST API](rest-text-to-speech.md) documentation.
+In this case, usage of the Speech-to-text REST API for short audio and usage of the Text-to-speech REST API have no differences from the general case, with one exception. (See the following note.) You should use both APIs as described in the [speech-to-text REST API for short audio](rest-speech-to-text-short.md) and [Text-to-speech REST API](rest-text-to-speech.md) documentation.
> [!NOTE]
-> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
+> When you're using the Speech-to-text REST API for short audio and Text-to-speech REST API in custom domain scenarios, use a subscription key passed through the `Ocp-Apim-Subscription-Key` header. (See details for [Speech-to-text REST API for short audio](rest-speech-to-text-short.md#request-headers) and [Text-to-speech REST API](rest-text-to-speech.md#request-headers))
> > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the following tables, the parameters without the **Adjustable** row aren't ad
#### Online transcription
-You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text.md#speech-to-text-rest-api-for-short-audio).
+You can use online transcription with the [Speech SDK](speech-sdk.md) or the [speech-to-text REST API for short audio](rest-speech-to-text-short.md).
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
You can use online transcription with the [Speech SDK](speech-sdk.md) or the [sp
| Quota | Free (F0)<sup>1</sup> | Standard (S0) | |--|--|--|
-| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) limit | Not available for F0 | 300 requests per minute |
+| [Speech-to-text REST API V2.0 and v3.0](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
| Max audio input file size | N/A | 1 GB | | Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB | | Max blob container size | N/A | 5 GB |
cognitive-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md
When you're using speech-to-text for recognition and transcription in a unique e
## Get started
-To get started with speech-to-text, see the [quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text.md#pronunciation-assessment-parameters), and the [Speech CLI](spx-overview.md).
+To get started with speech-to-text, see the [quickstart](get-started-speech-to-text.md). Speech-to-text is available via the [Speech SDK](speech-sdk.md), the [REST API](rest-speech-to-text-short.md#pronunciation-assessment-parameters), and the [Speech CLI](spx-overview.md).
## Sample code
Sample code for the Speech SDK is available on GitHub. These samples cover commo
- [Speech-to-text samples (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk) - [Batch transcription samples (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch)-- [Pronunciation assessment samples (REST)](rest-speech-to-text.md#pronunciation-assessment-parameters)
+- [Pronunciation assessment samples (REST)](rest-speech-to-text-short.md#pronunciation-assessment-parameters)
## Customization
Use the following list to find the appropriate Speech SDK reference docs:
For speech-to-text REST APIs, see the following resources: - [REST API: Speech-to-text](rest-speech-to-text.md)-- [REST API: Pronunciation assessment](rest-speech-to-text.md#pronunciation-assessment-parameters)
+- [REST API: Pronunciation assessment](rest-speech-to-text-short.md#pronunciation-assessment-parameters)
- <a href="https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0">REST API: Batch transcription and customization </a> ## Next steps
cognitive-services Swagger Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/swagger-documentation.md
Title: Swagger documentation - Speech service
+ Title: Generate a REST API client library - Speech service
-description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages. All operations in our service are supported by Swagger
+description: The Swagger documentation can be used to auto-generate SDKs for a number of programming languages.
Last updated 02/16/2021
-# Swagger documentation
+# Generate a REST API client library for the Speech-to-text REST API v3.0
Speech service offers a Swagger specification to interact with a handful of REST APIs used to import data, create models, test model accuracy, create custom endpoints, queue up batch transcriptions, and manage subscriptions. Most operations available through the [Custom Speech area of the Speech Studio](https://aka.ms/speechstudio/customspeech) can be completed programmatically using these APIs. > [!NOTE] > Speech service has several REST APIs for [Speech-to-text](rest-speech-to-text.md) and [Text-to-speech](rest-text-to-speech.md). >
-> However only [Speech-to-text REST API v3.0](rest-speech-to-text.md#speech-to-text-rest-api-v30) and v2.0 are documented in the Swagger specification. See the documents referenced in the previous paragraph for the information on all other Speech Services REST APIs.
+> However, only [Speech-to-text REST API v3.0](rest-speech-to-text.md) is documented in the Swagger specification. See the documents referenced in the previous paragraph for information on all other Speech Services REST APIs.
## Generating code from the Swagger specification
You can use the Python library that you generated with the [Speech service sampl
## Reference documents
-* [Swagger: Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
-* [Speech-to-text REST API](rest-speech-to-text.md)
+* [Speech-to-text REST API v3.0](rest-speech-to-text.md)
* [Text-to-speech REST API](rest-text-to-speech.md) ## Next steps
cognitive-services Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/concepts/evaluation.md
Previously updated : 01/24/2022 Last updated : 04/05/2022 # Evaluation metrics
-Your [dataset is split](../how-to/train-model.md#data-split) into two parts: a set for training, and a set for testing. The training set is used while building the model, and the testing set is used as a blind set to evaluate model performance after training is completed.
+Your [dataset is split](../how-to/train-model.md) into two parts: a set for training, and a set for testing. The training set is used while building the model, and the testing set is used as a blind set to evaluate model performance after training is completed.
Model evaluation is triggered after training is completed successfully. The evaluation process starts by using the trained model to predict user-defined classes for files in the test set, and compares them with the provided data tags (which establish a baseline of truth). The results are returned so you can review the model's performance. For evaluation, custom text classification uses the following metrics:
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/faq.md
Previously updated : 11/16/2021 Last updated : 04/05/2022
When you're ready to start [using your model to make predictions](#how-do-i-use-
## What is the recommended CI/CD process?
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you train a new model, your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets. Because of this, there is no guarantee that the model evaluation is performed on the same test set, so results are not comparable. It is recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [tag your data](how-to/tag-data.md#tag-your-data), you can determine how your dataset is split into training and testing sets.
## Does a low or high model score guarantee bad or good performance in production?
See the [data selection and schema design](how-to/design-schema.md) article for
## When I retrain my model I get different results, why is this?
-* When you train a new model your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+* When you [tag your data](how-to/tag-data.md#tag-your-data), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets. In that case, there's no guarantee that the model evaluation is performed on the same test set, so results aren't comparable.
* If you are retraining the same model, your test set will be the same, but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough, which is a factor of how representative and distinct your data is, and the quality of your tagged data.
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/tag-data.md
Previously updated : 11/02/2021 Last updated : 04/05/2022 # Tag text data for training your model
-Before creating a custom text classification model, you need to have tagged data first. If your data is not tagged already, you can tag it in the language studio. Tagged data informs the model how to interpret text, and is used for training and evaluation.
+Before creating a custom text classification model, you need tagged data. If your data isn't tagged already, you can tag it in Language Studio. Tagged data informs the model how to interpret text, and is used for training and evaluation.
## Prerequisites
Before you can tag data, you need:
See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-<!--Tagging your data will let you [train your model](train-model.md), [evaluate it](train-model.md), and use it to [classify text](call-api.md).-->
- ## Tag your data After training data is uploaded to your Azure storage account, you will need to tag it, so your model knows which words will be associated with the classes you need. When you tag data in Language Studio (or manually tag your data), these tags will be stored in [the JSON format](../concepts/data-formats.md) that your model will use during training.
As you tag your data, keep in mind:
* In general, more tagged data leads to better results, provided the data is tagged accurately.
-* Although we recommended having around 50 tagged files per class, there is no fixed number that can guarantee your model will perform the best, because model performance also depends on possible ambiguity in your [schema](design-schema.md), and the quality of your tagged data.
+* Although we recommend having around 50 tagged files per class, there's no fixed number that can guarantee your model will perform the best, because model performance also depends on possible ambiguity in your [schema](design-schema.md), and the quality of your tagged data.
Use the following steps to tag your data
Use the following steps to tag your data
1. From the left side menu, select **Tag data**
-3. You can find a list of all .txt files available in your projects to the left. You can select the file you want to start tagging or you can use the Back and Next button from the bottom of the page to navigate.
+3. On the left, you can find a list of all the .txt files available in your project. You can select the file you want to start tagging, or use the **Back** and **Next** buttons at the bottom of the page to navigate.
4. You can either view all files or only tagged files by changing the view from the **Viewing** drop-down menu.
Use the following steps to tag your data
5. Before you start tagging, add classes to your project from the top-right corner - :::image type="content" source="../media/tag-1.png" alt-text="A screenshot showing the data tagging screen" lightbox="../media/tag-1.png":::
-6. Start tagging your files.
+6. Start tagging your files. In the images below:
+
+ * *Section 1* shows the content of the text file.
+
+ * *Section 2* shows your project's classes and their distribution across your files and tags.
+
+ * *Section 3* is the split project data toggle. You can choose to add the selected text file to your training set or to the testing set. By default, the toggle is off, and all text files are added to your training set.
+
+ **Single label classification**: a file can be tagged with only one class. To tag it, select the button next to the class you want to tag the file with.
+
+ :::image type="content" source="../media/single.png" alt-text="A screenshot showing the single label classification tag page" lightbox="../media/single.png":::
+
+ **Multi label classification**: a file can be tagged with multiple classes. To tag it, select the check boxes next to all the classes you want to tag the file with.
- * **Single label classification**: your file can only be tagged with one class, you can do so by checking one of the radio buttons next to the class you want to tag this file with.
+ :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::
- :::image type="content" source="../media/tag-single.png" alt-text="A screenshot showing the single label classification menu" lightbox="../media/tag-single.png":::
+In the distribution section, you can view the class distribution across the training and testing sets.
+
- * **Multi label classification**: your file can be tagged with multiple classes, you can do so by checking all applicable check boxes next to the classes you want to tag this file with.
+To add a text file to a training or testing set, use the buttons to choose the set it belongs to.
- :::image type="content" source="../media/tag-multi.png" alt-text="A screenshot showing the multi label classification menu" lightbox="../media/tag-multi.png":::
+> [!TIP]
+> We recommend that you define your testing set.
-While tagging, your changes will be synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, click on Save tags button at the top of the page.
+Your changes will be saved periodically as you add tags. If they haven't been saved yet, you'll see a warning at the top of the page. If you want to save manually, select **Save tags** at the top of the page.
## Remove tags
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/train-model.md
Previously updated : 11/02/2021 Last updated : 04/05/2022
Before you train your model you need:
See the [application development lifecycle](../overview.md#project-development-lifecycle) for more information.
-## Data split
-
-Before you start the training process, files in your dataset are divided into three groups at random:
-
-* The **training set** contains 80% of the files in your dataset. It is the main set that is used to train the model.
-
-* The **test set** contains 20% of the files available in your dataset. This set is used to provide an unbiased [evaluation](../how-to/view-model-evaluation.md) of the model. This set is not introduced to the model during training. The details of correct and incorrect predictions for this set are not shown so that you don't readjust your training data and alter the results.
- ## Train model in Language Studio 1. Go to your project page in [Language Studio](https://aka.ms/LanguageStudio).
Before you start the training process, files in your dataset are divided into th
:::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
+If you enabled the [**Split project data manually** option](tag-data.md#tag-your-data) when tagging your data, you'll see two training options:
+
+* **Automatic split the testing**: The data is randomly split for each class between the training and testing sets, according to the percentages you choose. The default is 80% for training and 20% for testing. To change these values, select the set you want to change and enter the new value.
+
+* **Use a manual split**: Assign each document to either the training or testing set. This option requires that you first add files to the testing set while tagging your data.
+ 5. Select the **Train** button.
-6. You can check the status of the training job in the same page. Only successfully completed tasks will generate models.
+6. You can check the status of the training job on the same page. Only successfully completed training jobs will generate models.
You can only have one training job running at a time. You cannot create or start other tasks in the same project.
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/service-limits.md
Use this article to learn about the data and rate limits when using custom text
* All files should be available at the root of your container.
-* Your [training dataset](how-to/train-model.md#data-split) should include at least 10 files and no more than 1,000,000 files.
+* Your [training dataset](how-to/train-model.md) should include at least 10 files and no more than 1,000,000 files.
## API limits
communication-services Detailed Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/detailed-call-flows.md
Communication Services is built primarily on two types of traffic: **real-time m
**Real-time media** is transmitted using the Real-time Transport Protocol (RTP). This protocol supports audio, video, and screen sharing data transmission. This data is sensitive to network latency issues. While it's possible to transmit real-time media using TCP or HTTP, we recommend using UDP as the transport-layer protocol to support high-performance end-user experiences. Media payloads transmitted over RTP are secured using SRTP.
-Users of your Communication Services solution will be connecting to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios, SIP can be used as a signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will the users of your solution a pleasant end-user experience.
+Users of your Communication Services solution will be connecting to your services from their client devices. Communication between these devices and your servers is handled with **signaling**. For example: call initiation and real-time chat are supported by signaling between devices and your service. Most signaling traffic uses HTTPS REST, though in some scenarios, SIP can be used as a signaling traffic protocol. While this type of traffic is less sensitive to latency, low-latency signaling will give the users of your solution a pleasant end-user experience.
Call flows in ACS are based on the Session Description Protocol (SDP) RFC 4566 offer and answer model over HTTPS. Once the callee accepts an incoming call, the caller and callee agree on the session parameters.
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
In the tables below we summarize these areas and availability of REST APIs and S
Development of Calling and Chat applications can be accelerated by the [Azure Communication Services UI library](./ui-library/ui-library-overview.md). The customizable UI library provides open-source UI components for Web and mobile apps, and a Microsoft Teams theme. ## SDKs+ | Assembly | Protocols| Environment | Capabilities| |--|-||-| | Azure Resource Manager | [REST](/rest/api/communication/communicationservice)| Service| Provision and manage Communication Services resources|
Publishing locations for individual SDK packages are detailed below.
| Calling| [npm](https://www.npmjs.com/package/@azure/communication-calling) | [NuGet](https://www.NuGet.org/packages/Azure.Communication.Calling) | -| - | [GitHub](https://github.com/Azure/Communication/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/)| -| |Call Automation||[NuGet](https://www.NuGet.org/packages/Azure.Communication.CallingServer/)||[Maven](https://search.maven.org/artifact/com.azure/azure-communication-callingserver) |Network Traversal| [npm](https://www.npmjs.com/package/@azure/communication-network-traversal)|[NuGet](https://www.NuGet.org/packages/Azure.Communication.NetworkTraversal/) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | -|- | - |
-| UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) |
+| UI Library| [npm](https://www.npmjs.com/package/@azure/communication-react) | - | - | - | [GitHub](https://github.com/Azure/communication-ui-library-ios) | [GitHub](https://github.com/Azure/communication-ui-library-android) | [GitHub](https://github.com/Azure/communication-ui-library), [Storybook](https://azure.github.io/communication-ui-library/?path=/story/overview--page) |
| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html)| -| [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | [docs](/objectivec/communication-services/calling/)| [docs](/java/api/com.azure.android.communication.calling)| -| ### SDK platform support details
Publishing locations for individual SDK packages are detailed below.
Except for Calling, Communication Services packages target .NET Standard 2.0, which supports the platforms listed below. Support via .NET Framework 4.6.1+ - Windows 10, 8.1, 8 and 7 - Windows Server 2012 R2, 2012 and 2008 R2 SP1 Support via .NET Core 2.0:+ - Windows 10 (1607+), 7 SP1+, 8.1 - Windows Server 2008 R2 SP1+ - Mac OS X 10.12+
Support via .NET Core 2.0:
- Xamarin Mac 3.8 The Calling package supports UWP apps built with .NET Native or C++/WinRT on:+ - Windows 10 10.0.17763 - Windows Server 2019 10.0.17763 ## REST APIs+ Communication Services APIs are documented alongside other [Azure REST APIs in docs.microsoft.com](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). ### REST API Throttles+ Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a `429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). A simple retry sketch is shown below. | API| Throttle|
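
Because a throttled request returns `429 - Too Many Requests`, callers should back off before retrying. The following is a minimal, hypothetical Java sketch (it is not part of any Communication Services SDK, and the endpoint URL is a placeholder) that retries an HTTP call when a 429 is returned, honoring the `Retry-After` header when the service provides one.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ThrottleAwareCaller {
    // Sends the request and retries on HTTP 429, honoring Retry-After (in seconds) when provided.
    static HttpResponse<String> sendWithRetry(HttpClient client, HttpRequest request, int maxRetries)
            throws Exception {
        long backoffSeconds = 1;
        for (int attempt = 0; ; attempt++) {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 429 || attempt >= maxRetries) {
                return response;
            }
            // Assumes Retry-After is expressed in seconds; fall back to exponential backoff otherwise.
            long waitSeconds = response.headers()
                    .firstValue("Retry-After")
                    .map(Long::parseLong)
                    .orElse(backoffSeconds);
            Thread.sleep(Duration.ofSeconds(waitSeconds).toMillis());
            backoffSeconds *= 2; // exponential backoff between attempts
        }
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder endpoint; substitute the Communication Services REST call you are making.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.communication.azure.com/placeholder"))
                .GET()
                .build();
        HttpResponse<String> response = sendWithRetry(client, request, 5);
        System.out.println("Status: " + response.statusCode());
    }
}
```

The Azure SDK clients generally apply their own retry policies, so a hand-rolled loop like this is mainly relevant when you call the REST APIs directly.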
For more information, see the following SDK overviews:
To get started with Azure Communication - [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md)-- Generate [User Access Tokens](../quickstarts/access-tokens.md)
+- Generate [User Access Tokens](../quickstarts/access-tokens.md)
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
To find your User ID, follow the steps listed below:
![Screenshot of how to copy Azure Active Directory user ID and store it.](./media/troubleshooting/copy-aad-user-id.png)
+## Getting immutable resource ID
+Sometimes you also need to provide the immutable resource ID of your Communication Service resource. To find it, follow the steps listed below:
+
+1. Navigate to the [Azure portal](https://portal.azure.com) and sign in using your credentials.
+1. Open your Communication Service resource.
+1. From the left pane, select **Overview**, and switch to the **JSON view**.
+ :::image type="content" source="./media/troubleshooting/switch-communication-resource-to-json.png" alt-text="Screenshot of how to switch Communication Resource overview to a JSON view.":::
+1. From the **Resource JSON** page, copy the `immutableResourceId` value, and provide it to your support team.
+ :::image type="content" source="./media/troubleshooting/communication-resource-id-json.png" alt-text="Screenshot of Resource JSON.":::
+ ## Calling SDK error codes The Azure Communication Services Calling SDK uses the following error codes to help you troubleshoot calling issues. These error codes are exposed through the `call.callEndReason` property after a call ends.
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Regulations around the maintenance of personal data require the ability to expor
Currently, Azure Communication Services Call Recording APIs are available in C# and Java. ## Next steps
-Check out the [Call Recoding Quickstart](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn more.
+Check out the [Call Recording Quickstart](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn more.
Learn more about [Call Automation APIs](./call-automation-apis.md).
communication-services Localization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/localization.md
+
+ Title: Localization over the UI Library
+
+description: Use Azure Communication Services Mobile UI library to set up localization
++++ Last updated : 04/03/2022+
+zone_pivot_groups: acs-plat-web-ios-android
+
+#Customer intent: As a developer, I want to set up the localization of my application
++
+# Localization
+
+Localization is key to making products that can be used across the world and by people who speak different languages. The UI Library provides out-of-the-box support for some languages and capabilities such as RTL. Developers can also provide their own localization files to be used with the UI Library.
+
+Learn how to set up localization correctly using the UI Library in your application.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A `User Access Token` to enable the call client. For more information, see [how to get a `User Access Token`](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md)
++++
+## Next steps
+
+- [Learn more about UI Library](../../quickstarts/ui-library/get-started-composites.md)
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
# About Azure DCasv5/ECasv5-series confidential virtual machines (preview) > [!IMPORTANT]
-> Confidential virtual machines (confidential VMs) in Azure confidential computing is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Azure DCasv5/ECasv5-series confidential virtual machines are currently in Preview. Use is subject to your [Azure subscription](https://azure.microsoft.com/support/legal/) and terms applicable to "Previews" as detailed in the Universal License Terms for Online Services section of the [Microsoft Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage) and the [Microsoft Products and Services Data Protection Addendum](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA) ("DPA").
+ Azure confidential computing offers confidential VMs based on [AMD processors with SEV-SNP technology](virtual-machine-solutions-amd.md). Confidential VMs are for tenants with high security and confidentiality requirements. These VMs provide a strong, hardware-enforced boundary to help meet your security needs. You can use confidential VMs for migrations without making changes to your code, with the platform protecting your VM's state from being read or modified.
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
TCP probes wait for a connection to be established with the server to indicate s
- You can only add one of each probe type per container. - `exec` probes aren't supported. - Port values must be integers; named ports aren't supported.
+- gRPC is not supported.
## Examples
cosmos-db Migrate Data Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/migrate-data-arcion.md
Previously updated : 04/02/2022 Last updated : 04/04/2022
Cassandra API in Azure Cosmos DB has become a great choice for enterprise worklo
There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Apache Cassandra database to Azure Cosmos DB Cassandra API using Arcion.
+> [!NOTE]
+> This offering from Arcion is currently in beta. For more information, please contact them at [Arcion Support](mailto:support@arcion.io)
+ ## Benefits using Arcion for migration ArcionΓÇÖs migration solution follows a step by step approach to migrate complex operational workloads. The following are some of the key aspects of ArcionΓÇÖs zero-downtime migration plan:
cosmos-db Oracle Migrate Cosmos Db Arcion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/oracle-migrate-cosmos-db-arcion.md
Previously updated : 04/02/2022 Last updated : 04/04/2022
Cassandra API in Azure Cosmos DB has become a great choice for enterprise worklo
There are various ways to migrate database workloads from one platform to another. [Arcion](https://www.arcion.io) is a tool that offers a secure and reliable way to perform zero downtime migration from other databases to Azure Cosmos DB. This article describes the steps required to migrate data from Oracle database to Azure Cosmos DB Cassandra API using Arcion.
+> [!NOTE]
+> This offering from Arcion is currently in beta. For more information, please contact them at [Arcion Support](mailto:support@arcion.io)
+ ## Benefits using Arcion for migration ArcionΓÇÖs migration solution follows a step by step approach to migrate complex operational workloads. The following are some of the key aspects of ArcionΓÇÖs zero-downtime migration plan:
cosmos-db Bulk Executor Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/bulk-executor-java.md
Title: Use bulk executor Java library in Azure Cosmos DB to perform bulk import and update operations description: Bulk import and update Azure Cosmos DB documents using bulk executor Java library--++ ms.devlang: java Previously updated : 12/09/2021 Last updated : 03/07/2022
-# Use bulk executor Java library to perform bulk operations on Azure Cosmos DB data
+# Perform bulk operations on Azure Cosmos DB data
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-This tutorial provides instructions on using the Azure Cosmos DB's bulk executor Java library to import, and update Azure Cosmos DB documents. To learn about bulk executor library and how it helps you use massive throughput and storage, see [bulk executor Library overview](../bulk-executor-overview.md) article. In this tutorial, you build a Java application that generates random documents and they are bulk imported into an Azure Cosmos container. After importing, you will bulk update some properties of a document.
+This tutorial provides instructions on performing bulk operations in the [Azure Cosmos DB Java V4 SDK](sql-api-sdk-java-v4.md). This version of the SDK comes with the bulk executor library built in. If you are using an older version of the Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support.
+
+Currently, the bulk executor library is supported only by Azure Cosmos DB SQL API and Gremlin API accounts. To learn about using bulk executor .NET library with Gremlin API, see [perform bulk operations in Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
-> [!IMPORTANT]
-> The [Azure Cosmos DB Java V4 SDK](sql-api-sdk-java-v4.md) comes with the bulk executor library built-in to the SDK. If you are using an older version of Java SDK, it's recommended to [migrate to the latest version](migrate-java-v4-sdk.md). Azure Cosmos DB Java V4 SDK is the current recommended solution for Java bulk support. Currently, the bulk executor library is supported only by Azure Cosmos DB SQL API and Gremlin API accounts. To learn about using bulk executor .NET library with Gremlin API, see [perform bulk operations in Azure Cosmos DB Gremlin API](../graph/bulk-executor-graph-dotnet.md).
->
## Prerequisites
This tutorial provides instructions on using the Azure Cosmos DB's bulk executor
* You can [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription, free of charge and commitments. Or, you can use the [Azure Cosmos DB Emulator](../local-emulator.md) with the `https://localhost:8081` endpoint. The Primary Key is provided in [Authenticating requests](../local-emulator.md#authenticate-requests).
-* [Java Development Kit (JDK) 1.7+](/java/azure/jdk/)
+* [Java Development Kit (JDK) 1.8+](/java/azure/jdk/)
- On Ubuntu, run `apt-get install default-jdk` to install the JDK. - Be sure to set the JAVA_HOME environment variable to point to the folder where the JDK is installed.
This tutorial provides instructions on using the Azure Cosmos DB's bulk executor
## Clone the sample application
-Now let's switch to working with code by downloading a sample Java application from GitHub. This application performs bulk operations on Azure Cosmos DB data. To clone the application, open a command prompt, navigate to the directory where you want to copy the application and run the following command:
+Now let's switch to working with code by downloading a generic samples repository for Java V4 SDK for Azure Cosmos DB from GitHub. These sample applications perform CRUD operations and other common operations on Azure Cosmos DB. To clone the repository, open a command prompt, navigate to the directory where you want to copy the application and run the following command:
```bash
- git clone https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started.git
+ git clone https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples
```
-The cloned repository contains two samples "bulkimport" and "bulkupdate" relative to the "\azure-cosmosdb-bulkexecutor-java-getting-started\samples\bulkexecutor-sample\src\main\java\com\microsoft\azure\cosmosdb\bulkexecutor" folder. The "bulkimport" application generates random documents and imports them to Azure Cosmos DB. The "bulkupdate" application updates some documents in Azure Cosmos DB. In the next sections, we will review the code in each of these sample apps.
+The cloned repository contains a sample `SampleBulkQuickStartAsync.java` in the `/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/bulk/async` folder. The application generates documents and executes operations to bulk create, upsert, replace and delete items in Azure Cosmos DB. In the next sections, we will review the code in the sample app.
-## Bulk import data to Azure Cosmos DB
+## Bulk execution in Azure Cosmos DB
-1. The Azure Cosmos DB's connection strings are read as arguments and assigned to variables defined in CmdLineConfiguration.java file.
+1. The Azure Cosmos DB connection strings are read as arguments and assigned to variables defined in the `/examples/common/AccountSettings.java` file. These environment variables must be set:
-2. Next the DocumentClient object is initialized by using the following statements:
+```
+ACCOUNT_HOST=your account hostname;ACCOUNT_KEY=your account primary key
+```
- ```java
- ConnectionPolicy connectionPolicy = new ConnectionPolicy();
- connectionPolicy.setMaxPoolSize(1000);
- DocumentClient client = new DocumentClient(
- HOST,
- MASTER_KEY,
- connectionPolicy,
- ConsistencyLevel.Session)
- ```
+To run the bulk sample, specify its Main Class:
+
+```
+com.azure.cosmos.examples.bulk.async.SampleBulkQuickStartAsync
+```
-3. The DocumentBulkExecutor object is initialized with a high retry value for wait time and throttled requests. And then they are set to 0 to pass congestion control to DocumentBulkExecutor for its lifetime.
+2. The `CosmosAsyncClient` object is initialized by using the following statements:
```java
- // Set client's retry options high for initialization
- client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(30);
- client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(9);
-
- // Builder pattern
- Builder bulkExecutorBuilder = DocumentBulkExecutor.builder().from(
- client,
- DATABASE_NAME,
- COLLECTION_NAME,
- collection.getPartitionKey(),
- offerThroughput) // throughput you want to allocate for bulk import out of the container's total throughput
-
- // Instantiate DocumentBulkExecutor
- DocumentBulkExecutor bulkExecutor = bulkExecutorBuilder.build()
-
- // Set retries to 0 to pass complete control to bulk executor
- client.getConnectionPolicy().getRetryOptions().setMaxRetryWaitTimeInSeconds(0);
- client.getConnectionPolicy().getRetryOptions().setMaxRetryAttemptsOnThrottledRequests(0);
+ client = new CosmosClientBuilder().endpoint(AccountSettings.HOST).key(AccountSettings.MASTER_KEY)
+ .preferredRegions(preferredRegions).contentResponseOnWriteEnabled(true)
+ .consistencyLevel(ConsistencyLevel.SESSION).buildAsyncClient();
```
-4. Call the importAll API that generates random documents to bulk import into an Azure Cosmos container. You can configure the command-line configurations within the CmdLineConfiguration.java file.
- ```java
- BulkImportResponse bulkImportResponse = bulkExecutor.importAll(documents, false, true, null);
- ```
- The bulk import API accepts a collection of JSON-serialized documents and it has the following syntax, for more information, see the [API documentation](/java/api/com.microsoft.azure.documentdb.bulkexecutor):
+3. The sample creates an async database and container. It then creates multiple documents on which bulk operations will be executed. It adds these documents to a `Flux<Family>` reactive stream object:
+
+ ```java
+ createDatabaseIfNotExists();
+ createContainerIfNotExists();
+
+ Family andersenFamilyItem = Families.getAndersenFamilyItem();
+ Family wakefieldFamilyItem = Families.getWakefieldFamilyItem();
+ Family johnsonFamilyItem = Families.getJohnsonFamilyItem();
+ Family smithFamilyItem = Families.getSmithFamilyItem();
+
+ // Setup family items to create
+ Flux<Family> families = Flux.just(andersenFamilyItem, wakefieldFamilyItem, johnsonFamilyItem, smithFamilyItem);
+ ```
++
+4. The sample contains methods for bulk create, upsert, replace, and delete. In each method, we map the family documents in the `Flux<Family>` stream to multiple method calls in `CosmosBulkOperations`. These operations are added to another reactive stream object, `Flux<CosmosItemOperation>`. The stream is then passed to the `executeBulkOperations` method of the async `container` we created at the beginning, and the operations are executed in bulk. See the `bulkCreateItems` method below as an example:
```java
- public BulkImportResponse importAll(
- Collection<String> documents,
- boolean isUpsert,
- boolean disableAutomaticIdGeneration,
- Integer maxConcurrencyPerPartitionRange) throws DocumentClientException;
+ private void bulkCreateItems(Flux<Family> families) {
+ Flux<CosmosItemOperation> cosmosItemOperations =
+ families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
+ new PartitionKey(family.getLastName())));
+ container.executeBulkOperations(cosmosItemOperations).blockLast();
+ }
```
- The importAll method accepts the following parameters:
+5. There is also a class `BulkWriter.java` in the same directory as the sample application. This class demonstrates how to handle rate limiting (429) and timeout (408) errors that may occur during bulk execution, and retry those operations effectively. It is used in the `bulkCreateItemsSimple()` method in the application.
+
+ ```java
+ private void bulkCreateItemsSimple() {
+ Family andersenFamilyItem = Families.getAndersenFamilyItem();
+ Family wakefieldFamilyItem = Families.getWakefieldFamilyItem();
+ CosmosItemOperation andersonItemOperation = CosmosBulkOperations.getCreateItemOperation(andersenFamilyItem, new PartitionKey(andersenFamilyItem.getLastName()));
+ CosmosItemOperation wakeFieldItemOperation = CosmosBulkOperations.getCreateItemOperation(wakefieldFamilyItem, new PartitionKey(wakefieldFamilyItem.getLastName()));
+ BulkWriter bulkWriter = new BulkWriter(container);
+ bulkWriter.scheduleWrites(andersonItemOperation);
+ bulkWriter.scheduleWrites(wakeFieldItemOperation);
+ bulkWriter.execute().blockLast();
+ }
+ ```
+
+6. Additionally, there are bulk create methods in the sample that illustrate how to add response processing and set execution options:
+
+ ```java
+ private void bulkCreateItemsWithResponseProcessing(Flux<Family> families) {
+ Flux<CosmosItemOperation> cosmosItemOperations =
+ families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
+ new PartitionKey(family.getLastName())));
+ container.executeBulkOperations(cosmosItemOperations).flatMap(cosmosBulkOperationResponse -> {
+ CosmosBulkItemResponse cosmosBulkItemResponse = cosmosBulkOperationResponse.getResponse();
+ CosmosItemOperation cosmosItemOperation = cosmosBulkOperationResponse.getOperation();
+
+ if (cosmosBulkOperationResponse.getException() != null) {
+ logger.error("Bulk operation failed", cosmosBulkOperationResponse.getException());
+ } else if (cosmosBulkOperationResponse.getResponse() == null || !cosmosBulkOperationResponse.getResponse().isSuccessStatusCode()) {
+ logger.error("The operation for Item ID: [{}] Item PartitionKey Value: [{}] did not complete successfully with " +
+ "a" + " {} response code.", cosmosItemOperation.<Family>getItem().getId(),
+ cosmosItemOperation.<Family>getItem().getLastName(), cosmosBulkItemResponse.getStatusCode());
+ } else {
+ logger.info("Item ID: [{}] Item PartitionKey Value: [{}]", cosmosItemOperation.<Family>getItem().getId(),
+ cosmosItemOperation.<Family>getItem().getLastName());
+ logger.info("Status Code: {}", String.valueOf(cosmosBulkItemResponse.getStatusCode()));
+ logger.info("Request Charge: {}", String.valueOf(cosmosBulkItemResponse.getRequestCharge()));
+ }
+ return Mono.just(cosmosBulkItemResponse);
+ }).blockLast();
+ }
+
+ private void bulkCreateItemsWithExecutionOptions(Flux<Family> families) {
+ CosmosBulkExecutionOptions bulkExecutionOptions = new CosmosBulkExecutionOptions();
+ ImplementationBridgeHelpers
+ .CosmosBulkExecutionOptionsHelper
+ .getCosmosBulkExecutionOptionsAccessor()
+ .setMaxMicroBatchSize(bulkExecutionOptions, 10);
+ Flux<CosmosItemOperation> cosmosItemOperations =
+ families.map(family -> CosmosBulkOperations.getCreateItemOperation(family,
+ new PartitionKey(family.getLastName())));
+ container.executeBulkOperations(cosmosItemOperations, bulkExecutionOptions).blockLast();
+ }
+ ```
+
+ <!-- The importAll method accepts the following parameters:
|**Parameter** |**Description** | |||
The cloned repository contains two samples "bulkimport" and "bulkupdate" relativ
|List\<Exception> getErrors() | Gets the list of errors if some documents out of the batch supplied to the bulk import API call failed to get inserted. | |List\<Object> getBadInputDocuments() | The list of bad-format documents that were not successfully imported in the bulk import API call. User should fix the documents returned and retry import. Bad-formatted documents include documents whose ID value is not a string (null or any other datatype is considered invalid). |
-5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
+<!-- 5. After you have the bulk import application ready, build the command-line tool from source by using the 'mvn clean package' command. This command generates a jar file in the target folder:
```bash mvn clean package
You can update existing documents by using the BulkUpdateAsync API. In this exam
```bash java -Xmx12G -jar bulkexecutor-sample-1.0-SNAPSHOT-jar-with-dependencies.jar -serviceEndpoint **<Fill in your Azure Cosmos DB's endpoint>* -masterKey **<Fill in your Azure Cosmos DB's primary key>* -databaseId bulkUpdateDb -collectionId bulkUpdateColl -operation update -collectionThroughput 1000000 -partitionKey /profileid -maxConnectionPoolSize 6000 -numberOfDocumentsForEachCheckpoint 1000000 -numberOfCheckpoints 10
- ```
+ ``` -->
## Performance tips
Consider the following points for better performance when using bulk executor li
* Set the JVM's heap size to a large enough number to avoid any memory issues when handling a large number of documents. Suggested heap size: max(3 GB, 3 * sizeof(all documents passed to bulk import API in one batch)). * There is preprocessing overhead, so you will get higher throughput when performing bulk operations with a large number of documents. For example, to import 10,000,000 documents, running bulk import 10 times on 10 batches of 1,000,000 documents each is preferable to running bulk import 100 times on 100 batches of 100,000 documents each (see the batching sketch after this list).
-* It is recommended to instantiate a single DocumentBulkExecutor object for the entire application within a single virtual machine that corresponds to a specific Azure Cosmos container.
+* It is recommended to instantiate a single CosmosAsyncClient object for the entire application within a single virtual machine that corresponds to a specific Azure Cosmos container.
* A single bulk operation API execution consumes a large chunk of the client machine's CPU and network IO because it spawns multiple tasks internally, so avoid spawning multiple concurrent tasks within your application process that each execute bulk operation API calls. If a single bulk operation API call running on a single virtual machine is unable to consume your entire container's throughput (if your container's throughput > 1 million RU/s), it's preferable to create separate virtual machines to concurrently execute bulk operation API calls.
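
To illustrate the batching guidance above, here is a small, hypothetical Java sketch that splits a document set into a few large batches. The document type and batch size are stand-ins for the example; each resulting batch would then be mapped to item operations and passed to a single bulk call such as `executeBulkOperations`.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchingExample {
    // Splits a document set into batches of batchSize documents each, so the
    // import is issued as a few large bulk calls rather than many small ones.
    static <T> List<List<T>> toBatches(List<T> documents, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int start = 0; start < documents.size(); start += batchSize) {
            int end = Math.min(start + batchSize, documents.size());
            batches.add(documents.subList(start, end));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical document IDs standing in for real documents. At real scale,
        // prefer 10 batches of 1,000,000 documents over 100 batches of 100,000
        // documents, as described in the performance tips above.
        List<String> documents = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            documents.add("doc-" + i);
        }

        List<List<String>> batches = toBatches(documents, 1_000);
        System.out.println("Number of bulk calls: " + batches.size());
    }
}
```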
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-processor.md
ms.devlang: csharp Previously updated : 03/10/2022 Last updated : 04/05/2022
There are four main components of implementing the change feed processor:
1. **The compute instance**: A compute instance hosts the change feed processor to listen for changes. Depending on the platform, it could be represented by a VM, a Kubernetes pod, an Azure App Service instance, or an actual physical machine. It has a unique identifier referenced as the *instance name* throughout this article.
-1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
+1. **The delegate:** The delegate is the code that defines what you, the developer, want to do with each batch of changes that the change feed processor reads.
-To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges that contain items.
-There are two compute instances and the change feed processor is assigning different ranges of partition key values to each instance to maximize compute distribution, each instance has a unique and different name.
-Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container.
+To further understand how these four elements of change feed processor work together, let's look at an example in the following diagram. The monitored container stores documents and uses 'City' as the partition key. We see that the partition key values are distributed in ranges (each range representing a [physical partition](../partitioning-overview.md#physical-partitions)) that contain items.
+There are two compute instances, and the change feed processor is assigning different ranges to each instance to maximize compute distribution; each instance has a unique and different name.
+Each range is being read in parallel and its progress is maintained separately from other ranges in the lease container through a *lease* document. The combination of the leases represents the current state of the change feed processor.
:::image type="content" source="./media/change-feed-processor/changefeedprocessor.png" alt-text="Change feed processor example" border="false":::
The change feed processor lets you hook to relevant events in its [life cycle](#
## Deployment unit
-A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration but different instance name each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
+A single change feed processor deployment unit consists of one or more compute instances with the same `processorName` and lease container configuration but different instance name each. You can have many deployment units where each one has a different business flow for the changes and each deployment unit consisting of one or more instances.
For example, you might have one deployment unit that triggers an external API anytime there is a change in your container. Another deployment unit might move data, in real time, each time there is a change. When a change happens in your monitored container, all your deployment units will get notified.
Moreover, the change feed processor can dynamically adjust to containers scale d
## Change feed and provisioned throughput
-Change feed read operations on the monitored container will consume RUs.
+Change feed read operations on the monitored container will consume [request units](../request-units.md). Make sure your monitored container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors.
-Operations on the lease container consume RUs. The higher the number of instances using the same lease container, the higher the potential RU consumption will be. Remember to monitor your RU consumption on the leases container if you decide to scale and increment the number of instances.
+Operations on the lease container (updating and maintaining state) consume [request units](../request-units.md). The higher the number of instances using the same lease container, the higher the potential request unit consumption will be. Make sure your lease container is not experiencing [throttling](troubleshoot-request-rate-too-large.md); otherwise, you will experience delays in receiving change feed events on your processors. In cases where throttling is high, the processors might stop processing completely.
## Starting time
The change feed processor will be initialized and start reading changes from the
> [!NOTE] > These customization options only work to setup the starting point in time of the change feed processor. Once the leases container is initialized for the first time, changing them has no effect.
+## Sharing the lease container
+
+You can share the lease container across multiple [deployment units](#deployment-unit). In this case, each deployment unit would be listening to a different monitored container or have a different `processorName`. With this configuration, each deployment unit maintains an independent state on the lease container. Review the [request unit consumption on the lease container](#change-feed-and-provisioned-throughput) to make sure the provisioned throughput is enough for all the deployment units.
+ ## Where to host the change feed processor The change feed processor can be hosted in any platform that supports long running processes or tasks:
cosmos-db How To Use Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-use-go.md
+
+ Title: Use the Azure Table client library for Go
+description: Store structured data in the cloud using the Azure Table client library for Go.
++
+ms.devlang: golang
+ Last updated : 03/24/2022++++
+# How to use the Azure SDK for Go with Azure Table
+++
+In this article, you'll learn how to create, list, and delete Azure Tables and Table entities with the Azure SDK for Go.
+
+Azure Table allows you to store structured NoSQL data in the cloud by providing you with a key/attribute store with a schemaless design. Because Azure Table storage is schemaless, it's easy to adapt your data to the evolving needs of your applications. Access to Table storage data and its API is a fast and cost-effective solution for many applications.
+
+You can use Table storage or Azure Cosmos DB to store flexible datasets like user data for web applications, address books, device information, or other types of metadata your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.
+
+Follow this article to learn how to manage Azure Table storage using the Azure SDK for Go.
+
+## Prerequisites
+
+- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **Go installed**: Version 1.17 or [above](https://golang.org/dl/)
+- [Azure CLI](/cli/azure/install-azure-cli)
+
+## Set up your environment
+
+To follow along with this tutorial, you'll need an Azure resource group, a storage account, and a table resource. Run the following commands to set up your environment:
+
+1. Create an Azure resource group.
+
+ ```azurecli
+ az group create --name myResourceGroup --location eastus
+ ```
+
+2. Next create an Azure storage account for your new Azure Table.
+
+ ```azurecli
+ az storage account create --name <storageAccountName> --resource-group myResourceGroup --location eastus --sku Standard_LRS
+ ```
+
+3. Create a table resource.
+
+ ```azurecli
+ az storage table create --account-name <storageAccountName> --account-key 'storageKey' --name mytable
+ ```
+
+### Install packages
+
+You'll need two packages to manage Azure Table with Go: [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) and [aztables](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/data/aztables). The `azidentity` package provides you with a way to authenticate to Azure, and the `aztables` package gives you the ability to manage the Tables resource in Azure. Run the following Go commands to install these packages:
+
+```azurecli
+go get github.com/Azure/azure-sdk-for-go/sdk/data/aztables
+go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
+```
+
+To learn more about the ways to authenticate to Azure, check out [Azure authentication with the Azure SDK for Go](/azure/developer/go/azure-sdk-authentication).
++
+## Create the sample application
+
+Once you have the packages installed, you'll create a sample application that uses the Azure SDK for Go to manage Azure Table. Run the `go mod` command to create a new module named `azTableSample`.
+
+```azurecli
+go mod init azTableSample
+```
+
+Next, create a file called `main.go` and copy the following code into it:
+
+```go
+package main
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "os"
+
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+ "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+ "github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
+)
+
+type InventoryEntity struct {
+ aztables.Entity
+ Price float32
+ Inventory int32
+ ProductName string
+ OnSale bool
+}
+
+type PurchasedEntity struct {
+ aztables.Entity
+ Price float32
+ ProductName string
+ OnSale bool
+}
+
+func getClient() *aztables.Client {
+ accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT")
+ if !ok {
+ panic("AZURE_STORAGE_ACCOUNT environment variable not found")
+ }
+
+ tableName, ok := os.LookupEnv("AZURE_TABLE_NAME")
+ if !ok {
+ panic("AZURE_TABLE_NAME environment variable not found")
+ }
+
+ cred, err := azidentity.NewDefaultAzureCredential(nil)
+ if err != nil {
+ panic(err)
+ }
+ serviceURL := fmt.Sprintf("https://%s.table.core.windows.net/%s", accountName, tableName)
+ client, err := aztables.NewClient(serviceURL, cred, nil)
+ if err != nil {
+ panic(err)
+ }
+ return client
+}
+
+func createTable(client *aztables.Client) {
+ //TODO: Check access policy, Storage Blob Data Contributor role needed
+ _, err := client.Create(context.TODO(), nil)
+ if err != nil {
+ panic(err)
+ }
+}
+
+func addEntity(client *aztables.Client) {
+ myEntity := InventoryEntity{
+ Entity: aztables.Entity{
+ PartitionKey: "pk001",
+ RowKey: "rk001",
+ },
+ Price: 3.99,
+ Inventory: 20,
+ ProductName: "Markers",
+ OnSale: false,
+ }
+
+ marshalled, err := json.Marshal(myEntity)
+ if err != nil {
+ panic(err)
+ }
+
+ _, err = client.AddEntity(context.TODO(), marshalled, nil) // TODO: Check access policy, need Storage Table Data Contributor role
+ if err != nil {
+ panic(err)
+ }
+}
+
+func listEntities(client *aztables.Client) {
+ listPager := client.List(nil)
+ pageCount := 0
+ for listPager.More() {
+ response, err := listPager.NextPage(context.TODO())
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("There are %d entities in page #%d\n", len(response.Entities), pageCount)
+ pageCount += 1
+ }
+}
+
+func queryEntity(client *aztables.Client) {
+ filter := fmt.Sprintf("PartitionKey eq '%v' or RowKey eq '%v'", "pk001", "rk001")
+ options := &aztables.ListEntitiesOptions{
+ Filter: &filter,
+ Select: to.StringPtr("RowKey,Price,Inventory,ProductName,OnSale"),
+ Top: to.Int32Ptr(15),
+ }
+
+ pager := client.List(options)
+ for pager.More() {
+ resp, err := pager.NextPage(context.Background())
+ if err != nil {
+ panic(err)
+ }
+ for _, entity := range resp.Entities {
+ var myEntity PurchasedEntity
+ err = json.Unmarshal(entity, &myEntity)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println("Return custom type [PurchasedEntity]")
+ fmt.Printf("Price: %v; ProductName: %v; OnSale: %v\n", myEntity.Price, myEntity.ProductName, myEntity.OnSale)
+ }
+ }
+}
+
+func deleteEntity(client *aztables.Client) {
+ _, err := client.DeleteEntity(context.TODO(), "pk001", "rk001", nil)
+ if err != nil {
+ panic(err)
+ }
+}
+
+func deleteTable(client *aztables.Client) {
+ _, err := client.Delete(context.TODO(), nil)
+ if err != nil {
+ panic(err)
+ }
+}
+
+func main() {
+
+ fmt.Println("Authenticating...")
+ client := getClient()
+
+ fmt.Println("Creating a table...")
+ createTable(client)
+
+ fmt.Println("Adding an entity to the table...")
+ addEntity(client)
+
+ fmt.Println("Calculating all entities in the table...")
+ listEntities(client)
+
+ fmt.Println("Querying a specific entity...")
+ queryEntity(client)
+
+ fmt.Println("Deleting an entity...")
+ deleteEntity(client)
+
+ fmt.Println("Deleting a table...")
+ deleteTable(client)
+}
+
+```
+
+> [!IMPORTANT]
+> Ensure that the account you authenticated with has the proper access policy to manage your Azure storage account. To run the above code, your account needs at a minimum the Storage Blob Data Contributor role and the Storage Table Data Contributor role.
++
+## Code examples
+
+### Authenticate the client
+
+```go
+// Lookup environment variables
+accountName, ok := os.LookupEnv("AZURE_STORAGE_ACCOUNT")
+if !ok {
+ panic("AZURE_STORAGE_ACCOUNT environment variable not found")
+}
+
+tableName, ok := os.LookupEnv("AZURE_TABLE_NAME")
+if !ok {
+ panic("AZURE_TABLE_NAME environment variable not found")
+}
+
+// Create a credential
+cred, err := azidentity.NewDefaultAzureCredential(nil)
+if err != nil {
+ panic(err)
+}
+
+// Create a table client
+serviceURL := fmt.Sprintf("https://%s.table.core.windows.net/%s", accountName, tableName)
+client, err := aztables.NewClient(serviceURL, cred, nil)
+if err != nil {
+ panic(err)
+}
+```
+
+### Create a table
+
+```go
+// Create a table and discard the response
+_, err := client.Create(context.TODO(), nil)
+if err != nil {
+ panic(err)
+}
+```
+
+### Create an entity
+
+```go
+// Define the table entity as a custom type
+type InventoryEntity struct {
+ aztables.Entity
+ Price float32
+ Inventory int32
+ ProductName string
+ OnSale bool
+}
+
+// Define the entity values
+myEntity := InventoryEntity{
+ Entity: aztables.Entity{
+ PartitionKey: "pk001",
+ RowKey: "rk001",
+ },
+ Price: 3.99,
+ Inventory: 20,
+ ProductName: "Markers",
+ OnSale: false,
+}
+
+// Marshal the entity to JSON
+marshalled, err := json.Marshal(myEntity)
+if err != nil {
+ panic(err)
+}
+
+// Add the entity to the table
+_, err = client.AddEntity(context.TODO(), marshalled, nil) // needs Storage Table Data Contributor role
+if err != nil {
+ panic(err)
+}
+```
+
+### Get an entity
+
+```go
+// Define the new custom type
+type PurchasedEntity struct {
+ aztables.Entity
+ Price float32
+ ProductName string
+ OnSale bool
+}
+
+// Define the query filter and options
+filter := fmt.Sprintf("PartitionKey eq '%v' or RowKey eq '%v'", "pk001", "rk001")
+options := &aztables.ListEntitiesOptions{
+ Filter: &filter,
+ Select: to.StringPtr("RowKey,Price,Inventory,ProductName,OnSale"),
+ Top: to.Int32Ptr(15),
+}
+
+// Query the table for the entity
+pager := client.List(options)
+for pager.More() {
+ resp, err := pager.NextPage(context.Background())
+ if err != nil {
+ panic(err)
+ }
+ for _, entity := range resp.Entities {
+ var myEntity PurchasedEntity
+ err = json.Unmarshal(entity, &myEntity)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Println("Return custom type [PurchasedEntity]")
+ fmt.Printf("Price: %v; ProductName: %v; OnSale: %v\n", myEntity.Price, myEntity.ProductName, myEntity.OnSale)
+ }
+}
+```
+
+### Delete an entity
+
+```go
+_, err := client.DeleteEntity(context.TODO(), "pk001", "rk001", nil)
+if err != nil {
+ panic(err)
+}
+```
+
+### Delete a table
+
+```go
+_, err := client.Delete(context.TODO(), nil)
+if err != nil {
+ panic(err)
+}
+```
+
+## Run the code
+
+All that's left is to run the application. But before you do that, you need to set up your environment variables. Create two environment variables and set them to the appropriate values using the following commands:
+
+# [Bash](#tab/bash)
+
+```bash
+export AZURE_STORAGE_ACCOUNT=<YourStorageAccountName>
+export AZURE_TABLE_NAME=<YourAzureTableName>
+```
+
+# [PowerShell](#tab/powershell)
+
+```powershell
+$env:AZURE_STORAGE_ACCOUNT=<YourStorageAccountName>
+$env:AZURE_TABLE_NAME=<YourAzureTableName>
+```
+++
+Next, run the following `go run` command to run the app:
+
+```bash
+go run main.go
+```
+
+## Clean up resources
+
+Run the following command to delete the resource group and all its remaining resources:
+
+```azurecli
+az group delete --resource-group myResourceGroup
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Import table data to the Table API](table-import.md)
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/manage-automation.md
Title: Manage Azure costs with automation
description: This article explains how you can manage Azure costs with automation. Previously updated : 12/10/2021 Last updated : 04/05/2022
For modern customers with a Microsoft Customer Agreement, use the following call
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?startDate=2020-08-01&endDate=2020-08-05&$top=1000&api-version=2019-10-01 ```
+> [!NOTE]
+> The `$filter` parameter isn't supported by Microsoft Customer Agreements.
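+
+For reference, the usage details request shown above can be issued from any HTTP client. The following hypothetical Java sketch uses `java.net.http`; the scope and bearer token values are placeholders you must replace (for example, with your subscription scope and an Azure AD token), and the query string mirrors the example request.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class UsageDetailsClient {
+    public static void main(String[] args) throws Exception {
+        // Placeholders: supply your own scope (for example, a subscription or billing
+        // account scope) and a valid Azure AD bearer token.
+        String scope = "subscriptions/00000000-0000-0000-0000-000000000000";
+        String accessToken = "your-access-token";
+
+        String url = "https://management.azure.com/" + scope
+                + "/providers/Microsoft.Consumption/usageDetails"
+                + "?startDate=2020-08-01&endDate=2020-08-05&$top=1000&api-version=2019-10-01";
+
+        HttpRequest request = HttpRequest.newBuilder()
+                .uri(URI.create(url))
+                .header("Authorization", "Bearer " + accessToken)
+                .GET()
+                .build();
+
+        HttpResponse<String> response = HttpClient.newHttpClient()
+                .send(request, HttpResponse.BodyHandlers.ofString());
+
+        System.out.println("Status: " + response.statusCode());
+        System.out.println(response.body());
+    }
+}
+```
+
+The response body is JSON; for large result sets, the API typically returns a `nextLink` property that you can follow to page through results.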
+ ### Get amortized cost details If you need actual costs to show purchases as they're accrued, change the *metric* to `ActualCost` in the following request. To use amortized and actual costs, you must use the `2019-04-01-preview` version. The current API version works the same as the `2019-10-01` version, except for the new type/metric attribute and changed property names. If you have a Microsoft Customer Agreement, your filters are `startDate` and `endDate` in the following example.
data-factory Data Flow Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-tutorials.md
As updates are constantly made to the product, some features have added or diffe
[Delete rows in target when not present in source](https://www.youtube.com/watch?v=9i7qf1vczUw)
+[Incremental data loading with Azure Data Factory and Azure SQL DB](https://youtu.be/6tNWFErnGGU)
+
+[Transform Avro data from Event Hubs using Parse and Flatten](https://youtu.be/F2x7Eg-635o)
+ ## Data flow expressions [Date/Time expressions](https://www.youtube.com/watch?v=uboyCZ25r_E&feature=youtu.be&hd=1)
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Here are some of the metrics emitted by Azure Data Factory version 2.
| SSISPackageExecutionFailed | Failed SSIS package execution metrics | Count | Total | The total number of SSIS package executions that failed within a minute window. | | SSISPackageExecutionSucceeded | Succeeded SSIS package execution metrics | Count | Total | The total number of SSIS package executions that succeeded within a minute window. | | PipelineElapsedTimeRuns | Elapsed time pipeline runs metrics | Count | Total | Number of times, within a minute window, a pipeline runs longer than user-defined expected duration. [(See more.)](tutorial-operationalize-pipelines.md) |
+| IntegrationRuntimeAvailableMemory | Available memory for integration runtime | Byte | Total | The total number of bytes of available memory for the self-hosted integration runtime within a minute window. |
+| IntegrationRuntimeAvailableNodeNumber | Available nodes for integration runtime | Count | Total | The total number of nodes available for the self-hosted integration runtime within a minute window. |
+| IntegrationRuntimeCpuPercentage | CPU utilization for integration runtime | Percent | Total | The percentage of CPU utilization for the self-hosted integration runtime within a minute window. |
+| IntegrationRuntimeAverageTaskPickupDelay | Queue duration for integration runtime | Seconds | Total | The queue duration for the self-hosted integration runtime within a minute window. |
+| IntegrationRuntimeQueueLength | Queue length for integration runtime | Count | Total | The total queue length for the self-hosted integration runtime within a minute window. |
To access the metrics, complete the instructions in [Azure Monitor data platform](../azure-monitor/data-platform.md).
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td rowspan=5><b>Data Flow</b></td><td>ScriptLines and Parameterized Linked Service support added mapping data flows</td><td>It is now super-easy to detect changes to your data flow script in Git with ScriptLines in your data flow JSON definition. Parameterized Linked Services can now also be used inside your data flows for flexible generic connection patterns.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-mapping-data-flows-adds-scriptlines-and-link-service/ba-p/3249929#M589">Learn more</a></td></tr>
-<tr><td>Flowlets General Availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable ETL jobs to be composed of custom or common logic components.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr>
+<tr><td>Flowlets General Availability (GA)</td><td>Flowlets is now generally available to create reusable portions of data flow logic that you can share in other pipelines as inline transformations. Flowlets enable ETL jobs to be composed of custom or common logic components.<br><a href="concepts-data-flow-flowlet.md">Learn more</a></td></tr>
<tr><td>Change Feed connectors are available in 5 data flow source transformations</td><td>Change Feed connectors are available in data flow source transformations for Cosmos DB, Blob store, ADLS Gen1, ADLS Gen2, and CDM.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/flowlets-and-change-feed-now-ga-in-azure-data-factory/ba-p/3267450">Learn more</a></td></tr> <tr><td>Data Preview and Debug Improvements in Mapping Data Flows</td><td>A few new exciting features were added to data preview and the debug experience in Mapping Data Flows.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/data-preview-and-debug-improvements-in-mapping-data-flows/ba-p/3268254">Learn more</a></td></tr>
-<tr><td>SFTP connector for Mapping Data Flow</td><td>The SFTP connector is now available for Mapping Data Flows.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
+<tr><td>SFTP connector for Mapping Data Flow</td><td>SFTP connector is available for Mapping Data Flow as both source and sink.<br><a href="connector-sftp.md?tabs=data-factory#mapping-data-flow-properties">Learn more</a></td></tr>
<tr><td><b>Data Movement</b></td><td>Support Always Encrypted for SQL related connectors in Lookup Activity under Managed VNET</td><td>Always Encrypted is supported for SQL Server, Azure SQL DB, Azure SQL MI, Azure Synapse Analytics in Lookup Activity under Managed VNET.<br><a href="control-flow-lookup-activity.md">Learn more</a></td></tr>
defender-for-iot How To Troubleshoot The Sensor And On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-troubleshoot-the-sensor-and-on-premises-management-console.md
When signing into a preconfigured sensor for the first time, you'll need to perf
:::image type="content" source="media/how-to-create-and-manage-users/recover-password.png" alt-text=" Screenshot of the recover on-premises management console password option.":::
-1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded.
+1. Enter the unique identifier that you received on the **Password recovery** screen and select **Recover**. The `password_recovery.zip` file is downloaded. Do not extract or modify the zip file.
:::image type="content" source="media/how-to-create-and-manage-users/enter-identifier.png" alt-text="Screenshot of the Recover dialog box.":::
event-grid Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/concepts.md
Event Grid topic provides an endpoint where the source sends events. The publish
### System topics System topics are built-in topics provided by Azure services such as Azure Storage, Azure Event Hubs, and Azure Service Bus. You can create system topics in your Azure subscription and subscribe to them. For more information, see [Overview of system topics](system-topics.md).
-### Customer topics
+### Custom topics
Custom topics are application and third-party topics. When you create or are assigned access to a custom topic, you see that custom topic in your subscription. For more information, see [Custom topics](custom-topics.md). When designing your application, you have flexibility when deciding how many topics to create. For large solutions, create a custom topic for each category of related events. For example, consider an application that sends events related to modifying user accounts and processing orders. It's unlikely any event handler wants both categories of events. Create two custom topics and let event handlers subscribe to the one that interests them. For small solutions, you might prefer to send all events to a single topic. Event subscribers can filter for the event types they want. ### Partner topics
When you use a custom topic, events must always be published in an array. This c
## Next steps * For an introduction to Event Grid, see [About Event Grid](overview.md).
-* To quickly get started using Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
+* To quickly get started using Event Grid, see [Create and route custom events with Azure Event Grid](custom-event-quickstart.md).
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
To complete the remaining steps, make sure you have:
1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Select **All services** from the left navigation pane, then type in **Event Grid Partner Registrations** in the search bar, and select it.
-1. On the **Event Grid Partner Registrations** page, select **+ Add** on the toolbar.
+1. On the **Event Grid Partner Registrations** page, select **+ Create** on the command bar or the **Create Event Grid partner registrations** link on the page.
:::image type="content" source="./media/onboard-partner/add-partner-registration-link.png" alt-text="Add partner registration link"::: 1. On the **Create Partner Topic Type Registrations - Basics** page, enter the following information:
Before your users can subscribe to partner topics you create in their Azure subs
Similarly, before your user can use the partner destinations you create in their subscriptions, they'll have to activate partner destinations first. For details, see [Activate a partner destination](deliver-events-to-partner-destinations.md#activate-a-partner-destination). ## Next steps-- [Partner topics overview](./partner-events-overview.md)-- [Partner topics onboarding page](https://aka.ms/gridpartnerform)-- [Auth0 partner topic](auth0-overview.md)-- [How to use the Auth0 partner topic](auth0-how-to.md)+
+See the following articles for more details about the Partner Events feature:
+
+- [Partner Events overview for customers](partner-events-overview.md)
+- [Partner Events overview for partners](partner-events-overview-for-partners.md)
+- [Subscribe to partner events](subscribe-to-partner-events.md)
+- [Subscribe to Auth0 events](auth0-how-to.md)
+- [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
ms.devlang: csharp, javascript
-# Tutorial: Automate resizing uploaded images using Event Grid
+# Tutorial Step 2: Automate resizing uploaded images using Event Grid
[Azure Event Grid](overview.md) is an eventing service for the cloud. Event Grid enables you to create subscriptions to events raised by Azure services or third-party resources.
-This tutorial is part two of a series of Storage tutorials. It extends the [previous Storage tutorial][previous-tutorial] to add serverless automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables [Azure Functions](../azure-functions/functions-overview.md) to respond to [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) events and generate thumbnails of uploaded images. An event subscription is created against the Blob storage create event. When a blob is added to a specific Blob storage container, a function endpoint is called. Data passed to the function binding from Event Grid is used to access the blob and generate the thumbnail image.
+This tutorial extends the [Upload image data in the cloud with Azure Storage][previous-tutorial] tutorial to add serverless automatic thumbnail generation using Azure Event Grid and Azure Functions. Event Grid enables [Azure Functions](../azure-functions/functions-overview.md) to respond to [Azure Blob storage](../storage/blobs/storage-blobs-introduction.md) events and generate thumbnails of uploaded images. An event subscription is created against the Blob storage create event. When a blob is added to a specific Blob storage container, a function endpoint is called. Data passed to the function binding from Event Grid is used to access the blob and generate the thumbnail image.
You use the Azure CLI and the Azure portal to add the resizing functionality to an existing image upload app.
az functionapp deployment source config --name $functionapp \
-The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial you subscribe to blob-created events.
+The image resize function is triggered by HTTP requests sent to it from the Event Grid service. You tell Event Grid that you want to get these notifications at your function's URL by creating an event subscription. For this tutorial, you subscribe to blob-created events.
The data passed to the function from the Event Grid notification includes the URL of the blob. That URL is in turn passed to the input binding to obtain the uploaded image from Blob storage. The function generates a thumbnail image and writes the resulting stream to a separate container in Blob storage.
The function project code is deployed directly from the public sample repository
An event subscription indicates which provider-generated events you want sent to a specific endpoint. In this case, the endpoint is exposed by your function. Use the following steps to create an event subscription that sends notifications to your function in the Azure portal:
-1. In the [Azure portal](https://portal.azure.com), at the top of the page search for and select `Function App` and choose the function app that you just created. Select **Functions** and choose the **Thumbnail** function.
+1. In the [Azure portal](https://portal.azure.com), at the top of the page search for and select `Function App` and choose the function app that you created. Select **Functions** and choose the **Thumbnail** function.
:::image type="content" source="media/resize-images-on-storage-blob-upload-event/choose-thumbnail-function.png" alt-text="Choose the Thumbnail function in the portal":::
An event subscription indicates which provider-generated events you want sent to
1. Switch to the **Filters** tab, and do the following actions: 1. Select **Enable subject filtering** option.
- 1. For **Subject begins with**, enter the following value : **/blobServices/default/containers/images/**.
+ 1. For **Subject begins with**, enter the following value: **/blobServices/default/containers/images/**.
![Specify filter for the event subscription](./media/resize-images-on-storage-blob-upload-event/event-subscription-filter.png)
-1. Select **Create** to add the event subscription. This creates an event subscription that triggers the `Thumbnail` function when a blob is added to the `images` container. The function resizes the images and adds them to the `thumbnails` container.
+1. Select **Create** to add the event subscription. The subscription triggers the `Thumbnail` function when a blob is added to the `images` container, and the function resizes the images and adds them to the `thumbnails` container. (An equivalent Azure CLI command is sketched after this step.)
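For reference, a roughly equivalent Azure CLI sketch is shown below; the resource IDs and the subscription name `thumbnail-subscription` are placeholders, not values taken from the tutorial.

```azurecli
# Hypothetical resource IDs; replace with your storage account and Thumbnail function.
storageId="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
functionId="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app>/functions/Thumbnail"

# Subscribe the function to Blob Created events for blobs in the images container.
az eventgrid event-subscription create --name thumbnail-subscription \
    --source-resource-id $storageId \
    --endpoint-type azurefunction --endpoint $functionId \
    --included-event-types Microsoft.Storage.BlobCreated \
    --subject-begins-with /blobServices/default/containers/images/
```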
Now that the backend services are configured, you test the image resize functionality in the sample web app.
Advance to part three of the Storage tutorial series to learn how to secure acce
+ To learn more about Event Grid, see [An introduction to Azure Event Grid](overview.md). + To try another tutorial that features Azure Functions, see [Create a function that integrates with Azure Logic Apps](../azure-functions/functions-twitter-email.md).
-[previous-tutorial]: ../storage/blobs/storage-upload-process-images.md
+[previous-tutorial]: storage-upload-process-images.md
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
+
+ Title: Upload image data in the cloud with Azure Storage
+description: This tutorial creates a web app that stores and displays images from Azure storage. It's a prerequisite for an Event Grid tutorial that's linked at the end of this article.
+++++ Last updated : 04/04/2022++
+ms.devlang: csharp, javascript
+++
+# Step 1: Upload image data in the cloud with Azure Storage
+
+This tutorial is part one of a series. In this tutorial, you'll learn how to deploy a web app. The web app uses the Azure Blob Storage client library to upload images to a storage account. When you're finished, you'll have a web app that stores and displays images from Azure storage.
+
+# [.NET v12 SDK](#tab/dotnet)
++
+# [JavaScript v12 SDK](#tab/javascript)
+
+![Image resizer app in JavaScript]()
++++
+In part one of the series, you learn how to:
+
+> [!div class="checklist"]
+
+> - Create a storage account
+> - Create a container and set permissions
+> - Retrieve an access key
+> - Deploy a web app to Azure
+> - Configure app settings
+> - Interact with the web app
+
+## Prerequisites
+
+To complete this tutorial, you need an Azure subscription. Create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
++
+To install and use the CLI locally, run Azure CLI version 2.0.4 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+## Create a resource group
+
+The following example creates a resource group named `myResourceGroup`.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create a resource group with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+```powershell
+New-AzResourceGroup -Name myResourceGroup -Location southeastasia
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+```azurecli
+az group create --name myResourceGroup --location southeastasia
+```
+++
+## Create a storage account
+
+The sample uploads images to a blob container in an Azure storage account. A storage account provides a unique namespace to store and access your Azure storage data objects.
+
+> [!IMPORTANT]
+> In part 2 of the tutorial, you use Azure Event Grid with Blob storage. Make sure to create your storage account in an Azure region that supports Event Grid. For a list of supported regions, see [Azure products by region](https://azure.microsoft.com/global-infrastructure/services/?products=event-grid&regions=all).
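If you'd rather check region support from the CLI, the following sketch lists the locations registered for the Event Grid `topics` resource type. It's an approximation of the region list linked above, not an authoritative replacement for it.

```azurecli
# List regions where the Microsoft.EventGrid/topics resource type is available.
az provider show --namespace Microsoft.EventGrid \
    --query "resourceTypes[?resourceType=='topics'].locations"
```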
+
+In the following command, replace your own globally unique name for the Blob storage account where you see the `<blob_storage_account>` placeholder.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create a storage account in the resource group you created by using the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) command.
+
+```powershell
+$blobStorageAccount="<blob_storage_account>"
+
+New-AzStorageAccount -ResourceGroupName myResourceGroup -Name $blobStorageAccount -SkuName Standard_LRS -Location southeastasia -Kind StorageV2 -AccessTier Hot
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a storage account in the resource group you created by using the [az storage account create](/cli/azure/storage/account) command.
+
+```azurecli
+blobStorageAccount="<blob_storage_account>"
+
+az storage account create --name $blobStorageAccount --location southeastasia \
+ --resource-group myResourceGroup --sku Standard_LRS --kind StorageV2 --access-tier hot
+```
+++
+## Create Blob storage containers
+
+The app uses two containers in the Blob storage account. Containers are similar to folders and store blobs. The *images* container is where the app uploads full-resolution images. In a later part of the series, an Azure function app uploads resized image thumbnails to the *thumbnails* container.
+
+The *images* container's public access is set to `off`. The *thumbnails* container's public access is set to `container`. The `container` public access setting permits users who visit the web page to view the thumbnails.
+
+# [PowerShell](#tab/azure-powershell)
+
+Get the storage account key by using the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) command. Then, use this key to create two containers with the [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) command.
+
+```powershell
+$blobStorageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName myResourceGroup -Name $blobStorageAccount)[0].Value
+$blobStorageContext = New-AzStorageContext -StorageAccountName $blobStorageAccount -StorageAccountKey $blobStorageAccountKey
+
+New-AzStorageContainer -Name images -Context $blobStorageContext
+New-AzStorageContainer -Name thumbnails -Permission Container -Context $blobStorageContext
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Get the storage account key by using the [az storage account keys list](/cli/azure/storage/account/keys) command. Then, use this key to create two containers with the [az storage container create](/cli/azure/storage/container) command.
+
+```azurecli
+blobStorageAccountKey=$(az storage account keys list -g myResourceGroup \
+ -n $blobStorageAccount --query "[0].value" --output tsv)
+
+az storage container create --name images \
+ --account-name $blobStorageAccount \
+ --account-key $blobStorageAccountKey
+
+az storage container create --name thumbnails \
+ --account-name $blobStorageAccount \
+ --account-key $blobStorageAccountKey --public-access container
+```
+++
+Make a note of your Blob storage account name and key. The sample app uses these settings to connect to the storage account to upload the images.
+
+## Create an App Service plan
+
+An [App Service plan](../app-service/overview-hosting-plans.md) specifies the location, size, and features of the web server farm that hosts your app.
+
+The following example creates an App Service plan named `myAppServicePlan` in the **Free** pricing tier:
+
+# [PowerShell](#tab/azure-powershell)
+
+Create an App Service plan with the [New-AzAppServicePlan](/powershell/module/az.websites/new-azappserviceplan) command.
+
+```powershell
+New-AzAppServicePlan -ResourceGroupName myResourceGroup -Name myAppServicePlan -Tier "Free"
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create an App Service plan with the [az appservice plan create](/cli/azure/appservice/plan) command.
+
+```azurecli
+az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku Free
+```
+++
+## Create a web app
+
+The web app provides a hosting space for the sample app code that's deployed from the GitHub sample repository.
+
+In the following command, replace `<web_app>` with a unique name. Valid characters are `a-z`, `0-9`, and `-`. If `<web_app>` isn't unique, you get the error message: *Website with given name `<web_app>` already exists.* The default URL of the web app is `https://<web_app>.azurewebsites.net`.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create a [web app](../app-service/overview.md) in the `myAppServicePlan` App Service plan with the [New-AzWebApp](/powershell/module/az.websites/new-azwebapp) command.
+
+```powershell
+$webapp="<web_app>"
+
+New-AzWebApp -ResourceGroupName myResourceGroup -Name $webapp -AppServicePlan myAppServicePlan
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Create a [web app](../app-service/overview.md) in the `myAppServicePlan` App Service plan with the [az webapp create](/cli/azure/webapp) command.
+
+```azurecli
+webapp="<web_app>"
+
+az webapp create --name $webapp --resource-group myResourceGroup --plan myAppServicePlan
+```
+++
+## Deploy the sample app from the GitHub repository
+
+# [.NET v12 SDK](#tab/dotnet)
+
+App Service supports several ways to deploy content to a web app. In this tutorial, you deploy the web app from a [public GitHub sample repository](https://github.com/Azure-Samples/storage-blob-upload-from-webapp). Configure GitHub deployment to the web app with the [az webapp deployment source config](/cli/azure/webapp/deployment/source) command.
+
+The sample project contains an [ASP.NET MVC](https://www.asp.net/mvc) app. The app accepts an image, saves it to a storage account, and displays images from a thumbnail container. The web app uses the [Azure.Storage](/dotnet/api/azure.storage), [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs), and [Azure.Storage.Blobs.Models](/dotnet/api/azure.storage.blobs.models) namespaces to interact with the Azure Storage service.
+
+```azurecli
+az webapp deployment source config --name $webapp --resource-group myResourceGroup \
+ --branch master --manual-integration \
+ --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp
+```
+
+```powershell
+az webapp deployment source config --name $webapp --resource-group myResourceGroup `
+ --branch master --manual-integration `
+ --repo-url https://github.com/Azure-Samples/storage-blob-upload-from-webapp
+```
+
+# [JavaScript v12 SDK](#tab/javascript)
+
+App Service supports several ways to deploy content to a web app. In this tutorial, you deploy the web app from a [public GitHub sample repository](https://github.com/Azure-Samples/azure-sdk-for-js-storage-blob-stream-nodejs). Configure GitHub deployment to the web app with the [az webapp deployment source config](/cli/azure/webapp/deployment/source) command.
+
+```azurecli
+az webapp deployment source config --name $webapp --resource-group myResourceGroup \
+ --branch master --manual-integration \
+ --repo-url https://github.com/Azure-Samples/azure-sdk-for-js-storage-blob-stream-nodejs
+```
+
+```powershell
+az webapp deployment source config --name $webapp --resource-group myResourceGroup `
+ --branch master --manual-integration `
+ --repo-url https://github.com/Azure-Samples/azure-sdk-for-js-storage-blob-stream-nodejs
+```
+++
+## Configure web app settings
+
+# [.NET v12 SDK](#tab/dotnet)
+
+The sample web app uses the [Azure Storage APIs for .NET](/dotnet/api/overview/azure/storage) to upload images. Storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) or [New-AzStaticWebAppSetting](/powershell/module/az.websites/new-azstaticwebappsetting) command.
+
+```azurecli
+az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
+ --settings AzureStorageConfig__AccountName=$blobStorageAccount \
+ AzureStorageConfig__ImageContainer=images \
+ AzureStorageConfig__ThumbnailContainer=thumbnails \
+ AzureStorageConfig__AccountKey=$blobStorageAccountKey
+```
+
+```powershell
+az webapp config appsettings set --name $webapp --resource-group myResourceGroup `
+ --settings AzureStorageConfig__AccountName=$blobStorageAccount `
+ AzureStorageConfig__ImageContainer=images `
+ AzureStorageConfig__ThumbnailContainer=thumbnails `
+ AzureStorageConfig__AccountKey=$blobStorageAccountKey
+```
+
+# [JavaScript v12 SDK](#tab/javascript)
+
+The sample web app uses the [Azure Storage client library for JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/storage) to upload images. The storage account credentials are set in the app settings for the web app. Add app settings to the deployed app with the [az webapp config appsettings set](/cli/azure/webapp/config/appsettings) or [New-AzStaticWebAppSetting](/powershell/module/az.websites/new-azstaticwebappsetting) command.
+
+```azurecli
+az webapp config appsettings set --name $webapp --resource-group myResourceGroup \
+ --settings AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount \
+ AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey
+```
+
+```powershell
+az webapp config appsettings set --name $webapp --resource-group myResourceGroup `
+ --settings AZURE_STORAGE_ACCOUNT_NAME=$blobStorageAccount `
+ AZURE_STORAGE_ACCOUNT_ACCESS_KEY=$blobStorageAccountKey
+```
+++
+After you deploy and configure the web app, you can test the image upload functionality in the app.
+
+## Upload an image
+
+To test the web app, browse to the URL of your published app. The default URL of the web app is `https://<web_app>.azurewebsites.net`.
+
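If you don't remember the exact name, the following Azure CLI sketch (reusing the `$webapp` variable defined earlier) prints the host name and opens the site in a browser.

```azurecli
# Print the default host name of the web app.
az webapp show --name $webapp --resource-group myResourceGroup \
    --query defaultHostName --output tsv

# Open the site in your default browser (works when running the CLI locally).
az webapp browse --name $webapp --resource-group myResourceGroup
```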
+# [.NET v12 SDK](#tab/dotnet)
+
+Select the **Upload photos** region to specify and upload a file, or drag a file onto the region. The image disappears if successfully uploaded. The **Generated Thumbnails** section will remain empty until we test it later in this tutorial.
++
+In the sample code, the `UploadFileToStorage` task in the *Storagehelper.cs* file is used to upload the images to the *images* container within the storage account using the [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync) method. The following code sample contains the `UploadFileToStorage` task.
+
+```csharp
+public static async Task<bool> UploadFileToStorage(Stream fileStream, string fileName,
+ AzureStorageConfig _storageConfig)
+{
+ // Create a URI to the blob
+ Uri blobUri = new Uri("https://" +
+ _storageConfig.AccountName +
+ ".blob.core.windows.net/" +
+ _storageConfig.ImageContainer +
+ "/" + fileName);
+
+ // Create StorageSharedKeyCredentials object by reading
+ // the values from the configuration (appsettings.json)
+ StorageSharedKeyCredential storageCredentials =
+ new StorageSharedKeyCredential(_storageConfig.AccountName, _storageConfig.AccountKey);
+
+ // Create the blob client.
+ BlobClient blobClient = new BlobClient(blobUri, storageCredentials);
+
+ // Upload the file
+ await blobClient.UploadAsync(fileStream);
+
+ return await Task.FromResult(true);
+}
+```
+
+The following classes and methods are used in the preceding task:
+
+| Class | Method |
+|-|--|
+| [Uri](/dotnet/api/system.uri) | [Uri constructor](/dotnet/api/system.uri.-ctor) |
+| [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) | [StorageSharedKeyCredential(String, String) constructor](/dotnet/api/azure.storage.storagesharedkeycredential.-ctor) |
+| [BlobClient](/dotnet/api/azure.storage.blobs.blobclient) | [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync) |
+
+# [JavaScript v12 SDK](#tab/javascript)
+
+Select **Choose File** to select a file, then select **Upload Image**. The **Generated Thumbnails** section will remain empty until we test it later in this tutorial.
++
+In the sample code, the `post` route is responsible for uploading the image into a blob container. The route uses the modules to help process the upload:
+
+- [Multer](https://github.com/expressjs/multer) implements the upload strategy for the route handler.
+- [into-stream](https://github.com/sindresorhus/into-stream) converts the buffer into a stream as required by [uploadStream](/javascript/api/%40azure/storage-blob/blockblobclient#uploadstream-readable--number--number--blockblobuploadstreamoptions-).
+
+As the file is sent to the route, the contents of the file stay in memory until the file is uploaded to the blob container.
+
+> [!IMPORTANT]
+> Loading large files into memory may have a negative effect on your web app's performance. If you expect users to post large files, you may want to consider staging files on the web server file system and then scheduling uploads into Blob storage. Once the files are in Blob storage, you can remove them from the server file system.
+
+```javascript
+if (process.env.NODE_ENV !== 'production') {
+ require('dotenv').config();
+}
+
+const {
+ BlobServiceClient,
+ StorageSharedKeyCredential,
+ newPipeline
+} = require('@azure/storage-blob');
+
+const express = require('express');
+const router = express.Router();
+const containerName1 = 'thumbnails';
+const multer = require('multer');
+const inMemoryStorage = multer.memoryStorage();
+const uploadStrategy = multer({ storage: inMemoryStorage }).single('image');
+const getStream = require('into-stream');
+const containerName2 = 'images';
+const ONE_MEGABYTE = 1024 * 1024;
+const uploadOptions = { bufferSize: 4 * ONE_MEGABYTE, maxBuffers: 20 };
+
+const sharedKeyCredential = new StorageSharedKeyCredential(
+ process.env.AZURE_STORAGE_ACCOUNT_NAME,
+ process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY);
+const pipeline = newPipeline(sharedKeyCredential);
+
+const blobServiceClient = new BlobServiceClient(
+ `https://${process.env.AZURE_STORAGE_ACCOUNT_NAME}.blob.core.windows.net`,
+ pipeline
+);
+
+const getBlobName = originalName => {
+ // Use a random number to generate a unique file name,
+ // removing "0." from the start of the string.
+ const identifier = Math.random().toString().replace(/0\./, '');
+ return `${identifier}-${originalName}`;
+};
+
+router.get('/', async (req, res, next) => {
+
+ let viewData;
+
+ try {
+ const containerClient = blobServiceClient.getContainerClient(containerName1);
+ const listBlobsResponse = await containerClient.listBlobFlatSegment();
+
+ for await (const blob of listBlobsResponse.segment.blobItems) {
+ console.log(`Blob: ${blob.name}`);
+ }
+
+ viewData = {
+ Title: 'Home',
+ viewName: 'index',
+ accountName: process.env.AZURE_STORAGE_ACCOUNT_NAME,
+ containerName: containerName1
+ };
+
+ if (listBlobsResponse.segment.blobItems.length) {
+ viewData.thumbnails = listBlobsResponse.segment.blobItems;
+ }
+ } catch (err) {
+ viewData = {
+ Title: 'Error',
+ viewName: 'error',
+ message: 'There was an error contacting the blob storage container.',
+ error: err
+ };
+ res.status(500);
+ } finally {
+ res.render(viewData.viewName, viewData);
+ }
+});
+
+router.post('/', uploadStrategy, async (req, res) => {
+ const blobName = getBlobName(req.file.originalname);
+ const stream = getStream(req.file.buffer);
+  const containerClient = blobServiceClient.getContainerClient(containerName2);
+ const blockBlobClient = containerClient.getBlockBlobClient(blobName);
+
+ try {
+ await blockBlobClient.uploadStream(stream,
+ uploadOptions.bufferSize, uploadOptions.maxBuffers,
+ { blobHTTPHeaders: { blobContentType: "image/jpeg" } });
+ res.render('success', { message: 'File uploaded to Azure Blob Storage.' });
+ } catch (err) {
+ res.render('error', { message: err.message });
+ }
+});
+
+module.exports = router;
+```
+++
+## Verify the image is shown in the storage account
+
+Sign in to the [Azure portal](https://portal.azure.com). From the left menu, select **Storage accounts**, then select the name of your storage account. Select **Containers**, then select the **images** container.
+
+Verify the image is shown in the container.
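You can also verify the upload from the CLI; this sketch reuses the `$blobStorageAccount` and `$blobStorageAccountKey` variables set earlier in the tutorial.

```azurecli
# List the blobs in the images container to confirm the upload.
az storage blob list --container-name images \
    --account-name $blobStorageAccount \
    --account-key $blobStorageAccountKey \
    --output table
```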
++
+## Test thumbnail viewing
+
+To test thumbnail viewing, you'll upload an image to the **thumbnails** container and check whether the app can read it.
+
+Sign in to the [Azure portal](https://portal.azure.com). From the left menu, select **Storage accounts**, then select the name of your storage account. Select **Containers**, then select the **thumbnails** container. Select **Upload** to open the **Upload blob** pane.
+
+Choose a file with the file picker and select **Upload**.
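If you prefer the CLI to the portal for this step, a sketch like the following uploads a local file to the **thumbnails** container; the file name *sample-thumb.jpg* is a placeholder for whatever image you choose.

```azurecli
# Upload a test image to the thumbnails container (placeholder file name).
az storage blob upload --container-name thumbnails \
    --name sample-thumb.jpg --file ./sample-thumb.jpg \
    --account-name $blobStorageAccount \
    --account-key $blobStorageAccountKey
```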
+
+Navigate back to your app to verify that the image uploaded to the **thumbnails** container is visible.
+
+# [.NET v12 SDK](#tab/dotnet)
+
+![.NET image resizer app with new image displayed](media/storage-upload-process-images/image-resizer-app.png)
+
+# [JavaScript v12 SDK](#tab/javascript)
+
+![Node.js image resizer app with new image displayed](media/storage-upload-process-images/upload-app-nodejs-thumb.png)
+++
+In part two of the series, you automate thumbnail image creation so you don't need this image. In the **thumbnails** container, select the image you uploaded, and select **Delete** to remove the image.
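Equivalently, you can remove the test blob from the CLI; *sample-thumb.jpg* below is the placeholder name used in the earlier upload sketch.

```azurecli
# Delete the test image from the thumbnails container.
az storage blob delete --container-name thumbnails \
    --name sample-thumb.jpg \
    --account-name $blobStorageAccount \
    --account-key $blobStorageAccountKey
```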
+
+You can enable Content Delivery Network (CDN) to cache content from your Azure storage account. For more information, see [Integrate an Azure storage account with Azure CDN](../cdn/cdn-create-a-storage-account-with-cdn.md).
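As a rough sketch of what enabling CDN might look like from the CLI (the profile name and endpoint name here are made up for illustration; see the linked article for the supported steps):

```azurecli
# Create a CDN profile and an endpoint that fronts the blob endpoint of the storage account.
az cdn profile create --resource-group myResourceGroup \
    --name myCdnProfile --sku Standard_Microsoft

az cdn endpoint create --resource-group myResourceGroup \
    --profile-name myCdnProfile --name <unique-endpoint-name> \
    --origin $blobStorageAccount.blob.core.windows.net
```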
+
+## Next steps
+
+In part one of the series, you learned how to configure a web app to interact with storage.
+
+Go on to part two of the series to learn about using Event Grid to trigger an Azure function to resize an image.
+
+> [!div class="nextstepaction"]
+> [Use Event Grid to trigger an Azure Function to resize an uploaded image](resize-images-on-storage-blob-upload-event.md)
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Subscribing to the partner topic tells Event Grid where you want your partner ev
1. For **Filter to Event Types**, select the types of events that your subscription will receive. 1. For **Endpoint Type**, select an Azure service (Azure Function, Storage Queues, Event Hubs, Service Bus Queue, Service Bus Topic, Hybrid Connections, and so on), Web Hook, or Partner Destination. 1. Click the **Select an endpoint** link. In this example, let's use an Azure Event Hubs endpoint.
+
+ :::image type="content" source="./media/subscribe-to-partner-events/select-endpoint.png" lightbox="./media/subscribe-to-partner-events/select-endpoint.png" alt-text="Image showing the configuration of an endpoint for an event subscription.":::
1. On the **Select Event Hub** page, select configurations for the endpoint, and then select **Confirm Selection**.
- :::image type="content" source="./media/subscribe-to-partner-events/select-endpoint.png" lightbox="./media/subscribe-to-partner-events/select-endpoint.png" alt-text="Image showing the configuration of an endpoint for an event subscription.":::
+ :::image type="content" source="./media/subscribe-to-partner-events/select-event-hub.png" lightbox="./media/subscribe-to-partner-events/select-event-hub.png" alt-text="Image showing the configuration of an Event Hubs endpoint.":::
1. Now on the **Create Event Subscription** page, select **Create**. :::image type="content" source="./media/subscribe-to-partner-events/create-event-subscription.png" alt-text="Image showing the Create Event Subscription page with example configurations.":::
+## Next steps
+
+See the following articles for more details about the Partner Events feature:
+- [Partner Events overview for customers](partner-events-overview.md)
+- [Partner Events overview for partners](partner-events-overview-for-partners.md)
+- [Onboard as a partner](onboard-partner.md)
+- [Deliver events to partner destinations](deliver-events-to-partner-destinations.md)
event-hubs Event Hubs Go Get Started Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-go-get-started-send.md
Azure Event Hubs is a Big Data streaming platform and event ingestion service, c
This tutorial describes how to write Go applications to send events to or receive events from an event hub. > [!NOTE]
-> You can download this quickstart as a sample from the [GitHub](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/eventhubs), replace `EventHubConnectionString` and `EventHubName` strings with your event hub values, and run it. Alternatively, you can follow the steps in this tutorial to create your own.
+> You can download this quickstart as a sample from the [GitHub](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/services/eventhubs), replace `EventHubConnectionString` and `EventHubName` strings with your event hub values, and run it. Alternatively, you can follow the steps in this tutorial to create your own.
## Prerequisites
Congratulations! You have now sent messages to an event hub.
State such as leases on partitions and checkpoints in the event stream are shared between receivers using an Azure Storage container. You can create a storage account and container with the Go SDK, but you can also create one by following the instructions in [About Azure storage accounts](../storage/common/storage-account-create.md).
-Samples for creating Storage artifacts with the Go SDK are available in the [Go samples repo](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/master/storage) and in the sample corresponding to this tutorial.
+Samples for creating Storage artifacts with the Go SDK are available in the [Go samples repo](https://github.com/Azure-Samples/azure-sdk-for-go-samples/tree/main/services/storage) and in the sample corresponding to this tutorial.
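For reference, a minimal Azure CLI sketch for creating the checkpoint store described above is shown here; the account and resource group names are placeholders, not values from the tutorial.

```azurecli
# Create a storage account and a container to hold leases and checkpoints (placeholder names).
az storage account create --name <checkpoint-storage-account> \
    --resource-group <resource-group> --sku Standard_LRS

az storage container create --name checkpoints \
    --account-name <checkpoint-storage-account>
```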
### Go packages
expressroute Expressroute Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-global-reach.md
ExpressRoute Global Reach is designed to complement your service providerΓÇÖs WA
ExpressRoute Global Reach is supported in the following places. > [!NOTE]
-> To enable ExpressRoute Global Reach between [different geopolitical regions](expressroute-locations-providers.md#locations), your circuits must be **Premium SKU**.
+> * To enable ExpressRoute Global Reach between [different geopolitical regions](expressroute-locations-providers.md#locations), your circuits must be **Premium SKU**.
+> * IPv6 support for ExpressRoute Global Reach is now in Public Preview. See [Enable Global Reach](expressroute-howto-set-global-reach.md) to learn more.
* Australia * Canada
expressroute Expressroute Howto Set Global Reach Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-cli.md
# Configure ExpressRoute Global Reach by using the Azure CLI This article helps you configure Azure ExpressRoute Global Reach by using the Azure CLI. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).+
+> [!NOTE]
+> IPv6 support for ExpressRoute Global Reach is now in Public Preview. See [Enable Global Reach](expressroute-howto-set-global-reach.md) for steps to configure this feature using PowerShell.
Before you start configuration, complete the following requirements:
expressroute Expressroute Howto Set Global Reach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md
Enable connectivity between your on-premises networks. There are separate sets o
```azurepowershell-interactive Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix '__.__.__.__/29' ```
+
+ > [!NOTE]
+ > IPv6 support for ExpressRoute Global Reach is now in Public Preview. To add an IPv6 Global Reach connection, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
+
+ ```azurepowershell-interactive
+ Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix '__.__.__.__/125' -AddressPrefixType IPv6
+ ```
+ 3. Save the configuration on circuit 1 as follows: ```azurepowershell-interactive
If the two circuits are not in the same Azure subscription, you need authorizati
```azurepowershell-interactive Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering "circuit_2_private_peering_id" -AddressPrefix '__.__.__.__/29' -AuthorizationKey '########-####-####-####-############' ```
+
+ > [!NOTE]
+ > IPv6 support for ExpressRoute Global Reach is now in Public Preview. To add an IPv6 Global Reach connection, you must specify a /125 IPv6 subnet for *-AddressPrefix* and an *-AddressPrefixType* of *IPv6*.
+
+ ```azurepowershell-interactive
+ Add-AzExpressRouteCircuitConnectionConfig -Name 'Your_connection_name' -ExpressRouteCircuit $ckt_1 -PeerExpressRouteCircuitPeering $ckt_2.Peerings[0].Id -AddressPrefix '__.__.__.__/125' -AddressPrefixType IPv6 -AuthorizationKey '########-####-####-####-############'
+ ```
+
3. Save the configuration on circuit 1. ```azurepowershell-interactive
Remove-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -Expre
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt_1 ```
+> [!NOTE]
+> IPv6 support for ExpressRoute Global Reach is now in Public Preview. To delete an IPv6 Global Reach connection, you must specify an *-AddressPrefixType* of *IPv6* like in the following command.
+
+```azurepowershell-interactive
+$ckt_1 = Get-AzExpressRouteCircuit -Name "Your_circuit_1_name" -ResourceGroupName "Your_resource_group"
+Remove-AzExpressRouteCircuitConnectionConfig -Name "Your_connection_name" -ExpressRouteCircuit $ckt_1 -AddressPrefixType IPv6
+Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt_1
+```
+ You can run the Get operation to verify the status. After the previous operation is complete, you no longer have connectivity between your on-premises network through your ExpressRoute circuits.
hdinsight Hdinsight Analyze Twitter Data Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-analyze-twitter-data-linux.md
description: Learn how to use Apache Hive and Apache Hadoop on HDInsight to tran
Previously updated : 12/16/2019 Last updated : 04/05/2022 # Analyze Twitter data using Apache Hive and Apache Hadoop on HDInsight
hdinsight Hdinsight Hadoop Access Yarn App Logs Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-access-yarn-app-logs-linux.md
description: Learn how to access YARN application logs on a Linux-based HDInsigh
Previously updated : 04/23/2020 Last updated : 04/05/2022 # Access Apache Hadoop YARN application logs on Linux-based HDInsight
hdinsight Hdinsight Hadoop Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-add-storage.md
description: Learn how to add additional Azure Storage accounts to an existing H
Previously updated : 04/27/2020 Last updated : 04/05/2022 # Add additional storage accounts to HDInsight
To work around this problem:
## Next steps
-You've learned how to add additional storage accounts to an existing HDInsight cluster. For more information on script actions, see [Customize Linux-based HDInsight clusters using script action](hdinsight-hadoop-customize-cluster-linux.md)
+You've learned how to add additional storage accounts to an existing HDInsight cluster. For more information on script actions, see [Customize Linux-based HDInsight clusters using script action](hdinsight-hadoop-customize-cluster-linux.md)
hdinsight Apache Spark Analyze Application Insight Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-analyze-application-insight-logs.md
description: Learn how to export Application Insight logs to blob storage, and t
Previously updated : 12/17/2019 Last updated : 04/05/2022 # Analyze Application Insights telemetry logs with Apache Spark on HDInsight
hdinsight Apache Spark Custom Library Website Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-custom-library-website-log-analysis.md
description: This notebook demonstrates how to analyze log data using a custom l
Previously updated : 12/27/2019 Last updated : 04/05/2022 # Analyze website logs using a custom Python library with Apache Spark cluster on HDInsight
hdinsight Apache Spark Use With Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-use-with-data-lake-store.md
description: Run Apache Spark jobs to analyze data stored in Azure Data Lake Sto
Previously updated : 06/13/2019 Last updated : 04/05/2022 # Use HDInsight Spark cluster to analyze data in Data Lake Storage Gen1
If you created an HDInsight cluster with Data Lake Storage as additional storage
* [Create a standalone Scala application to run on Apache Spark cluster](apache-spark-create-standalone-application.md) * [Use HDInsight Tools in Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight Spark Linux cluster](apache-spark-intellij-tool-plugin.md) * [Use HDInsight Tools in Azure Toolkit for Eclipse to create Apache Spark applications for HDInsight Spark Linux cluster](apache-spark-eclipse-tool-plugin.md)
-* [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md)
+* [Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters](../hdinsight-hadoop-use-data-lake-storage-gen2.md)
iot-hub Iot Hub Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-customer-data-requests.md
If you use the Azure Monitor integration feature of the Azure IoT Hub service to
Tenant administrators can use the IoT devices blade of the Azure IoT Hub extension in the Azure portal to delete a device, which deletes the data associated with that device.
-It is also possible to perform delete operations for devices using REST APIs. For more information, see [Service - Delete Device](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-deletedevice).
+It is also possible to perform delete operations for devices using REST APIs. For more information, see [Service - Delete Device](/rest/api/iothub/service/devices/delete-identity).
## Exporting customer data Tenant administrators can utilize copy and paste within the IoT devices pane of the Azure IoT Hub extension in the Azure portal to export data associated with a device.
-It is also possible to perform export operations for devices using REST APIs. For more information, see [Service - Get Device](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-getdevice).
+It is also possible to perform export operations for devices using REST APIs. For more information, see [Service - Get Device](/rest/api/iothub/service/devices/get-identity).
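The same delete and export operations can also be scripted with the Azure CLI (IoT extension); the hub and device names below are placeholders.

```azurecli
# One-time setup: add the IoT extension to the Azure CLI.
az extension add --name azure-iot

# Export (view) a device identity, then delete it (placeholder names).
az iot hub device-identity show --hub-name <your-iot-hub> --device-id <device-id>
az iot hub device-identity delete --hub-name <your-iot-hub> --device-id <device-id>
```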
> [!NOTE] > When you use Microsoft's enterprise services, Microsoft generates some information, known as system-generated logs. Some Azure IoT Hub system-generated logs are not accessible or exportable by tenant administrators. These logs constitute factual actions conducted within the service and diagnostic data related to individual devices. ## Links to additional documentation
-Full documentation for Azure IoT Hub Service APIs is located at [IoT Hub Service APIs](/rest/api/iothub/service/configuration).
+Full documentation for Azure IoT Hub Service APIs is located at [IoT Hub Service APIs](/rest/api/iothub/service/configuration).
iot-hub Iot Hub Troubleshoot Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-connectivity.md
After you've created a diagnostic setting to route IoT Hub resource logs to Azur
Use the following problem resolution guides for help with the most common errors:
-* [400027 ConnectionForcefullyClosedOnNewConnection](iot-hub-troubleshoot-error-400027-connectionforcefullyclosedonnewconnection.md)
+* [400027 ConnectionForcefullyClosedOnNewConnection](troubleshoot-error-codes.md#400027-connectionforcefullyclosedonnewconnection)
* [404104 DeviceConnectionClosedRemotely](iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md)
AzureDiagnostics
As an IoT solutions developer or operator, you need to be aware of this behavior in order to interpret connect/disconnect events and related errors in logs. If you want to change the token lifespan or renewal behavior for devices, check to see whether the device implements a device twin setting or a device method that makes this possible.
-If you're monitoring device connections with Event Hub, make sure you build in a way of filtering out the periodic disconnects due to SAS token renewal. For example, do not trigger actions based on disconnects as long as the disconnect event is followed by a connect event within a certain time span.
+If you're monitoring device connections with Event Hubs, make sure you build in a way of filtering out the periodic disconnects due to SAS token renewal. For example, do not trigger actions based on disconnects as long as the disconnect event is followed by a connect event within a certain time span.
> [!NOTE] > IoT Hub only supports one active MQTT connection per device. Any new MQTT connection on behalf of the same device ID causes IoT Hub to drop the existing connection.
iot-hub Iot Hub Troubleshoot Error 400027 Connectionforcefullyclosedonnewconnection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-400027-connectionforcefullyclosedonnewconnection.md
- Title: Troubleshooting Azure IoT Hub error 400027 ConnectionForcefullyClosedOnNewConnection
-description: Understand how to fix error 400027 ConnectionForcefullyClosedOnNewConnection
----- Previously updated : 01/30/2020--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 400027 ConnectionForcefullyClosedOnNewConnection errors.
--
-# 400027 ConnectionForcefullyClosedOnNewConnection
-
-This article describes the causes and solutions for **400027 ConnectionForcefullyClosedOnNewConnection** errors.
-
-## Symptoms
-
-Your device get disconnected with **Communication_Error** as **ConnectionStatusChangeReason** using .NET SDK and MQTT transport type.
-
-Your device-to-cloud twin operation (such as read or patch reported properties) or direct method invocation fails with the error code **400027**.
-
-## Cause
-
-Another client created a new connection to IoT Hub using the same identity, so IoT Hub closed the previous connection. IoT Hub doesn't allow more than one client to connect using the same identity.
-
-## Solution
-
-Ensure that each client connects to IoT Hub using its own identity.
iot-hub Iot Hub Troubleshoot Error 401003 Iothubunauthorized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-401003-iothubunauthorized.md
- Title: Troubleshooting Azure IoT Hub error 401003 IoTHubUnauthorized
-description: Understand how to fix error 401003 IoTHubUnauthorized
----- Previously updated : 11/06/2020--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 401003 IoTHubUnauthorized errors.
--
-# 401003 IoTHubUnauthorized
-
-This article describes the causes and solutions for **401003 IoTHubUnauthorized** errors.
-
-## Symptoms
-
-### Symptom 1
-
-In logs, you see a pattern of devices disconnecting with **401003 IoTHubUnauthorized**, followed by **404104 DeviceConnectionClosedRemotely**, and then successfully connecting shortly after.
-
-### Symptom 2
-
-Requests to IoT Hub fail with one of the following error messages:
-
-* Authorization header missing
-* IotHub '\*' does not contain the specified device '\*'
-* Authorization rule '\*' does not allow access for '\*'
-* Authentication failed for this device, renew token or certificate and reconnect
-* Thumbprint does not match configuration: Thumbprint: SHA1Hash=\*, SHA2Hash=\*; Configuration: PrimaryThumbprint=\*, SecondaryThumbprint=\*
-* Principal user@example.com is not authorized for GET on /exampleOperation due to no assigned permissions
-
-## Cause
-
-### Cause 1
-
-For MQTT, some SDKs rely on IoT Hub to issue the disconnect when the SAS token expires to know when to refresh it. So,
-
-1. The SAS token expires
-1. IoT Hub notices the expiration, and disconnects the device with **401003 IoTHubUnauthorized**
-1. The device completes the disconnection with **404104 DeviceConnectionClosedRemotely**
-1. The IoT SDK generates a new SAS token
-1. The device reconnects with IoT Hub successfully
-
-### Cause 2
-
-IoT Hub couldn't authenticate the auth header, rule, or key. This could be due to any of the reasons cited in the symptoms.
-
-## Solution
-
-### Solution 1
-
-No action needed if using IoT SDK for connection using the device connection string. IoT SDK regenerates the new token to reconnect on SAS token expiration.
-
-The default token lifespan is 60 minutes across SDKs; however, for some SDKs the token lifespan and the token renewal threshold is configurable. Additionally, the errors generated when a device disconnects and reconnects on token renewal differs for each SDK. To learn more, and for information about how to determine which SDK your device is using in logs, see [MQTT device disconnect behavior with Azure IoT SDKs](iot-hub-troubleshoot-connectivity.md#mqtt-device-disconnect-behavior-with-azure-iot-sdks).
-
-For device developers, if the volume of errors is a concern, switch to the C SDK, which renews the SAS token before expiration. For AMQP, the SAS token can refresh without disconnection.
-
-### Solution 2
-
-In general, the error message presented should explain how to fix the error. If for some reason you don't have access to the error message detail, make sure:
--- The SAS or other security token you use isn't expired.-- For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Set up X.509 security in your Azure IoT hub](./tutorial-x509-scripts.md).-- For X.509 certificate thumbprint authentication, the thumbprint of the device certificate is registered with IoT Hub.-- The authorization credential is well formed for the protocol that you use. To learn more, see [Control access to IoT Hub](iot-hub-devguide-security.md).-- The authorization rule used has the permission for the operation requested.-- For the last error messages beginning with "principal...", this error can be resolved by assigning the correct level of Azure RBAC permission to the user. For example, an Owner on the IoT Hub can assign the "IoT Hub Data Owner" role, which gives all permissions. Try this role to resolve the lack of permission issue.-
-## Next steps
--- To make authenticating to IoT Hub easier, we recommend using [Azure IoT SDKs](iot-hub-devguide-sdks.md).-- For detail about authentication with IoT Hub, see [Control Access to IoT Hub](iot-hub-devguide-security.md).
iot-hub Iot Hub Troubleshoot Error 403002 Iothubquotaexceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-403002-iothubquotaexceeded.md
- Title: Troubleshooting Azure IoT Hub error 403002 IoTHubQuotaExceeded
-description: Understand how to fix error 403002 IoTHubQuotaExceeded
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 403002 IoTHubQuotaExceeded errors.
--
-# 403002 IoTHubQuotaExceeded
-
-This article describes the causes and solutions for **403002 IoTHubQuotaExceeded** errors.
-
-## Symptoms
-
-All requests to IoT Hub fail with the error **403002 IoTHubQuotaExceeded**. In Azure portal, the IoT hub device list doesn't load.
-
-## Cause
-
-The daily message quota for the IoT hub is exceeded.
-
-## Solution
-
-[Upgrade or increase the number of units on the IoT hub](iot-hub-upgrade.md) or wait for the next UTC day for the daily quota to refresh.
-
-## Next steps
-
-* To understand how operations are counted toward the quota, such as twin queries and direct methods, see [Understand IoT Hub pricing](iot-hub-devguide-pricing.md#charges-per-operation)
-* To set up monitoring for daily quota usage, set up an alert with the metric *Total number of messages used*. For step-by-step instructions, see [Set up metrics and alerts with IoT Hub](tutorial-use-metrics-and-diags.md#set-up-metrics)
iot-hub Iot Hub Troubleshoot Error 403004 Devicemaximumqueuedepthexceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-403004-devicemaximumqueuedepthexceeded.md
- Title: Troubleshooting Azure IoT Hub error 403004 DeviceMaximumQueueDepthExceeded
-description: Understand how to fix error 403004 DeviceMaximumQueueDepthExceeded
----- Previously updated : 01/30/2020--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 403004 DeviceMaximumQueueDepthExceeded errors.
--
-# 403004 DeviceMaximumQueueDepthExceeded
-
-This article describes the causes and solutions for **403004 DeviceMaximumQueueDepthExceeded** errors.
-
-## Symptoms
-
-When trying to send a cloud-to-device message, the request fails with the error **403004** or **DeviceMaximumQueueDepthExceeded**.
-
-## Cause
-
-The underlying cause is that the number of messages enqueued for the device exceeds the [queue limit (50)](./iot-hub-devguide-quotas-throttling.md#other-limits).
-
-The most likely reason that you're running into this limit is because you're using HTTPS to receive the message, which leads to continuous polling using `ReceiveAsync`, resulting in IoT Hub throttling the request.
-
-## Solution
-
-The supported pattern for cloud-to-device messages with HTTPS is intermittently connected devices that check for messages infrequently (less than every 25 minutes). To reduce the likelihood of running into the queue limit, switch to AMQP or MQTT for cloud-to-device messages.
-
-Alternatively, enhance device side logic to complete, reject, or abandon queued messages quickly, shorten the time to live, or consider sending fewer messages. See [C2D message time to live](./iot-hub-devguide-messages-c2d.md#message-expiration-time-to-live).
-
-Lastly, consider using the [Purge Queue API](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-deletedevice) to periodically clean up pending messages before the limit is reached.
iot-hub Iot Hub Troubleshoot Error 403006 Devicemaximumactivefileuploadlimitexceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-403006-devicemaximumactivefileuploadlimitexceeded.md
- Title: Troubleshooting Azure IoT Hub error 403006 DeviceMaximumActiveFileUploadLimitExceeded
-description: Understand how to fix error 403006 DeviceMaximumActiveFileUploadLimitExceeded
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 403006 DeviceMaximumActiveFileUploadLimitExceeded errors.
--
-# 403006 DeviceMaximumActiveFileUploadLimitExceeded
-
-This article describes the causes and solutions for **403006 DeviceMaximumActiveFileUploadLimitExceeded** errors.
-
-## Symptoms
-
-Your file upload request fails with the error code **403006** and a message "Number of active file upload requests cannot exceed 10".
-
-## Cause
-
-Each device client is limited to [10 concurrent file uploads](./iot-hub-devguide-quotas-throttling.md#other-limits).
-
-You can easily exceed the limit if your device doesn't notify IoT Hub when file uploads are completed. This problem is commonly caused by an unreliable device side network.
-
-## Solution
-
-Ensure the device can promptly [notify IoT Hub file upload completion](./iot-hub-devguide-file-upload.md#device-notify-iot-hub-of-a-completed-file-upload). Then, try [reducing the SAS token TTL for file upload configuration](iot-hub-configure-file-upload.md).
-
-## Next steps
-
-To learn more about file uploads, see [Upload files with IoT Hub](./iot-hub-devguide-file-upload.md) and [Configure IoT Hub file uploads using the Azure portal](./iot-hub-configure-file-upload.md).
iot-hub Iot Hub Troubleshoot Error 404001 Devicenotfound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-404001-devicenotfound.md
- Title: Troubleshooting Azure IoT Hub error 404001 DeviceNotFound
-description: Understand how to fix error 404001 DeviceNotFound
---- Previously updated : 07/07/2021-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 404001 DeviceNotFound errors.
--
-# 404001 DeviceNotFound
-
-This article describes the causes and solutions for **404001 DeviceNotFound** errors.
-
-## Symptoms
-
-During a cloud-to-device (C2D) communication, such as C2D message, twin update, or direct method, the operation fails with error **404001 DeviceNotFound**.
-
-## Cause
-
-The operation failed because the device cannot be found by IoT Hub. The device is either not registered or disabled.
-
-## Solution
-
-Register the device ID that you used, then try again.
iot-hub Iot Hub Troubleshoot Error 404103 Devicenotonline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-404103-devicenotonline.md
- Title: Troubleshooting Azure IoT Hub error 404103 DeviceNotOnline
-description: Understand how to fix error 404103 DeviceNotOnline
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 404103 DeviceNotOnline errors.
--
-# 404103 DeviceNotOnline
-
-This article describes the causes and solutions for **404103 DeviceNotOnline** errors.
-
-## Symptoms
-
-A direct method to a device fails with the error **404103 DeviceNotOnline** even if the device is online.
-
-## Cause
-
-If you know that the device is online and still get the error, it's likely because the direct method callback isn't registered on the device.
-
-## Solution
-
-To configure your device properly for direct method callbacks, see [Handle a direct method on a device](iot-hub-devguide-direct-methods.md#handle-a-direct-method-on-a-device).
iot-hub Iot Hub Troubleshoot Error 404104 Deviceconnectionclosedremotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md
- Title: Troubleshooting Azure IoT Hub error 404104 DeviceConnectionClosedRemotely
-description: Understand how to fix error 404104 DeviceConnectionClosedRemotely
----- Previously updated : 01/30/2020--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 404104 DeviceConnectionClosedRemotely errors.
--
-# 404104 DeviceConnectionClosedRemotely
-
-This article describes the causes and solutions for **404104 DeviceConnectionClosedRemotely** errors.
-
-## Symptoms
-
-### Symptom 1
-
-Devices disconnect at a regular interval (every 65 minutes, for example) and you see **404104 DeviceConnectionClosedRemotely** in IoT Hub resource logs. Sometimes, you also see **401003 IoTHubUnauthorized** and a successful device connection event less than a minute later.
-
-### Symptom 2
-
-Devices disconnect randomly, and you see **404104 DeviceConnectionClosedRemotely** in IoT Hub resource logs.
-
-### Symptom 3
-
-Many devices disconnect at once, you see a dip in the [Connected devices (connectedDeviceCount) metric](monitor-iot-hub-reference.md), and there are more **404104 DeviceConnectionClosedRemotely** and [500xxx Internal errors](iot-hub-troubleshoot-error-500xxx-internal-errors.md) in Azure Monitor Logs than usual.
-
-## Causes
-
-### Cause 1
-
-The [SAS token used to connect to IoT Hub](iot-hub-dev-guide-sas.md#security-tokens) expired, which causes IoT Hub to disconnect the device. The connection is re-established when the token is refreshed by the device. For example, [the SAS token expires every hour by default for C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md#connection-authentication), which can lead to regular disconnects.
-
-To learn more, see [401003 IoTHubUnauthorized cause](iot-hub-troubleshoot-error-401003-iothubunauthorized.md#cause-1).
-
-### Cause 2
-
-Some possibilities include:
--- The device lost underlying network connectivity longer than the [MQTT keep-alive](iot-hub-mqtt-support.md#default-keep-alive-timeout), resulting in a remote idle timeout. The MQTT keep-alive setting can be different per device.--- The device sent a TCP/IP-level reset but didn't send an application-level `MQTT DISCONNECT`. Basically, the device abruptly closed the underlying socket connection. Sometimes, this issue is caused by bugs in older versions of the Azure IoT SDK.--- The device side application crashed.-
-### Cause 3
-
-IoT Hub might be experiencing a transient issue. See [IoT Hub internal server error cause](iot-hub-troubleshoot-error-500xxx-internal-errors.md#cause).
-
-## Solutions
-
-### Solution 1
-
-See [401003 IoTHubUnauthorized solution 1](iot-hub-troubleshoot-error-401003-iothubunauthorized.md#solution-1)
-
-### Solution 2
--- Make sure the device has good connectivity to IoT Hub by [testing the connection](tutorial-connectivity.md). If the network is unreliable or intermittent, we don't recommend increasing the keep-alive value because it could result in detection (via Azure Monitor alerts, for example) taking longer. --- Use the latest versions of the [IoT SDKs](iot-hub-devguide-sdks.md).-
-### Solution 3
-
-See [solutions to IoT Hub internal server errors](iot-hub-troubleshoot-error-500xxx-internal-errors.md#solution).
-
-## Next steps
-
-We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](iot-hub-reliability-features-in-sdks.md)
iot-hub Iot Hub Troubleshoot Error 409001 Devicealreadyexists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-409001-devicealreadyexists.md
- Title: Troubleshooting Azure IoT Hub error 409001 DeviceAlreadyExists
-description: Understand how to fix error 409001 DeviceAlreadyExists
---- Previously updated : 07/07/2021-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 409001 DeviceAlreadyExists errors.
--
-# 409001 DeviceAlreadyExists
-
-This article describes the causes and solutions for **409001 DeviceAlreadyExists** errors.
-
-## Symptoms
-
-When trying to register a device in IoT Hub, the request fails with the error **409001 DeviceAlreadyExists**.
-
-## Cause
-
-There's already a device with the same device ID in the IoT hub.
-
-## Solution
-
-Use a different device ID and try again.
iot-hub Iot Hub Troubleshoot Error 409002 Linkcreationconflict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-409002-linkcreationconflict.md
- Title: Troubleshooting Azure IoT Hub error 409002 LinkCreationConflict
-description: Understand how to fix error 409002 LinkCreationConflict
---- Previously updated : 07/07/2021--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 409002 LinkCreationConflict errors.
--
-# 409002 LinkCreationConflict
-
-This article describes the causes and solutions for **409002 LinkCreationConflict** errors.
-
-## Symptoms
-
-You see the error **409002 LinkCreationConflict** in logs along with device disconnection or cloud-to-device message failure.
-
-<!-- When using AMQP? -->
-
-## Cause
-
-Generally, this error happens when IoT Hub detects a client has more than one connection. In fact, when a new connection request arrives for a device with an existing connection, IoT Hub closes the existing connection with this error.
-
-### Cause 1
-
-In the most common case, a separate issue (such as [404104 DeviceConnectionClosedRemotely](iot-hub-troubleshoot-error-404104-deviceconnectionclosedremotely.md)) causes the device to disconnect. The device tries to reestablish the connection immediately, but IoT Hub still considers the device connected. IoT Hub closes the previous connection and logs this error.
-
-### Cause 2
-
-Faulty device-side logic causes the device to establish the connection when one is already open.
-
-## Solution
-
-This error usually appears as a side effect of a different, transient issue, so look for other errors in the logs to troubleshoot further. Otherwise, make sure to issue a new connection request only if the connection drops.
iot-hub Iot Hub Troubleshoot Error 412002 Devicemessagelocklost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-412002-devicemessagelocklost.md
- Title: Troubleshooting Azure IoT Hub error 412002 DeviceMessageLockLost
-description: Understand how to fix error 412002 DeviceMessageLockLost
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 412002 DeviceMessageLockLost errors.
--
-# 412002 DeviceMessageLockLost
-
-This article describes the causes and solutions for **412002 DeviceMessageLockLost** errors.
-
-## Symptoms
-
-When trying to send a cloud-to-device message, the request fails with the error **412002 DeviceMessageLockLost**.
-
-## Cause
-
-When a device receives a cloud-to-device message from the queue (for example, using [`ReceiveAsync()`](/dotnet/api/microsoft.azure.devices.client.deviceclient.receiveasync)) the message is locked by IoT Hub for a lock timeout duration of one minute. If the device tries to complete the message after the lock timeout expires, IoT Hub throws this exception.
-
-## Solution
-
-If IoT Hub doesn't get the notification within the one-minute lock timeout duration, it sets the message back to *Enqueued* state. The device can attempt to receive the message again. To prevent the error from happening in the future, implement device side logic to complete the message within one minute of receiving the message. This one-minute time-out can't be changed.
iot-hub Iot Hub Troubleshoot Error 429001 Throttlingexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-429001-throttlingexception.md
- Title: Troubleshooting Azure IoT Hub error 429001 ThrottlingException
-description: Understand how to fix error 429001 ThrottlingException
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 429001 ThrottlingException errors.
--
-# 429001 ThrottlingException
-
-This article describes the causes and solutions for **429001 ThrottlingException** errors.
-
-## Symptoms
-
-Your requests to IoT Hub fail with the error **429001 ThrottlingException**.
-
-## Cause
-
-IoT Hub [throttling limits](./iot-hub-devguide-quotas-throttling.md) have been exceeded for the requested operation.
-
-## Solution
-
-Check if you're hitting the throttling limit by comparing your *Telemetry message send attempts* metric against the limits specified above. You can also check the *Number of throttling errors* metric. For information about these metrics, see [Device telemetry metrics](monitor-iot-hub-reference.md#device-telemetry-metrics). For information about how use metrics to help you monitor your IoT hub, see [Monitor IoT Hub](monitor-iot-hub.md).
-
-IoT Hub returns 429 ThrottlingException only after the limit has been violated for too long a period. This is done so that your messages aren't dropped if your IoT hub gets burst traffic. In the meantime, IoT Hub processes the messages at the operation throttle rate, which might be slow if there's too much traffic in the backlog. To learn more, see [IoT Hub traffic shaping](./iot-hub-devguide-quotas-throttling.md#traffic-shaping).
-
-## Next steps
-
-Consider [scaling up your IoT Hub](./iot-hub-scaling.md) if you're running into quota or throttling limits.
iot-hub Iot Hub Troubleshoot Error 500Xxx Internal Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-500xxx-internal-errors.md
- Title: Troubleshooting Azure IoT Hub 500xxx Internal errors
-description: Understand how to fix 500xxx Internal errors
----- Previously updated : 01/30/2020-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 500xxx Internal errors.
--
-# 500xxx Internal errors
-
-This article describes the causes and solutions for **500xxx Internal errors**.
-
-## Symptoms
-
-Your request to IoT Hub fails with an error that begins with 500 and/or some sort of "server error". Some possibilities are:
-
-* **500001 ServerError**: IoT Hub ran into a server-side issue.
-
-* **500008 GenericTimeout**: IoT Hub couldn't complete the connection request before timing out.
-
-* **ServiceUnavailable (no error code)**: IoT Hub encountered an internal error.
-
-* **InternalServerError (no error code)**: IoT Hub encountered an internal error.
-
-## Cause
-
-There can be a number of causes for a 500xxx error response. In all cases, the issue is most likely transient. While the IoT Hub team works hard to maintain [the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/), small subsets of IoT Hub nodes can occasionally experience transient faults. When your device tries to connect to a node that's having issues, you receive this error.
-
-## Solution
-
-To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](./iot-hub-reliability-features-in-sdks.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](./iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults). If the problem persists, check [Resource Health](./iot-hub-azure-service-health-integration.md#check-health-of-an-iot-hub-with-azure-resource-health) and [Azure Status](https://status.azure.com/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](./tutorial-manual-failover.md). If there are no known problems and the issue continues, [contact support](https://azure.microsoft.com/support/options/) for further investigation.
iot-hub Iot Hub Troubleshoot Error 503003 Partitionnotfound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-503003-partitionnotfound.md
- Title: Troubleshooting Azure IoT Hub error 503003 PartitionNotFound
-description: Understand how to fix error 503003 PartitionNotFound
---- Previously updated : 07/07/2021-
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 503003 PartitionNotFound errors.
--
-# 503003 PartitionNotFound
-
-This article describes the causes and solutions for **503003 PartitionNotFound** errors.
-
-## Symptoms
-
-Requests to IoT Hub fail with the error **503003 PartitionNotFound**.
-
-## Cause
-
-This error is internal to IoT Hub and is likely transient. See [IoT Hub internal server error cause](iot-hub-troubleshoot-error-500xxx-internal-errors.md#cause).
-
-## Solution
-
-See [solutions to IoT Hub internal server errors](iot-hub-troubleshoot-error-500xxx-internal-errors.md#solution).
iot-hub Iot Hub Troubleshoot Error 504101 Gatewaytimeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-troubleshoot-error-504101-gatewaytimeout.md
- Title: Troubleshooting Azure IoT Hub error 504101 GatewayTimeout
-description: Understand how to fix error 504101 GatewayTimeout
----- Previously updated : 01/30/2020--
-#Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 504101 GatewayTimeout errors.
--
-# 504101 GatewayTimeout
-
-This article describes the causes and solutions for **504101 GatewayTimeout** errors.
-
-## Symptoms
-
-When trying to invoke a direct method from IoT Hub to a device, the request fails with the error **504101 GatewayTimeout**.
-
-## Cause
-
-### Cause 1
-
-IoT Hub encountered an error and couldn't confirm if the direct method completed before timing out.
-
-### Cause 2
-
-When using an earlier version of the Azure IoT C# SDK (<1.19.0), the AMQP link between the device and IoT Hub can be dropped silently because of a bug.
-
-## Solution
-
-### Solution 1
-
-Issue a retry.
-
-### Solution 2
-
-Upgrade to the latest version of the Azure IOT C# SDK.
iot-hub Troubleshoot Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/troubleshoot-error-codes.md
+
+ Title: Troubleshooting Azure IoT Hub error codes
+description: Understand how to fix errors reported by Azure IoT Hub
+++++ Last updated : 04/04/2022++++
+# Understand and resolve Azure IoT Hub errors
+
+This article describes the causes and solutions for common error codes that you might encounter while using IoT Hub.
+
+## 400027 ConnectionForcefullyClosedOnNewConnection
+
+You may see the **400027** error if your device disconnects and reports **Communication_Error** as the **ConnectionStatusChangeReason** when using the .NET SDK and the MQTT transport type. Or, your device-to-cloud twin operation (such as read or patch reported properties) or direct method invocation fails with the error code **400027**.
+
+This error occurs when another client creates a new connection to IoT Hub using the same identity, so IoT Hub closes the previous connection. IoT Hub doesn't allow more than one client to connect using the same identity.
+
+To resolve this error, ensure that each client connects to IoT Hub using its own identity.
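As an illustration only, the following Azure CLI sketch gives each client its own device identity instead of sharing one. The hub and device names are placeholders, and the commands assume a recent `azure-iot` CLI extension.

```azurecli
# Placeholder names; requires the azure-iot CLI extension.
# Register a separate identity for each physical client.
az iot hub device-identity create --hub-name MyIoTHub --device-id sensor-001
az iot hub device-identity create --hub-name MyIoTHub --device-id sensor-002

# Each client then connects with its own connection string.
az iot hub device-identity connection-string show --hub-name MyIoTHub --device-id sensor-001
```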
+
+## 401003 IoTHubUnauthorized
+
+In logs, you may see a pattern of devices disconnecting with **401003 IoTHubUnauthorized**, followed by **404104 DeviceConnectionClosedRemotely**, and then successfully connecting shortly after.
+
+Or, requests to IoT Hub fail with one of the following error messages:
+
+* Authorization header missing
+* IotHub '\*' does not contain the specified device '\*'
+* Authorization rule '\*' does not allow access for '\*'
+* Authentication failed for this device, renew token or certificate and reconnect
+* Thumbprint does not match configuration: Thumbprint: SHA1Hash=\*, SHA2Hash=\*; Configuration: PrimaryThumbprint=\*, SecondaryThumbprint=\*
+* Principal user@example.com is not authorized for GET on /exampleOperation due to no assigned permissions
+
+This error occurs because, for MQTT, some SDKs rely on IoT Hub to issue the disconnect when the SAS token expires, which tells them when to refresh it. The typical sequence is:
+
+1. The SAS token expires
+1. IoT Hub notices the expiration, and disconnects the device with **401003 IoTHubUnauthorized**
+1. The device completes the disconnection with **404104 DeviceConnectionClosedRemotely**
+1. The IoT SDK generates a new SAS token
+1. The device reconnects with IoT Hub successfully
+
+Or, IoT Hub couldn't authenticate the auth header, rule, or key. This could be due to any of the reasons cited in the symptoms.
+
+To resolve this error, no action is needed if you use the IoT SDK to connect with the device connection string. The IoT SDK regenerates a new token to reconnect when the SAS token expires.
+
+The default token lifespan is 60 minutes across SDKs; however, for some SDKs the token lifespan and the token renewal threshold are configurable. Additionally, the errors generated when a device disconnects and reconnects on token renewal differ for each SDK. To learn more, and for information about how to determine which SDK your device is using in logs, see [MQTT device disconnect behavior with Azure IoT SDKs](iot-hub-troubleshoot-connectivity.md#mqtt-device-disconnect-behavior-with-azure-iot-sdks).
+
+For device developers, if the volume of errors is a concern, switch to the C SDK, which renews the SAS token before expiration. For AMQP, the SAS token can refresh without disconnection.
+
+In general, the error message presented should explain how to fix the error. If for some reason you don't have access to the error message detail, make sure:
+
+* The SAS or other security token you use isn't expired.
+* For X.509 certificate authentication, the device certificate or the CA certificate associated with the device isn't expired. To learn how to register X.509 CA certificates with IoT Hub, see [Set up X.509 security in your Azure IoT hub](tutorial-x509-scripts.md).
+* For X.509 certificate thumbprint authentication, the thumbprint of the device certificate is registered with IoT Hub.
+* The authorization credential is well formed for the protocol that you use. To learn more, see [Control access to IoT Hub](iot-hub-devguide-security.md).
+* The authorization rule used has the permission for the operation requested.
+* For error messages beginning with "Principal...", you can resolve the error by assigning the correct level of Azure RBAC permissions to the user. For example, an Owner on the IoT Hub can assign the "IoT Hub Data Owner" role, which grants all permissions. Try this role to resolve the permissions issue, as shown in the example that follows.
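A hedged Azure CLI sketch of that role assignment. The user, hub name, and resource group are placeholders, and the role name is the one cited in the bullet above.

```azurecli
# Placeholder names; assigns the data-plane role at the scope of one IoT hub.
HUB_ID=$(az iot hub show --name MyIoTHub --query id --output tsv)
az role assignment create --assignee "user@example.com" --role "IoT Hub Data Owner" --scope $HUB_ID
```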
+
+## 403002 IoTHubQuotaExceeded
+
+You may see requests to IoT Hub fail with the error **403002 IoTHubQuotaExceeded**. Also, in the Azure portal, the IoT hub device list doesn't load.
+
+This error occurs when the daily message quota for the IoT hub is exceeded.
+
+To resolve this error:
+
+* [Upgrade or increase the number of units on the IoT hub](iot-hub-upgrade.md) or wait for the next UTC day for the daily quota to refresh.
+* To understand how operations are counted toward the quota, such as twin queries and direct methods, see [Understand IoT Hub pricing](iot-hub-devguide-pricing.md#charges-per-operation).
+* To set up monitoring for daily quota usage, set up an alert with the metric *Total number of messages used*. For step-by-step instructions, see [Set up metrics and alerts with IoT Hub](tutorial-use-metrics-and-diags.md#set-up-metrics).
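As a rough sketch of checking usage and adding a unit from the CLI: the hub name is a placeholder, the metric ID `dailyMessageQuotaUsed` is assumed to back the *Total number of messages used* metric, and generic `--set` arguments are assumed to be supported by `az iot hub update`.

```azurecli
# Placeholder hub name; compare today's usage against the daily quota.
HUB_ID=$(az iot hub show --name MyIoTHub --query id --output tsv)
az monitor metrics list --resource $HUB_ID --metric dailyMessageQuotaUsed --interval PT1H

# Add a unit to raise the daily quota (assumes generic --set is accepted).
az iot hub update --name MyIoTHub --set sku.capacity=2
```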
+
+## 403004 DeviceMaximumQueueDepthExceeded
+
+When trying to send a cloud-to-device message, you may see that the request fails with the error **403004** or **DeviceMaximumQueueDepthExceeded**.
+
+The underlying cause of this error is that the number of messages enqueued for the device exceeds the [queue limit](iot-hub-devguide-quotas-throttling.md#other-limits).
+
+The most likely reason that you're running into this limit is because you're using HTTPS to receive the message, which leads to continuous polling using `ReceiveAsync`, resulting in IoT Hub throttling the request.
+
+The supported pattern for cloud-to-device messages with HTTPS is intermittently connected devices that check for messages infrequently (less than every 25 minutes). To reduce the likelihood of running into the queue limit, switch to AMQP or MQTT for cloud-to-device messages.
+
+Alternatively, enhance device side logic to complete, reject, or abandon queued messages quickly, shorten the time to live, or consider sending fewer messages. See [C2D message time to live](./iot-hub-devguide-messages-c2d.md#message-expiration-time-to-live).
+
+Lastly, consider using the [Purge Queue API](/azure/iot-hub/iot-c-sdk-ref/iothub-registrymanager-h/iothubregistrymanager-deletedevice) to periodically clean up pending messages before the limit is reached.
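If you have the `azure-iot` CLI extension, the following sketch (placeholder names) drains the pending cloud-to-device queue for a single device, which is one way to exercise the purge behavior described above during testing.

```azurecli
# Placeholder names; requires the azure-iot CLI extension.
az iot device c2d-message purge --hub-name MyIoTHub --device-id sensor-001
```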
+
+## 403006 DeviceMaximumActiveFileUploadLimitExceeded
+
+You may see that your file upload request fails with the error code **403006** and a message "Number of active file upload requests cannot exceed 10".
+
+This error occurs because each device client is limited to [10 concurrent file uploads](iot-hub-devguide-quotas-throttling.md#other-limits). You can easily exceed the limit if your device doesn't notify IoT Hub when file uploads are completed. This problem is commonly caused by an unreliable device-side network.
+
+To resolve this error, ensure that the device can promptly [notify IoT Hub file upload completion](iot-hub-devguide-file-upload.md#device-notify-iot-hub-of-a-completed-file-upload). Then, try [reducing the SAS token TTL for file upload configuration](iot-hub-configure-file-upload.md).
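A hedged example of shortening the file upload SAS token TTL from the CLI, assuming your CLI version exposes the `--fileupload-sas-ttl` parameter (value in hours; the hub name is a placeholder).

```azurecli
# Placeholder hub name; shortens the SAS token used for file uploads to 1 hour.
az iot hub update --name MyIoTHub --fileupload-sas-ttl 1
```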
+
+## 404001 DeviceNotFound
+
+During a cloud-to-device (C2D) communication, such as C2D message, twin update, or direct method, you may see that the operation fails with error **404001 DeviceNotFound**.
+
+The operation failed because IoT Hub can't find the device. The device either isn't registered or is disabled.
+
+To resolve this error, register the device ID that you used, then try again.
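For example (placeholder names; the `azure-iot` CLI extension is assumed), you can confirm whether the device exists and is enabled, and register it if it's missing.

```azurecli
# Placeholder names; requires the azure-iot CLI extension.
# Check whether the device exists and whether it's enabled.
az iot hub device-identity show --hub-name MyIoTHub --device-id sensor-001 --query status

# Register the device if it isn't found.
az iot hub device-identity create --hub-name MyIoTHub --device-id sensor-001
```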
+
+## 404103 DeviceNotOnline
+
+You may see that a direct method to a device fails with the error **404103 DeviceNotOnline** even if the device is online.
+
+If you know that the device is online and still get the error, then the error likely occurred because the direct method callback isn't registered on the device.
+
+To configure your device properly for direct method callbacks, see [Handle a direct method on a device](iot-hub-devguide-direct-methods.md#handle-a-direct-method-on-a-device).
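After the callback is registered on the device, you can verify it with a test invocation. The method name below is hypothetical and must be implemented by the device; the commands assume the `azure-iot` CLI extension.

```azurecli
# Placeholder names; "reboot" is a hypothetical method handled by the device code.
az iot hub invoke-device-method --hub-name MyIoTHub --device-id sensor-001 \
  --method-name reboot --method-payload '{}'
```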
+
+## 404104 DeviceConnectionClosedRemotely
+
+Devices may disconnect at a regular interval (every 65 minutes, for example), and you see **404104 DeviceConnectionClosedRemotely** in IoT Hub resource logs. Sometimes, you also see **401003 IoTHubUnauthorized** and a successful device connection event less than a minute later.
+
+Or, devices disconnect randomly, and you see **404104 DeviceConnectionClosedRemotely** in IoT Hub resource logs.
+
+Or, many devices disconnect at once, you see a dip in the [Connected devices (connectedDeviceCount) metric](monitor-iot-hub-reference.md), and there are more **404104 DeviceConnectionClosedRemotely** and [500xxx Internal errors](#500xxx-internal-errors) in Azure Monitor Logs than usual.
+
+This error can occur because the [SAS token used to connect to IoT Hub](iot-hub-dev-guide-sas.md#security-tokens) expired, which causes IoT Hub to disconnect the device. The connection is re-established when the token is refreshed by the device. For example, [the SAS token expires every hour by default for C SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/connection_and_messaging_reliability.md#connection-authentication), which can lead to regular disconnects. To learn more, see [401003 IoTHubUnauthorized](#401003-iothubunauthorized).
+
+Some other possibilities include:
+
+* The device lost underlying network connectivity longer than the [MQTT keep-alive](iot-hub-mqtt-support.md#default-keep-alive-timeout), resulting in a remote idle timeout. The MQTT keep-alive setting can be different per device.
+* The device sent a TCP/IP-level reset but didn't send an application-level `MQTT DISCONNECT`. Basically, the device abruptly closed the underlying socket connection. Sometimes, this issue is caused by bugs in older versions of the Azure IoT SDK.
+* The device side application crashed.
+
+Or, IoT Hub might be experiencing a transient issue. See [IoT Hub internal server error](#500xxx-internal-errors).
+
+To resolve this error:
+
+* See the guidance for [error 401003 IoTHubUnauthorized](#401003-iothubunauthorized).
+* Make sure the device has good connectivity to IoT Hub by [testing the connection](tutorial-connectivity.md). If the network is unreliable or intermittent, we don't recommend increasing the keep-alive value because it could result in detection (via Azure Monitor alerts, for example) taking longer.
+* Use the latest versions of the [IoT SDKs](iot-hub-devguide-sdks.md).
+* See the guidance for [IoT Hub internal server errors](#500xxx-internal-errors).
+
+We recommend using Azure IoT device SDKs to manage connections reliably. To learn more, see [Manage connectivity and reliable messaging by using Azure IoT Hub device SDKs](iot-hub-reliability-features-in-sdks.md).
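Separately, to see the disconnect codes described in the symptoms above, the hub's **Connections** resource log category must be routed to a destination such as a Log Analytics workspace. A sketch with placeholder names and resource IDs:

```azurecli
# Placeholder names and IDs; routes IoT Hub connection logs to a Log Analytics workspace.
HUB_ID=$(az iot hub show --name MyIoTHub --query id --output tsv)
az monitor diagnostic-settings create --name iot-connection-logs --resource $HUB_ID \
  --logs '[{"category":"Connections","enabled":true}]' \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```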
+
+## 409001 DeviceAlreadyExists
+
+When trying to register a device in IoT Hub, you may see that the request fails with the error **409001 DeviceAlreadyExists**.
+
+This error occurs because there's already a device with the same device ID in the IoT hub.
+
+To resolve this error, use a different device ID and try again.
+
+## 409002 LinkCreationConflict
+
+You may see the error **409002 LinkCreationConflict** in logs along with device disconnection or cloud-to-device message failure.
+
+<!-- When using AMQP? -->
+
+Generally, this error happens when IoT Hub detects that a client has more than one connection. When a new connection request arrives for a device that has an existing connection, IoT Hub closes the existing connection with this error.
+
+In the most common case, a separate issue (such as [404104 DeviceConnectionClosedRemotely](#404104-deviceconnectionclosedremotely)) causes the device to disconnect. The device tries to reestablish the connection immediately, but IoT Hub still considers the device connected. IoT Hub closes the previous connection and logs this error.
+
+Or, faulty device-side logic causes the device to establish the connection when one is already open.
+
+Because this error usually appears as a side effect of a different, transient issue, look for other errors in the logs to troubleshoot further. Otherwise, make sure to issue a new connection request only if the connection drops.
+
+## 412002 DeviceMessageLockLost
+
+When trying to send a cloud-to-device message, you may see that the request fails with the error **412002 DeviceMessageLockLost**.
+
+This error occurs because, when a device receives a cloud-to-device message from the queue (for example, by using [`ReceiveAsync()`](/dotnet/api/microsoft.azure.devices.client.deviceclient.receiveasync)), IoT Hub locks the message for a lock timeout duration of one minute. If the device tries to complete the message after the lock timeout expires, IoT Hub throws this exception.
+
+If IoT Hub doesn't get the notification within the one-minute lock timeout duration, it sets the message back to *Enqueued* state. The device can attempt to receive the message again. To prevent the error from happening in the future, implement device side logic to complete the message within one minute of receiving the message. This one-minute time-out can't be changed.
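For testing, the receive/complete lifecycle can also be exercised from the CLI. This is a sketch that assumes the `azure-iot` CLI extension; names are placeholders, and the `--etag` value comes from the receive output.

```azurecli
# Placeholder names; requires the azure-iot CLI extension.
# Receive one pending C2D message; note the etag in the output.
az iot device c2d-message receive --hub-name MyIoTHub --device-id sensor-001

# Complete it within one minute, before the message lock expires.
az iot device c2d-message complete --hub-name MyIoTHub --device-id sensor-001 --etag <etag-from-receive>
```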
+
+## 429001 ThrottlingException
+
+You may see that your requests to IoT Hub fail with the error **429001 ThrottlingException**.
+
+This error occurs when IoT Hub [throttling limits](iot-hub-devguide-quotas-throttling.md) have been exceeded for the requested operation.
+
+To resolve this error, check whether you're hitting the throttling limit by comparing your *Telemetry message send attempts* metric against the limits in the throttling documentation linked above. You can also check the *Number of throttling errors* metric. For information about these metrics, see [Device telemetry metrics](monitor-iot-hub-reference.md#device-telemetry-metrics). For information about how to use metrics to help you monitor your IoT hub, see [Monitor IoT Hub](monitor-iot-hub.md).
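A hedged CLI example of pulling those two metrics. The hub name is a placeholder, and the metric IDs `d2c.telemetry.ingress.allProtocol` and `d2c.telemetry.ingress.sendThrottle` are assumed to correspond to *Telemetry message send attempts* and *Number of throttling errors*.

```azurecli
# Placeholder hub name; metric IDs are assumptions noted above.
HUB_ID=$(az iot hub show --name MyIoTHub --query id --output tsv)
az monitor metrics list --resource $HUB_ID --metric "d2c.telemetry.ingress.allProtocol" --interval PT1H
az monitor metrics list --resource $HUB_ID --metric "d2c.telemetry.ingress.sendThrottle" --interval PT1H
```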
+
+IoT Hub returns 429 ThrottlingException only after the limit has been violated for too long a period. This is done so that your messages aren't dropped if your IoT hub gets burst traffic. In the meantime, IoT Hub processes the messages at the operation throttle rate, which might be slow if there's too much traffic in the backlog. To learn more, see [IoT Hub traffic shaping](iot-hub-devguide-quotas-throttling.md#traffic-shaping).
+
+Consider [scaling up your IoT Hub](iot-hub-scaling.md) if you're running into quota or throttling limits.
+
+## 500xxx Internal errors
+
+You may see that your request to IoT Hub fails with an error that begins with 500 and/or some sort of "server error". Some possibilities are:
+
+* **500001 ServerError**: IoT Hub ran into a server-side issue.
+
+* **500008 GenericTimeout**: IoT Hub couldn't complete the connection request before timing out.
+
+* **ServiceUnavailable (no error code)**: IoT Hub encountered an internal error.
+
+* **InternalServerError (no error code)**: IoT Hub encountered an internal error.
+
+There can be a number of causes for a 500xxx error response. In all cases, the issue is most likely transient. While the IoT Hub team works hard to maintain [the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/), small subsets of IoT Hub nodes can occasionally experience transient faults. When your device tries to connect to a node that's having issues, you receive this error.
+
+To mitigate 500xxx errors, issue a retry from the device. To [automatically manage retries](iot-hub-reliability-features-in-sdks.md#connection-and-retry), make sure you use the latest version of the [Azure IoT SDKs](iot-hub-devguide-sdks.md). For best practice on transient fault handling and retries, see [Transient fault handling](/azure/architecture/best-practices/transient-faults).
+
+If the problem persists, check [Resource Health](iot-hub-azure-service-health-integration.md#check-health-of-an-iot-hub-with-azure-resource-health) and [Azure Status](https://status.azure.com/) to see if IoT Hub has a known problem. You can also use the [manual failover feature](tutorial-manual-failover.md).
+
+If there are no known problems and the issue continues, [contact support](https://azure.microsoft.com/support/options/) for further investigation.
+
+## 503003 PartitionNotFound
+
+You may see that requests to IoT Hub fail with the error **503003 PartitionNotFound**.
+
+This error is internal to IoT Hub and is likely transient. See [IoT Hub internal server errors](#500xxx-internal-errors).
+
+To resolve this error, see [IoT Hub internal server errors](#500xxx-internal-errors).
+
+## 504101 GatewayTimeout
+
+When trying to invoke a direct method from IoT Hub to a device, you may see that the request fails with the error **504101 GatewayTimeout**.
+
+This error occurs because IoT Hub encountered an error and couldn't confirm if the direct method completed before timing out. Or, when using an earlier version of the Azure IoT C# SDK (<1.19.0), the AMQP link between the device and IoT Hub can be dropped silently because of a bug.
+
+To resolve this error, issue a retry or upgrade to the latest version of the Azure IoT C# SDK.
key-vault Disaster Recovery Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/disaster-recovery-guide.md
You must provide the following inputs to create a Managed HSM resource:
- The Azure location. - A list of initial administrators.
-The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 2** location, with **the current signed in user** as the only administrator.
+The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator.
```azurecli-interactive oid=$(az ad signed-in-user show --query objectId -o tsv)
-az keyvault create --hsm-name "ContosoMHSM2" --resource-group "ContosoResourceGroup" --location "westus2" --administrators $oid
+az keyvault create --hsm-name "ContosoMHSM2" --resource-group "ContosoResourceGroup" --location "westus3" --administrators $oid
``` > [!NOTE]
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md
az login
## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *ContosoResourceGroup* in the *westus2* location.
+A resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named *ContosoResourceGroup* in the *westus3* location.
```azurecli-interactive
-az group create --name "ContosoResourceGroup" --location westus2
+az group create --name "ContosoResourceGroup" --location westus3
``` ## Create a Managed HSM
You need to provide following inputs to create a Managed HSM resource:
- Azure location. - A list of initial administrators.
-The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 2** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md)
+The following example creates an HSM named **ContosoMHSM**, in the resource group **ContosoResourceGroup**, residing in the **West US 3** location, with **the current signed in user** as the only administrator, with **7 days retention period** for soft-delete. Read more about [Managed HSM soft-delete](soft-delete-overview.md)
```azurecli-interactive oid=$(az ad signed-in-user show --query objectId -o tsv)
-az keyvault create --hsm-name "ContosoMHSM" --resource-group "ContosoResourceGroup" --location "westus2" --administrators $oid --retention-days 7
+az keyvault create --hsm-name "ContosoMHSM" --resource-group "ContosoResourceGroup" --location "westus3" --administrators $oid --retention-days 7
``` > [!NOTE]
key-vault Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-powershell.md
Login-AzAccount
## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *westus2* location.
+A resource group is a logical container into which Azure resources are deployed and managed. Use the Azure PowerShell [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a resource group named *myResourceGroup* in the *westus3* location.
```azurepowershell-interactive
-New-AzResourceGroup -Name "myResourceGroup" -Location "westus2"
+New-AzResourceGroup -Name "myResourceGroup" -Location "westus3"
``` ## Get your principal ID
Use the Azure PowerShell [New-AzKeyVaultManagedHsm](/powershell/module/az.keyvau
> Each Managed HSM must have a unique name. Replace \<your-unique-managed-hsm-name\> with the name of your Managed HSM in the following examples. - Resource group name: **myResourceGroup**.-- The location: **West US 2**.
+- The location: **West US 3**.
- Your principal ID: Pass the Azure Active Directory principal ID that you obtained in the last section to the "Administrator" parameter. ```azurepowershell-interactive
-New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "westus2" -Administrator "<your-principal-ID>"
+New-AzKeyVaultManagedHsm -Name "<your-unique-managed-hsm-name>" -ResourceGroupName "myResourceGroup" -Location "westus3" -Administrator "<your-principal-ID>"
``` > [!NOTE] > The create command can take a few minutes. Once it returns successfully you are ready to activate your HSM.
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
You may also need your tenant ID. To find it, use the Azure CLI [az ad user show
- **Subscription**: Select an Azure subscription. - **Resource group**: Select **Create new**, enter a unique name for the resource group, and then select **OK**.
- - **Location**: Select a location. For example, **West US 2**.
+ - **Location**: Select a location. For example, **West US 3**.
- **managedHSMName**: Enter a name for your Managed HSM. - **Tenant ID**: The template function automatically retrieves your tenant ID; don't change the default value. If there is no value, enter the Tenant ID that you retrieved in [Prerequisites](#prerequisites). * **initialAdminObjectIds**: Enter the Object ID that you retrieved in [Prerequisites](#prerequisites).
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 03/07/2022 Last updated : 04/04/2022 # Limits and configuration reference for Azure Logic Apps
If your workflow uses [managed connectors](../connectors/managed.md), such as th
Before you set up your firewall with IP addresses, review these considerations:
-* If your logic app workflows run in single-tenant Azure Logic Apps, you need to find the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in these topics:
+* To help you simplify any security rules that you want to create, you can optionally use [service tags](../virtual-network/service-tags-overview.md) instead, rather than specify IP address prefixes for each region. These tags represent a group of IP address prefixes from a specific Azure service and work across the regions where the Azure Logic Apps service is available (an example network security group rule that uses these tags appears after this list):
- * [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
- * [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
+ * **LogicAppsManagement**: Represents the inbound IP address prefixes for the Azure Logic Apps service.
-* If your logic app workflows run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise).
+ * **LogicApps**: Represents the outbound IP address prefixes for the Azure Logic Apps service.
-* To help you simplify any security rules that you want to create, you can optionally use [service tags](../virtual-network/service-tags-overview.md) instead, rather than specify IP address prefixes for each region. These tags work across the regions where the Logic Apps service is available:
+ * **AzureConnectors**: Represents the IP address prefixes for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, such as Azure Storage or Azure Event Hubs.
- * **LogicAppsManagement**: Represents the inbound IP address prefixes for the Logic Apps service.
+* For Standard logic app workflows that run in single-tenant Azure Logic Apps, you have to allow access for any trigger or action connections in your workflows. You can allow traffic from [service tags](../virtual-network/service-tags-overview.md) and use the same level of restrictions or policies as Azure App Service. You also need to find and use the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in the following documentation:
- * **LogicApps**: Represents the outbound IP address prefixes for the Logic Apps service.
+ * [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
+ * [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
- * **AzureConnectors**: Represents the IP address prefixes for managed connectors that make inbound webhook callbacks to the Logic Apps service and outbound calls to their respective services, such as Azure Storage or Azure Event Hubs.
+* For Consumption logic app workflows that run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise).
* If your logic apps have problems accessing Azure storage accounts that use [firewalls and firewall rules](../storage/common/storage-network-security.md), you have [various other options to enable access](../connectors/connectors-create-api-azureblobstorage.md#access-storage-accounts-behind-firewalls).
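As an illustration of the service-tag option described in the first bullet above, an outbound network security group (NSG) rule can reference the **LogicApps** tag instead of per-region IP prefixes. Resource names, priority, and port are placeholders in this sketch.

```azurecli
# Placeholder names; allows outbound HTTPS traffic to the Azure Logic Apps service tag.
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNsg \
  --name AllowOutboundToLogicApps --priority 200 --direction Outbound --access Allow \
  --protocol Tcp --destination-address-prefixes LogicApps --destination-port-ranges 443
```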
For Azure Logic Apps to receive incoming communication through your firewall, yo
> > Some managed connectors make inbound webhook callbacks to the Azure Logic Apps service. For these managed connectors, you can optionally use the > **AzureConnectors** service tag for these managed connectors, rather than specify inbound managed connector IP address prefixes for each region.
-> These tags work across the regions where the Logic Apps service is available.
+> These tags work across the regions where the Azure Logic Apps service is available.
>
-> The following connectors make inbound webhook callbacks to the Logic Apps service:
+> The following connectors make inbound webhook callbacks to the Azure Logic Apps service:
> > Adobe Creative Cloud, Adobe Sign, Adobe Sign Demo, Adobe Sign Preview, Adobe Sign Stage, Microsoft Sentinel, Business Central, Calendly, > Common Data Service, DocuSign, DocuSign Demo, Dynamics 365 for Fin & Ops, LiveChat, Office 365 Outlook, Outlook.com, Parserr, SAP*, > Shifts for Microsoft Teams, Teamwork Projects, Typeform > > \* **SAP**: The return caller depends on whether the deployment environment is either multi-tenant Azure or ISE. In the
-> multi-tenant environment, the on-premises data gateway makes the call back to the Logic Apps service. In an ISE, the SAP
-> connector makes the call back to the Logic Apps service.
+> multi-tenant environment, the on-premises data gateway makes the call back to the Azure Logic Apps service. In an ISE, the SAP
+> connector makes the call back to the Azure Logic Apps service.
<a name="multi-tenant-inbound"></a>
Also, if your workflow also uses any [managed connectors](../connectors/managed.
> To help reduce complexity when you create security rules, you can optionally use the [service tag](../virtual-network/service-tags-overview.md), > **LogicApps**, rather than specify outbound Logic Apps IP address prefixes for each region. Optionally, you can also use the **AzureConnectors** > service tag for managed connectors that make outbound calls to their respective services, such as Azure Storage or Azure Event Hubs, rather than
-> specify outbound managed connector IP address prefixes for each region. These tags work across the regions where the Logic Apps service is available.
+> specify outbound managed connector IP address prefixes for each region. These tags work across the regions where the Azure Logic Apps service is available.
<a name="multi-tenant-outbound"></a>
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/single-tenant-overview-compare.md
ms.suite: integration Previously updated : 02/25/2022 Last updated : 04/04/2022
For the **Logic App (Standard)** resource, these capabilities have changed, or t
## Strict network and firewall traffic permissions
-If your environment has strict network requirements or firewalls that limit traffic, you have to allow access for any trigger or action connections in your logic app workflows. To find the fully qualified domain names (FQDNs) for these connections, review the corresponding sections in these topics:
+If your environment has strict network requirements or firewalls that limit traffic, you have to allow access for any trigger or action connections in your workflows. You can optionally allow traffic from [service tags](../virtual-network/service-tags-overview.md) and use the same level of restrictions or policies as Azure App Service. You also need to find and use the fully qualified domain names (FQDNs) for your connections. For more information, review the corresponding sections in the following documentation:
-* [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
* [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup)
+* [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup)
## Next steps
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Previously updated : 10/21/2021 Last updated : 04/05/2022
Once the terminal is opened, you have access to a full Git client and can clone
We recommend that you clone the repository into your users directory so that others will not make collisions directly on your working branch.
+> [!TIP]
+> There is a performance difference between cloning to the local file system of the compute instance and cloning to the mounted file system (mounted as the `~/cloudfiles/code` directory). In general, cloning to the local file system gives better performance than cloning to the mounted file system. However, the local file system is lost if you delete and recreate the compute instance, while the mounted file system is kept.
+ You can clone any Git repository you can authenticate to (GitHub, Azure Repos, BitBucket, etc.) For more information about cloning, see the guide on [how to use Git CLI](https://guides.github.com/introduction/git-handbook/).
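For illustration, the two clone targets mentioned in the tip look like the following; the repository URL and user alias are placeholders.

```bash
# Placeholder URL and alias.
# Clone to the compute instance's local file system (faster, but lost if the instance is deleted and recreated).
git clone https://github.com/contoso/ml-project.git ~/ml-project

# Clone to the mounted file share (persists across delete/recreate of the compute instance).
git clone https://github.com/contoso/ml-project.git ~/cloudfiles/code/Users/<your-alias>/ml-project
```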
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
The following YAML example is located at `endpoints/online/managed/managed-ident
* Defines the name by which you want to refer to the endpoint, `my-sai-endpoint`. * Specifies the type of authorization to use to access the endpoint, `auth-mode: key`. This YAML example, `2-sai-deployment.yml`,
This YAML example, `2-sai-deployment.yml`,
* Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use. # [User-assigned managed identity](#tab/user-identity)
The following YAML example is located at `endpoints/online/managed/managed-ident
* Specifies the type of authorization to use to access the endpoint, `auth-mode: key`. * Indicates the identity type to use, `type: user_assigned` This YAML example, `2-sai-deployment.yml`,
This YAML example, `2-sai-deployment.yml`,
* Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as, which model to deploy and which environment and scoring script to use.
Configure the variable names for the workspace, workspace location, and the endp
The following code exports these values as environment variables in your endpoint: Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in `az storage account create` and `az storage container create` commands in the next section. The following code exports those values as environment variables: After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
After these variables are exported, create a text file locally. When the endpoin
Decide on the name of your endpoint, workspace, workspace location and export that value as an environment variable: Next, specify what you want to name your blob storage account, blob container, and file. These variable names are defined here, and are referred to in `az storage account create` and `az storage container create` commands in the next section. After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the user-assigned managed identity used in the endpoint. Decide on the name of your user identity name, and export that value as an environment variable:
When you [create an online endpoint](#create-an-online-endpoint), a system-assig
To create a user-assigned managed identity, use the following:
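The command itself is elided in this summary; a minimal sketch of creating a user-assigned managed identity with the Azure CLI (names are placeholders) would be:

```azurecli
# Placeholder names; creates a user-assigned managed identity.
az identity create --name my-endpoint-identity --resource-group MyResourceGroup
```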
This is the storage account and blob container that you'll give the online endpo
First, create a storage account. Next, create the blob container in the storage account. Then, upload your text file to the blob container. # [User-assigned managed identity](#tab/user-identity) First, create a storage account. You can also retrieve an existing storage account ID with the following. Next, create the blob container in the storage account. Then, upload the file to the container.
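The storage commands referenced above are elided in this summary; a hedged sketch with placeholder account, container, and file names:

```azurecli
# Placeholder names; create the account and container, then upload the text file.
az storage account create --name mystorageacct123 --resource-group MyResourceGroup \
  --location westus2 --sku Standard_LRS
az storage container create --account-name mystorageacct123 --name endpoint-files --auth-mode login
az storage blob upload --account-name mystorageacct123 --container-name endpoint-files \
  --name hello.txt --file hello.txt --auth-mode login
```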
When you create an online endpoint, a system-assigned managed identity is create
>[!IMPORTANT] > System assigned managed identities are immutable and can't be changed once created. Check the status of the endpoint with the following. If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md). # [User-assigned managed identity](#tab/user-identity) Check the status of the endpoint with the following. If you encounter any issues, see [Troubleshooting online endpoints deployment and scoring (preview)](how-to-troubleshoot-managed-online-endpoints.md).
You can allow the online endpoint permission to access your storage via its syst
Retrieve the system-assigned managed identity that was created for your endpoint. From here, you can give the system-assigned managed identity permission to access your storage. # [User-assigned managed identity](#tab/user-identity) Retrieve user-assigned managed identity client ID. Retrieve the user-assigned managed identity ID. Get the container registry associated with workspace. Retrieve the default storage of the workspace. Give permission of storage account to the user-assigned managed identity. Give permission of container registry to user assigned managed identity. Give permission of default workspace storage to user-assigned managed identity.
Give permission of default workspace storage to user-assigned managed identity.
Refer to the following script to understand how to use your identity token to access Azure resources, in this scenario, the storage account created in previous sections. ## Create a deployment with your configuration
Create a deployment that's associated with the online endpoint. [Learn more abou
# [System-assigned managed identity](#tab/system-identity) >[!NOTE] > The value of the `--name` argument may override the `name` key inside the YAML file. Check the status of the deployment. To refine the above query to only return specific data, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
To refine the above query to only return specific data, see [Query Azure CLI com
To check the init method output, see the deployment log with the following code. # [User-assigned managed identity](#tab/user-identity) >[!Note] > The value of the `--name` argument may override the `name` key inside the YAML file. Once the command executes, you can check the status of the deployment. To refine the above query to only return specific data, see [Query Azure CLI command output](/cli/azure/query-azure-cli). > [!NOTE] > The init method in the scoring script reads the file from your storage account using the system assigned managed identity token. To check the init method output, see the deployment log with the following code.
When your deployment completes, the model, the environment, and the endpoint ar
Once your online endpoint is deployed, confirm its operation. Details of inferencing vary from model to model. For this guide, the JSON query parameters look like: To call your endpoint, run: # [System-assigned managed identity](#tab/system-identity) # [User-assigned managed identity](#tab/user-identity)
If you don't plan to continue using the deployed online endpoint and storage, de
# [System-assigned managed identity](#tab/system-identity) # [User-assigned managed identity](#tab/user-identity)
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
You should receive a JSON dictionary with information about the pipeline job, in
Open `ComponentA.yaml` to see how the first component is defined: In the current preview, only components of type `command` are supported. The `name` is the unique identifier and used in Studio to describe the component, and `display_name` is used for a display-friendly name. The `version` key-value pair allows you to evolve your pipeline components while maintaining reproducibility with older versions.
For more information on components and their specification, see [What is an Azur
In the example directory, the `pipeline.yaml` file looks like the following code: If you open the job's URL in Studio (the value of `services.Studio.endpoint` from the `job create` command when creating a job or `job show` after the job has been created), you'll see a graph representation of your pipeline:
Each of these phases may have multiple components. For instance, the data prepar
The `pipeline.yml` begins with the mandatory `type: pipeline` key-value pair. Then, it defines inputs and outputs as follows: As described previously, these entries specify the input data to the pipeline, in this case the dataset in `./data`, and the intermediate and final outputs of the pipeline, which are stored in separate paths. The names within these input and output entries become values in the `inputs` and `outputs` entries of the individual jobs: Notice how `parent.jobs.train-job.outputs.model_output` is used as an input to both the prediction job and the scoring job, as shown in the following diagram:
Click on a component. You'll see some basic information about the component, suc
In the `1b_e2e_registered_components` directory, open the `pipeline.yml` file. The keys and values in the `inputs` and `outputs` dictionaries are similar to those already discussed. The only significant difference is the value of the `jobs.<JOB_NAME>.component` entries. The `component` value is of the form `azureml:<JOB_NAME>:<COMPONENT_VERSION>`. The `train-job` definition, for instance, specifies that the latest version of the registered component `Train` should be used: ## Caching & reuse
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
To deploy using these files, you can use either the studio or the Azure CLI.
To create a deployment from the CLI, you'll need the Azure CLI with the ML v2 extension. Run the following command to confirm that you have both: If you receive an error message or you don't see `Extensions: ml` in the response, follow the steps at [Install and set up the CLI (v2)](how-to-configure-cli.md). Sign in: If you have access to multiple Azure subscriptions, you can set your active subscription: Set the default resource group and workspace to where you wish to create the deployment: ## Put the scoring file in its own directory
To create an online endpoint from the command line, you'll need to create an *en
__automl_endpoint.yml__ __automl_deployment.yml__ You'll need to modify this file to use the files you downloaded from the AutoML Models page.
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
In this article, you learn how to use the new REST APIs to:
> [!NOTE] > Batch endpoint names need to be unique at the Azure region level. For example, there can be only one batch endpoint with the name mybatchendpoint in westus2. ## Azure Machine Learning batch endpoints
In the following REST API calls, we use `SUBSCRIPTION_ID`, `RESOURCE_GROUP`, `LO
Administrative REST requests use a [service principal authentication token](how-to-manage-rest.md#retrieve-a-service-principal-authentication-token). Replace `TOKEN` with your own value. You can retrieve this token with the following command: The service provider uses the `api-version` argument to ensure compatibility. The `api-version` argument varies from service to service. Set the API version as a variable to accommodate future versions: ### Create compute Batch scoring runs only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster where you can run batch scoring workflows. Create a compute cluster: > [!TIP] > If you want to use an existing compute instead, you must specify the full Azure Resource Manager ID when [creating the batch deployment](#create-batch-deployment). The full ID uses the format `/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/<your-compute-name>`.
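As a sketch of the token, API-version, and compute setup described above: the API version string and the request-body fields are assumptions that should be checked against the current AmlCompute REST reference before use.

```azurecli
# Service principal token for the administrative (management-plane) calls.
TOKEN=$(az account get-access-token --query accessToken -o tsv)
API_VERSION="2022-05-01"   # assumption; use the version the article specifies

# Sketch of creating the compute cluster over REST.
curl --request PUT "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/computes/batch-cluster?api-version=$API_VERSION" \
  --header "Authorization: Bearer $TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"location": "'$LOCATION'", "properties": {"computeType": "AmlCompute", "properties": {"vmSize": "STANDARD_D2_V2", "scaleSettings": {"minNodeCount": 0, "maxNodeCount": 5}}}}'
```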
To register the model and code, first they need to be uploaded to a storage acco
You can use the tool [jq](https://stedolan.github.io/jq/) to parse the JSON result and get the required values. You can also use the Azure portal to find the same information: ### Upload & register code Now that you have the datastore, you can upload the scoring script. Use the Azure Storage CLI to upload a blob into your default container: > [!TIP] > You can also use other methods to upload, such as the Azure portal or [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/). Once you upload your code, you can specify your code with a PUT request: ### Upload and register model Similar to the code, upload the model files: Now, register the model: ### Create environment The deployment needs to run in an environment that has the required dependencies. Create the environment with a PUT request. Use a Docker image from Microsoft Container Registry. You can configure the Docker image with `image` and add conda dependencies with `condaFile`. Run the following code to read the `condaFile` defined in JSON. The source file is at `/cli/endpoints/batch/mnist/environment/conda.json` in the example repository: Now, run the following snippet to create an environment: ## Deploy with batch endpoints
Next, create the batch endpoint, a deployment, and set the default deployment.
Create the batch endpoint: ### Create batch deployment Create a batch deployment under the endpoint: ### Set the default batch deployment under the endpoint There's only one default batch deployment under an endpoint, which is used when you invoke the endpoint to run a batch scoring job. ## Run batch scoring
Invoking a batch endpoint triggers a batch scoring job. A job `id` is returned i
Get the scoring URI and access token to invoke the batch endpoint. First, get the scoring URI: Get the batch endpoint access token: Now, invoke the batch endpoint to start a batch scoring job. The following example scores data publicly available in the cloud: If your data is stored in an Azure Machine Learning registered datastore, you can invoke the batch endpoint with a dataset. The following code creates a new dataset: Next, reference the dataset when invoking the batch endpoint: In the previous code snippet, a custom output location is provided by using `datastoreId`, `path`, and `outputFileName`. These settings allow you to configure where to store the batch scoring results.
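Stepping back to the start of this step, the scoring URI and access token can be retrieved roughly as follows. This is a sketch: the token audience `https://ml.azure.com` and the `scoringUri` property path are assumptions to verify against the current REST reference.

```azurecli
# Read the scoring URI from the batch endpoint resource.
SCORING_URI=$(az rest --method get \
  --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.MachineLearningServices/workspaces/$WORKSPACE/batchEndpoints/$ENDPOINT_NAME?api-version=$API_VERSION" \
  --query properties.scoringUri -o tsv)

# Token for the data-plane (scoring) call.
SCORING_TOKEN=$(az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv)
```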
For this example, the output is stored in the default blob storage for the workspace. The folder name is the same as the endpoint name, and the file name is randomly generated by the following code: ### Check the batch scoring job
Batch scoring jobs usually take some time to process the entire set of inputs. M
> [!TIP] > The example invokes the default deployment of the batch endpoint. To invoke a non-default deployment, use the `azureml-model-deployment` HTTP header and set the value to the deployment name. For example, using a parameter of `--header "azureml-model-deployment: $DEPLOYMENT_NAME"` with curl. ### Check batch scoring results
For information on checking the results, see [Check batch scoring results](how-t
If you aren't going to use the batch endpoint, you should delete it with the following command (it deletes the batch endpoint and all the underlying deployments): ## Next steps
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
cd azureml-examples/cli
Define environment variables: ## Download a TensorFlow model Download and unzip a model that divides an input by two and adds 2 to the result: ## Run a TF Serving image locally to test that it works Use docker to run your image locally for testing: ### Check that you can send liveness and scoring requests to the image First, check that the container is "alive," meaning that the process inside the container is still running. You should get a 200 (OK) response. Then, check that you can get predictions about unlabeled data: ### Stop the image Now that you've tested locally, stop the image: ## Create a YAML file for your endpoint and deployment
You can configure your cloud deployment using YAML. Take a look at the sample YA
__tfserving-endpoint.yml__ __tfserving-deployment.yml__ There are a few important concepts to notice in this YAML:
az ml online-deployment create --name tfserving-deployment -f endpoints/online/c
Once your deployment completes, see if you can make a scoring request to the deployed endpoint. ### Delete endpoint and model
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
To set your endpoint name, choose one of the following commands, depending on yo
For Unix, run this command: > [!NOTE] > Endpoint names must be unique within an Azure region. For example, in the Azure `westus2` region, there can be only one endpoint with the name `my-endpoint`.
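A sketch of the Unix command referenced above; the value is a placeholder for your own unique name.

```azurecli
export ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
```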
The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* file: > [!NOTE] > For a full description of the YAML, see [Online endpoint (preview) YAML reference](reference-yaml-endpoint-online.md).
The example contains all the files needed to deploy a model on an online endpoin
The following snippet shows the *endpoints/online/managed/sample/blue-deployment.yml* file, with all the required inputs: The table describes the attributes of a `deployment`:
To save time debugging, we *highly recommend* that you test-run your endpoint lo
First create the endpoint. Optionally, for a local endpoint, you can skip this step and directly create the deployment (next step), which will, in turn, create the required metadata. This is useful for development and testing purposes. Now, create a deployment named `blue` under the endpoint. The `--local` flag directs the CLI to deploy the endpoint in the Docker environment.
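As a sketch of these two steps, using the file paths from the snippets shown earlier in this article:

```azurecli
# Optional for local runs: create the endpoint definition.
az ml online-endpoint create --local -n $ENDPOINT_NAME \
  -f endpoints/online/managed/sample/endpoint.yml

# Create the blue deployment locally in Docker.
az ml online-deployment create --local -n blue --endpoint-name $ENDPOINT_NAME \
  -f endpoints/online/managed/sample/blue-deployment.yml
```

Dropping the `--local` flag targets your Azure workspace instead of the local Docker engine.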
Check the status to see whether the model was deployed without error: The output should appear similar to the following JSON. Note that the `provisioning_state` is `Succeeded`.
Invoke the endpoint to score the model by using the convenience command `invoke` and passing query parameters that are stored in a JSON file: If you want to use a REST client (like curl), you must have the scoring URI. To get the scoring URI, run `az ml online-endpoint show --local -n $ENDPOINT_NAME`. In the returned data, find the `scoring_uri` attribute. Sample curl-based commands are available later in this doc.
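A minimal sketch of the two calls described here; the request file name is a placeholder for the sample JSON shipped with the example.

```azurecli
# Score against the local deployment.
az ml online-endpoint invoke --local -n $ENDPOINT_NAME --request-file sample-request.json

# Read only the scoring URI from the endpoint details.
az ml online-endpoint show --local -n $ENDPOINT_NAME --query scoring_uri -o tsv
```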
In the example *score.py* file, the `run()` method logs some output to the console. You can view this output by using the `get-logs` command again: ## Deploy your online endpoint to Azure
Next, deploy your online endpoint to Azure.
To create the endpoint in the cloud, run the following code: To create the deployment named `blue` under the endpoint, run the following code: This deployment might take up to 15 minutes, depending on whether the underlying environment or image is being built for the first time. Subsequent deployments that use the same environment will finish processing more quickly.
The `show` command contains information in `provisioning_status` for endpoint and deployment: You can list all the endpoints in the workspace in a table format by using the `list` command:
az ml online-endpoint list --output table
Check the logs to see whether the model was deployed without error: By default, logs are pulled from inference-server. To see the logs from storage-initializer (it mounts assets like model and code to the container), add the `--container storage-initializer` flag.
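For example (the deployment name `blue` is assumed from the earlier steps):

```azurecli
# Inference-server logs by default.
az ml online-deployment get-logs -n blue --endpoint-name $ENDPOINT_NAME --lines 100

# Logs from the container that mounts the model and code assets.
az ml online-deployment get-logs -n blue --endpoint-name $ENDPOINT_NAME --container storage-initializer
```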
You can use either the `invoke` command or a REST client of your choice to invoke the endpoint and score some data: The following example shows how to get the key used to authenticate to the endpoint: Next, use curl to score data. Notice that we use the `show` and `get-credentials` commands to get the authentication credentials. Also notice that we're using the `--query` flag to filter attributes to only what we need. To learn more about `--query`, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
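A hedged sketch of the curl-based flow described above; the request file name is a placeholder.

```azurecli
SCORING_URI=$(az ml online-endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv)
ENDPOINT_KEY=$(az ml online-endpoint get-credentials -n $ENDPOINT_NAME --query primaryKey -o tsv)

curl --request POST "$SCORING_URI" \
  --header "Authorization: Bearer $ENDPOINT_KEY" \
  --header "Content-Type: application/json" \
  --data @sample-request.json
```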
To understand how `update` works:
1. Because you modified the `init()` function (`init()` runs when the endpoint is created or updated), the message `Updated successfully` will be in the logs. Retrieve the logs by running:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint.sh" ID="get_logs" :::
The `update` command also works with local deployments. Use the same `az ml online-deployment update` command with the `--local` flag.
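For instance (file path assumed from the earlier deployment snippet):

```azurecli
az ml online-deployment update --local -n blue --endpoint-name $ENDPOINT_NAME \
  -f endpoints/online/managed/sample/blue-deployment.yml
```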
The logs might take up to an hour to connect. After an hour, send some scoring r
If you aren't going to use the deployment, you should delete it by running the following code (it deletes the endpoint and all the underlying deployments): ## Next steps
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Provides an MLflow base image/curated environment that contains,
In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set this, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint: ## Deploy using CLI (v2)
This example shows how you can deploy an MLflow model to an online endpoint usin
__create-endpoint.yaml__
- :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
1. To create a new endpoint using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
1. Create a YAML configuration file for the deployment. The following example configures a deployment of the `sklearn-diabetes` model to the endpoint created in the previous step:
This example shows how you can deploy an MLflow model to an online endpoint usin
__sklearn-deployment.yaml__
- :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
1. To create the deployment using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
### Invoke the endpoint Once your deployment completes, use the following command to make a scoring request to the deployed endpoint. The [sample-request-sklearn.json](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/mlflow/sample-request-sklearn.json) file used in this command is located in the `/cli/endpoints/online/mlflow` directory of the azureml-examples repo: **sample-request-sklearn.json** The response will be similar to the following text:
Once you're done with the endpoint, use the following command to delete it: ## Deploy using Azure Machine Learning studio
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
This section shows how you can deploy Triton to managed online endpoint using th
1. Use the following command to set the name of the endpoint that will be created. In this example, a random name is created for the endpoint:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="set_endpoint_name":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="set_endpoint_name":::
1. Install Python requirements using the following commands:
This section shows how you can deploy Triton to managed online endpoint using th
__create-managed-endpoint.yaml__
- :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-endpoint.yaml":::
1. To create a new endpoint using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_endpoint":::
1. Create a YAML configuration file for the deployment. The following example configures a deployment named __blue__ to the endpoint created in the previous step. The one used in the following commands is located at `/cli/endpoints/online/triton/single-model/create-managed-deployment.yml` in the azureml-examples repo you cloned earlier:
This section shows how you can deploy Triton to managed online endpoint using th
> > This deployment uses a Standard_NC6s_v3 VM. You may need to request a quota increase for your subscription before you can use this VM. For more information, see [NCv3-series](../virtual-machines/ncv3-series.md).
- :::code language="yaml" source="~/azureml-examples-march-cli-preview/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml":::
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/triton/single-model/create-managed-deployment.yaml":::
1. To create the deployment using the YAML configuration, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="create_deployment":::
### Invoke your endpoint
Once your deployment completes, use the following command to make a scoring requ
1. To get the endpoint scoring uri, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_scoring_uri":::
1. To get an authentication token, use the following command:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="get_token":::
1. To score data with the endpoint, use the following command. It submits the image of a peacock (https://aka.ms/peacock-pic) to the endpoint:
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-triton-managed-online-endpoint.sh" ID="check_scoring_of_model":::
The response from the script is similar to the following text:
Once your deployment completes, use the following command to make a scoring requ
Once you're done with the endpoint, use the following command to delete it: Use the following command to delete your model:
machine-learning How To Train Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-cli.md
Using `--depth 1` clones only the latest commit to the repository, which reduces
You can create an Azure Machine Learning compute cluster from the command line. For instance, the following commands will create one cluster named `cpu-cluster` and one named `gpu-cluster`. You are not charged for compute at this point as `cpu-cluster` and `gpu-cluster` will remain at zero nodes until a job is submitted. Learn more about how to [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
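A sketch of those two commands; the GPU VM size is an assumption, so pick a size available in your region and quota.

```azurecli
az ml compute create -n cpu-cluster --type amlcompute --min-instances 0 --max-instances 4
az ml compute create -n gpu-cluster --type amlcompute --min-instances 0 --max-instances 4 \
  --size Standard_NC6s_v3   # assumption; any GPU SKU you have quota for works
```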
Then create an Azure Machine Learning data asset from the local directory, which
Optionally, remove the local file and directory: Registered data assets can be used as inputs to a job using the `path` field for a job input. The format is `azureml:<data_name>:<data_version>`, so for the CIFAR-10 dataset just created, it is `azureml:cifar-10-example:1`. You can optionally use the `azureml:<data_name>@latest` syntax instead if you want to reference the latest version of the data asset. Azure ML will resolve that reference to the explicit version. With the data asset in place, you can author a distributed PyTorch job to train the model: And run it:
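A sketch of the data-asset registration and job submission described above. The `az ml data` subcommand name reflects recent versions of the ml extension, and the pipeline file name is a placeholder for the example's job YAML.

```azurecli
# Register the local ./data directory as a versioned data asset.
az ml data create --name cifar-10-example --version 1 --path ./data

# Submit the training job (job.yml is a placeholder file name).
az ml job create -f job.yml
```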
machine-learning How To Train With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-rest.md
The LightGBM example needs to run in a LightGBM environment. Create the environm
You can configure the docker image with `Docker` and add conda dependencies with `condaFile`: ### Datastore
AZURE_STORAGE_KEY=$(az storage account keys list --account-name $AZURE_STORAGE_A
Now that you have the datastore, you can create a dataset. For this example, use the common dataset `iris.csv`. ### Code
az storage blob upload-batch -d $AZUREML_DEFAULT_CONTAINER/src \
Once you upload your code, you can specify your code with a PUT request and reference the URL through `codeUri`. ## Submit a training job
Now that your assets are in place, you can run the LightGBM job, which outputs a
Use the following commands to submit the training job: ## Submit a hyperparameter sweep job
Azure Machine Learning also lets you efficiently tune training hyperparameters.
To create a sweep job with the same LightGBM example, use the following commands: ## Next steps
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
Set your endpoint name. Replace `YOUR_ENDPOINT_NAME` with a unique name within a
For Unix, run this command: For Windows, run this command:
set ENDPOINT_NAME="<YOUR_ENDPOINT_NAME>"
A batch endpoint runs only on cloud computing resources, not locally. The cloud computing resource is a reusable virtual computer cluster. Run the following code to create an Azure Machine Learning compute cluster. The following examples in this article use the compute created here named `batch-cluster`. Adjust as needed and reference your compute using `azureml:<your-compute-name>`. > [!NOTE] > You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about how to [manage and optimize cost for AmlCompute](how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).
A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch s
The following YAML file defines a batch endpoint, which you can include in the CLI command for [batch endpoint creation](#create-a-batch-endpoint). In the repository, this file is located at `/cli/endpoints/batch/batch-endpoint.yml`. The following table describes the key properties of the endpoint YAML. For the full batch endpoint YAML schema, see [CLI (v2) batch endpoint YAML schema](./reference-yaml-endpoint-batch.md).
For more information about how to reference an Azure ML entity, see [Referencing
The example repository contains all the required files. The following YAML file defines a batch deployment with all the required inputs and optional settings. You can include this file in your CLI command to [create your batch deployment](#create-a-batch-deployment). In the repository, this file is located at `/cli/endpoints/batch/nonmlflow-deployment.yml`. The following table describes the key properties of the deployment YAML. For the full batch deployment YAML schema, see [CLI (v2) batch deployment YAML schema](./reference-yaml-deployment-batch.md).
Now, let's deploy the model with batch endpoints and run batch scoring.
The simplest way to create a batch endpoint is to run the following code, providing only a `--name`. You can also create a batch endpoint using a YAML file. Add the `--file` parameter to the above command and specify the YAML file path.
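For example, using the `ENDPOINT_NAME` variable set earlier and the YAML path quoted above:

```azurecli
# Name only:
az ml batch-endpoint create --name $ENDPOINT_NAME

# Or drive it from the YAML definition:
az ml batch-endpoint create --file endpoints/batch/batch-endpoint.yml
```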
Run the following code to create a batch deployment named `nonmlflowdp` under the batch endpoint and set it as the default deployment. > [!TIP] > The `--set-default` parameter sets the newly created deployment as the default deployment of the endpoint. It's a convenient way to create a new default deployment of the endpoint, especially for the first deployment creation. As a best practice for production scenarios, you may want to create a new deployment without setting it as default, verify it, and update the default deployment later. For more information, see the [Deploy a new model](#deploy-a-new-model) section.
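A sketch of that command, using the deployment YAML referenced earlier in this article:

```azurecli
az ml batch-deployment create --name nonmlflowdp --endpoint-name $ENDPOINT_NAME \
  --file endpoints/batch/nonmlflow-deployment.yml --set-default
```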
Use `show` to check endpoint and deployment details.
To check a batch deployment, run the following code: To check a batch endpoint, run the following code. As the newly created deployment is set as the default deployment, you should see `nonmlflowdp` in `defaults.deployment_name` from the response. ### Invoke the batch endpoint to start a batch scoring job
There are three options to specify the data inputs in CLI `invoke`.
The example uses publicly available data in a folder from `https://pipelinedata.blob.core.windows.net/sampledata/mnist`, which contains thousands of hand-written digits. The name of the batch scoring job is returned in the invoke response. Run the following code to invoke the batch endpoint using this data. `--query name` is added to only return the job name from the invoke response, and it will be used later to [Monitor batch scoring job execution progress](#monitor-batch-scoring-job-execution-progress) and [Check batch scoring results](#check-batch-scoring-results). Remove `--query name -o tsv` if you want to see the full invoke response. For more information on the `--query` parameter, see [Query Azure CLI command output](/cli/azure/query-azure-cli).
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/batch-score.sh" ID="start_batch_scoring_job" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="start_batch_scoring_job" :::
* __Option 2: Registered dataset__
Some settings can be overwritten when invoke to make best use of the compute res
To specify the output location and overwrite settings when invoking, run the following code. The example stores the outputs in a folder with the same name as the endpoint in the workspace's default blob storage, and also uses a random file name to ensure the uniqueness of the output location. The code should work in Unix. Replace them with your own unique folder and file name. ### Monitor batch scoring job execution progress
Batch scoring jobs usually take some time to process the entire set of inputs.
You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`. ### Check batch scoring results
Follow the below steps to view the scoring results in Azure Storage Explorer whe
1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
- :::code language="azurecli" source="~/azureml-examples-march-cli-preview/cli/batch-score.sh" ID="show_job_in_studio" :::
+ :::code language="azurecli" source="~/azureml-examples-main/cli/batch-score.sh" ID="show_job_in_studio" :::
1. In the graph of the run, select the `batchscoring` step. 1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
Once you have a batch endpoint, you can continue to refine your model and add ne
To create a new batch deployment under the existing batch endpoint but not set it as the default deployment, run the following code: Notice that `--set-default` is not used. If you `show` the batch endpoint again, you should see no change to `defaults.deployment_name`.
The example uses a model (`/cli/endpoints/batch/autolog_nyc_taxi`) trained and t
Below is the YAML file the example uses to deploy an MLflow model, which only contains the minimum required properties. The source file in the repository is `/cli/endpoints/batch/mlflow-deployment.yml`. > [!NOTE] > `scoring_script` and `environment` auto-generation only supports the Python Function model flavor and column-based model signature.
To test the new non-default deployment, run the following code. The example uses a different model that accepts a publicly available CSV file from `https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv`. Notice that `--deployment-name` is used to specify the new deployment name. This parameter allows you to `invoke` a non-default deployment, and it will not update the default deployment of the batch endpoint.
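A hedged sketch of that invocation. The `--input` flag name reflects recent versions of the ml CLI extension; check `az ml batch-endpoint invoke -h` for the flag your version uses.

```azurecli
JOB_NAME=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME \
  --deployment-name mlflowdp \
  --input https://pipelinedata.blob.core.windows.net/sampledata/nytaxi/taxi-tip-data.csv \
  --query name -o tsv)
```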
To update the default batch deployment of the endpoint, run the following code: Now, if you `show` the batch endpoint again, you should see `defaults.deployment_name` is set to `mlflowdp`. You can `invoke` the batch endpoint directly without the `--deployment-name` parameter.
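For example, a sketch using the generic `--set` argument:

```azurecli
az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=mlflowdp
```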
If you want to update the deployment (for example, update code, model, environme
If you aren't going to use the old batch deployment, you should delete it by running the following code. `--yes` is used to confirm the deletion. Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs will not be deleted. ## Next steps
object-anchors Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/concepts/sdk-overview.md
ObjectModel model = await observer.LoadObjectModelAsync(modelAsBytes);
The application creates a query to detect instances of that model within a space. ```cs
+#if WINDOWS_UWP || DOTNETWINRT_PRESENT
+#define SPATIALCOORDINATESYSTEM_API_PRESENT
+#endif
+ using Microsoft.Azure.ObjectAnchors; using Microsoft.Azure.ObjectAnchors.SpatialGraph; using Microsoft.Azure.ObjectAnchors.Unity;
using UnityEngine;
// Get the coordinate system. SpatialGraphCoordinateSystem? coordinateSystem = null;
-#if WINDOWS_UWP
+#if SPATIALCOORDINATESYSTEM_API_PRESENT
SpatialCoordinateSystem worldOrigin = ObjectAnchorsWorldManager.WorldOrigin; if (worldOrigin != null) {
foreach (ObjectInstance instance in detectedObjects)
// Supported modes: // "LowLatencyCoarsePosition" - Consumes less CPU cycles thus fast to // update the state.
- // "HighLatencyAccuratePosition" - (Not yet implemented) Consumes more CPU
+ // "HighLatencyAccuratePosition" - Uses the device's camera and consumes more CPU
// cycles thus potentially taking longer // time to update the state. // "Paused" - Stops to update the state until mode
In the state changed event, we can query the latest state or dispose an instance
```cs using Microsoft.Azure.ObjectAnchors;
-var InstanceChangedHandler = new Windows.Foundation.TypedEventHandler<ObjectInstance, ObjectInstanceChangedEventArgs>((sender, args) =>
+void InstanceChangedHandler(object sender, ObjectInstanceChangedEventArgs args)
{ // Try to query the current instance state.
- ObjectInstanceState? state = sender.TryGetCurrentState();
+ ObjectInstanceState state = sender.TryGetCurrentState();
- if (state.HasValue)
+ if (state != null)
{
- // Process latest state via state.Value.
- // An object pose includes scale, rotation and translation, applied in
+ // Process latest state.
+ // An object pose includes rotation and translation, applied in
// the same order to the object model in the centered coordinate system. } else
var InstanceChangedHandler = new Windows.Foundation.TypedEventHandler<ObjectInst
// This object instance is lost for tracking, and will never be recovered. // The caller can detach the Changed event handler from this instance // and dispose it.
- sender.Dispose();
}
-});
+}
``` Also, an application can optionally record one or multiple diagnostics sessions for offline debugging.
object-anchors Unity Remoting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/unity-remoting.md
+
+ Title: 'Quickstart: Using Unity Remoting with Azure Object Anchors'
+description: In this quickstart, you learn how to enable Unity Remoting in a project that uses Object Anchors.
+++ Last updated : 04/04/2022++++
+# Quickstart: Using Unity Remoting with Azure Object Anchors
+In this quickstart, you'll learn how to use Unity Remoting with [Azure Object Anchors](../overview.md) to enable a more efficient
+inner-loop for application development. With Unity Remoting, you can use Play Mode in the Unity Editor to preview your
+changes in real time without waiting through a full build and deployment cycle. The latest versions of Unity Remoting
+and the Object Anchors SDK support using Object Anchors while in Play Mode, so you can detect real physical objects
+while running inside the Unity Editor.
+
+## Prerequisites
+To complete this quickstart, make sure you have:
+* All prerequisites from either the [Unity HoloLens](get-started-unity-hololens.md) or the [Unity HoloLens with MRTK](get-started-unity-hololens-mrtk.md) quickstarts.
+* Reviewed the general instructions for <a href="/windows/mixed-reality/develop/native/holographic-remoting-overview">Holographic remoting</a>.
+* Followed the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a> documentation to set up the tool and learn how to use it.
+
+### Minimum component versions
+
+|Component |Unity 2019 |Unity 2020 |
+|--|-|-|
+|Unity Editor | 2019.4.36f1 | 2020.3.30f1 |
+|Windows Mixed Reality XR Plugin | 2.9.2 | 4.6.2 |
+|Holographic Remoting Player | 2.7.5 | 2.7.5 |
+|Azure Object Anchors SDK | 0.19.0 | 0.19.0 |
+|Mixed Reality WinRT Projections | 0.5.2009 | 0.5.2009 |
+
+## One-time setup
+1. On your HoloLens, install version 2.7.5 or newer of the [Holographic Remoting Player](https://www.microsoft.com/p/holographic-remoting-player/9nblggh4sv40) via the Microsoft Store.
+1. In the <a href="/windows/mixed-reality/develop/unity/welcome-to-mr-feature-tool" target="_blank">Mixed Reality Feature Tool</a>, under the **Platform Support** section, install the **Mixed Reality WinRT Projections** feature package, version 0.5.2009 or newer, into your Unity project folder.
+1. In the Unity **Package Manager** window, ensure that the **Windows XR Plugin** is updated to version 2.9.2 or newer for Unity 2019, or version 4.6.2 or newer for Unity 2020.
+1. In the Unity **Project Settings** window, click on the **XR Plug-in Management** section, select the **PC Standalone** tab, and ensure that the box for **Windows Mixed Reality** is checked, as well as **Initialize XR on Startup**.
+1. Open the **Windows XR Plugin Remoting** window from the **Window/XR** menu, select **Remote to Device** from the drop-down, and enter your device's IP address in the **Remote Machine** box.
+1. Place .ou model files in `%USERPROFILE%\AppData\LocalLow\<companyname>\<productname>` where `<companyname>` and `<productname>` match the values in the **Player** section of your project's **Project Settings** (e.g. `Microsoft\AOABasicApp`). (See the **Windows Editor and Standalone Player** section of [Unity - Scripting API: Application.persistentDataPath](https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html).)
+
+## Using Remoting with Object Anchors
+1. Open your project in the Unity Editor.
+1. Launch the **Holographic Remoting Player** app on your HoloLens.
+1. *Before* entering **Play Mode** for the first time, *uncheck* the **Connect on Play** checkbox, and manually connect to the HoloLens by pressing **Connect**.
+ 1. Enter **Play Mode** to finish initializing the connection.
+ 1. After this, you may reenable **Connect on Play** for the remainder of the session.
+1. Enter and exit Play Mode as needed; iterate on changes in the Editor; use Visual Studio to debug script execution; and carry out all the normal Unity development activities you're used to in Play Mode!
+
+## Known limitations
+* Some Object Anchors SDK features are not supported since they rely on access to the HoloLens cameras which is not currently available via Remoting. These include <a href="/dotnet/api/microsoft.azure.objectanchors.objectobservationmode">Active Observation Mode</a> and <a href="/dotnet/api/microsoft.azure.objectanchors.objectinstancetrackingmode">High Accuracy Tracking Mode</a>.
+* The Object Anchors SDK currently only supports Unity Remoting while using the **Windows Mixed Reality XR Plugin**. If the **OpenXR XR Plugin** is used, <a href="/dotnet/api/microsoft.azure.objectanchors.objectobserver.issupported">`ObjectObserver.IsSupported`</a> will return `false` in **Play Mode** and other APIs may throw exceptions.
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Before deploying the demo application on OpenShift we will deploy the database s
To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. Since this information contains sensitive information, we will use [OpenShift Secret objects](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) to store it. > [!NOTE]
-> You can also use the [JBoss EAP Operator](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default) to deploy this example, however, notice that the JBoss EAP Operator will deploy the application as `StatefulSets`. Use the JBoss EAP Operator if your application requires one or more one of the following.
+> You can also use the [JBoss EAP Operator](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default) to deploy this example, however, notice that the JBoss EAP Operator will deploy the application as `StatefulSets`. Use the JBoss EAP Operator if your application requires one or more one of the following.
> > * Stable, unique network identifiers. > * Stable, persistent storage.
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
This article describes how to create and manage a self-hosted integration runtim
Installation of the self-hosted integration runtime on a domain controller isn't supported.
+> [!IMPORTANT]
+> Scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, JDK, Visual C++ Redistributable, or specific driver.
+> For your source, **[refer to each source article for prerequisite details.](azure-purview-connector-overview.md)**
+> Any requirements will be listed in the **Prerequisites** section.
+ - Self-hosted integration runtime requires a 64-bit Operating System with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details. - The recommended minimum configuration for the self-hosted integration runtime machine is a 2-GHz processor with 4 cores, 8 GB of RAM, and 80 GB of available hard drive space. For the details of system requirements, see [Download](https://www.microsoft.com/download/details.aspx?id=39717). - If the host machine hibernates, the self-hosted integration runtime doesn't respond to data requests. Configure an appropriate power plan on the computer before you install the self-hosted integration runtime. If the machine is configured to hibernate, the self-hosted integration runtime installer prompts with a message. - You must be an administrator on the machine to successfully install and configure the self-hosted integration runtime. - Scan runs happen with a specific frequency per the schedule you've set up. Processor and RAM usage on the machine follows the same pattern with peak and idle times. Resource usage also depends heavily on the amount of data that is scanned. When multiple scan jobs are in progress, you see resource usage goes up during peak times.-- Scanning some data sources requires additional setup on the self-hosted integration runtime machine. For example, JDK, Visual C++ Redistributable, or specific driver. Refer to [each source article](azure-purview-connector-overview.md) for prerequisite details. > [!IMPORTANT] > If you use the Self-Hosted Integration runtime to scan Parquet files, you need to install the **64-bit JRE 8 (Java Runtime Environment) or OpenJDK** on your IR machine. Check our [Java Runtime Environment section at the bottom of the page](#java-runtime-environment-installation) for an installation guide.
To create and set up a self-hosted integration runtime, use the following proced
4. Enter a name for your IR, and select Create.
-5. On the **Integration Runtime settings** page, follow the steps under the **Manual setup** section. You will have to download the integration runtime from the download site onto a VM or machine where you intend to run it.
+5. On the **Integration Runtime settings** page, follow the steps under the **Manual setup** section. You'll have to download the integration runtime from the download site onto a VM or machine where you intend to run it.
:::image type="content" source="media/manage-integration-runtimes/integration-runtime-settings.png" alt-text="get key":::
You can delete a self-hosted integration runtime by navigating to **Integration
## Service account for Self-hosted integration runtime
-The default logon service account of self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
+The default sign in service account of self-hosted integration runtime is **NT SERVICE\DIAHostService**. You can see it in **Services -> Integration Runtime Service -> Properties -> Log on**.
:::image type="content" source="../data-factory/media/create-self-hosted-integration-runtime/shir-service-account.png" alt-text="Service account for self-hosted integration runtime":::
Here are the domains and outbound ports that you need to allow at both **corpora
| Domain names | Outbound ports | Description | | -- | -- | - |
-| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Azure Purview service. Currently wildcard is required as there is no dedicated resource. |
-| `*.servicebus.windows.net` | 443 | Required for setting up scan on Azure Purview Studio. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there is no dedicated resource. |
+| `*.frontend.clouddatahub.net` | 443 | Required to connect to the Azure Purview service. Currently wildcard is required as there's no dedicated resource. |
+| `*.servicebus.windows.net` | 443 | Required for setting up scan on Azure Purview Studio. This endpoint is used for interactive authoring from UI, for example, test connection, browse folder list and table list to scope scan. Currently wildcard is required as there's no dedicated resource. |
| `<purview_account>.purview.azure.com` | 443 | Required to connect to Azure Purview service. | | `<managed_storage_account>.blob.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Blob storage account. | | `<managed_storage_account>.queue.core.windows.net` | 443 | Required to connect to the Azure Purview managed Azure Queue storage account. | | `download.microsoft.com` | 443 | Required to download the self-hosted integration runtime updates. If you have disabled auto-update, you can skip configuring this domain. | | `login.windows.net`<br>`login.microsoftonline.com` | 443 | Required to sign in to the Azure Active Directory. |
-Depending on the sources you want to scan, you also need to allow additional domains and outbound ports for other Azure or external sources. A few examples are provided here:
+Depending on the sources you want to scan, you also need to allow other domains and outbound ports for other Azure or external sources. A few examples are provided here:
| Domain names | Outbound ports | Description | | -- | -- | - |
For some cloud data stores such as Azure SQL Database and Azure Storage, you nee
## Proxy server considerations
-If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase or after it is being registered.
+If your corporate network environment uses a proxy server to access the internet, configure the self-hosted integration runtime to use appropriate proxy settings. You can set the proxy during the initial registration phase or after it's being registered.
:::image type="content" source="media/manage-integration-runtimes/self-hosted-proxy.png" alt-text="Specify the proxy":::
-When configured, the self-hosted integration runtime uses the proxy server to connect to the services which use HTTP or HTTPS protocol. This is why you select **Change link** during initial setup.
+When configured, the self-hosted integration runtime uses the proxy server to connect to the services that use HTTP or HTTPS protocol. This is why you select **Change link** during initial setup.
:::image type="content" source="media/manage-integration-runtimes/set-http-proxy.png" alt-text="Set the proxy":::
The following procedure provides instructions for updating the **diahost.exe.con
</defaultProxy> </system.net> ```
- The proxy tag allows additional properties to specify required settings like `scriptLocation`. See [\<proxy\> Element (Network Settings)](/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings) for syntax.
+ The proxy tag allows other properties to specify required settings like `scriptLocation`. See [\<proxy\> Element (Network Settings)](/dotnet/framework/configure-apps/file-schema/network/proxy-element-network-settings) for syntax.
```xml <proxy autoDetect="true|false|unspecified" bypassonlocal="true|false|unspecified" proxyaddress="uriString" scriptLocation="uriString" usesystemdefault="true|false|unspecified "/>
If you see error messages like the following ones, the likely reason is improper
## Java Runtime Environment Installation
-If you scan Parquet files using the self-hosted integration runtime with Azure Purview, you will need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
+If you scan Parquet files using the self-hosted integration runtime with Azure Purview, you'll need to install either the Java Runtime Environment or OpenJDK on your self-hosted IR machine.
When scanning Parquet files using the self-hosted IR, the service locates the Java runtime by firstly checking the registry *`(SOFTWARE\JavaSoft\Java Runtime Environment\{Current Version}\JavaHome)`* for JRE, if not found, secondly checking system variable *`JAVA_HOME`* for OpenJDK.
purview Quickstart ARM Create Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-ARM-create-azure-purview.md
+
+ Title: 'Quickstart: Create an Azure Purview account using an ARM Template'
+description: This Quickstart describes how to create an Azure Purview account using an ARM Template.
++ Last updated : 04/05/2022+++++
+# Quickstart: Create an Azure Purview account using an ARM template
+
+This quickstart describes the steps to deploy an Azure Purview account using an Azure Resource Manager (ARM) template.
+
+After you have created an Azure Purview account, you can begin registering your data sources and using Azure Purview to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, Azure Purview creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data lineage. Data consumers are able to discover data across your organization and data administrators are able to audit, secure, and ensure the right use of your data.
+
+For more information about Azure Purview, [see our overview page](overview.md). For more information about deploying Azure Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+
+To deploy an Azure Purview account to your subscription using an ARM template, follow the guide below.
++
+## Deploy a custom template
+
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal where you can customize values and deploy.
+The template will deploy an Azure Purview account into a new or existing resource group in your subscription.
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.azurepurview%2Fazure-purview-deployment%2Fazuredeploy.json)
++
+## Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-share-share-storage-account/).
+
+<!-- Below link needs to be updated to Purview quickstart, which I'm currently working on. -->
+
+The following resources are defined in the template:
+
+* Microsoft.Purview/accounts
+
+The template performs the following tasks:
+
+* Creates an Azure Purview account in the specified resource group.
+
+## Open Azure Purview Studio
+
+After your Azure Purview account is created, you'll use the Azure Purview Studio to access and manage it. There are two ways to open Azure Purview Studio:
+
+* Open your Azure Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Azure Purview Studio" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Azure Purview account overview page, with the Azure Purview Studio tile highlighted.":::
+
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Azure Purview account, and sign in to your workspace.
+
+## Get started with your Purview resource
+
+After deployment, the first activities are usually:
+
+* [Create a collection](quickstart-create-collection.md)
+* [Register a resource](azure-purview-connector-overview.md)
+* [Scan the resource](concept-scans-and-ingestion.md)
+
+At this time, these actions can't be taken through an Azure Resource Manager template. Follow the guides above to get started!
+
+## Clean up resources
+
+To clean up the resources deployed in this quickstart, delete the resource group, which deletes all resources in the group.
+You can delete the resources either through the Azure portal, or using the PowerShell script below.
+
+```azurepowershell-interactive
+$resourceGroupName = Read-Host -Prompt "Enter the resource group name"
+Remove-AzResourceGroup -Name $resourceGroupName
+Write-Host "Press [ENTER] to continue..."
+```
+
+## Next steps
+
+In this quickstart, you learned how to create an Azure Purview account and how to access it through the Azure Purview Studio.
+
+Next, you can create a user-assigned managed identity (UAMI) that will enable your new Azure Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
+
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+Follow these next articles to learn how to navigate the Azure Purview Studio, create a collection, and grant access to Azure Purview:
+
+> [!div class="nextstepaction"]
+> [Using the Azure Purview Studio](use-azure-purview-studio.md)
+> [Create a collection](quickstart-create-collection.md)
+> [Add users to your Azure Purview account](catalog-permissions.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
It's important to register the data source in Azure Purview before setting up a
If your database server has a firewall enabled, you'll need to update the firewall to allow access in one of two ways:
-1. Allow Azure connections through the firewall.
-1. Install a Self-Hosted Integration Runtime and give it access through the firewall.
+1. [Allow Azure connections through the firewall](#allow-azure-connections).
+1. [Install a Self-Hosted Integration Runtime and give it access through the firewall](#self-hosted-integration-runtime).
#### Allow Azure Connections
Select your method of authentication from the tabs below for steps to authentica
> [!Note] > Only the server-level principal login (created by the provisioning process) or members of the `loginmanager` database role in the master database can create new logins. It can take about **15 minutes** after granting permission before the Azure Purview account has the appropriate permissions to scan the resource(s).
-1. You'll need a SQL login with at least `db_datareader` permissions to be able to access the information Azure Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign in for Azure SQL Database. You'll need to save the **username** and **password** for the next steps.
+1. You'll need a SQL login with at least `db_datareader` permissions to be able to access the information Azure Purview needs to scan the database. You can follow the instructions in [CREATE LOGIN](/sql/t-sql/statements/create-login-transact-sql?view=azuresqldb-current&preserve-view=true#examples-1) to create a sign-in for Azure SQL Database. You'll need to save the **username** and **password** for the next steps.
1. Navigate to your key vault in the Azure portal.
The service principal needs permission to get metadata for the database, schemas
1. Navigate to your key vault in the Azure portal
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault to add a secret for for Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-key-vault.png" alt-text="Screenshot that shows the key vault to add a secret for Service Principal.":::
1. Select **Settings > Secrets** and select **+ Generate/Import**
The service principal needs permission to get metadata for the database, schemas
1. Then, [create a new credential](manage-credentials.md#create-a-new-credential).
- :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to add a credentials for Service Principal.":::
+ :::image type="content" source="media/register-scan-azure-sql-database/register-scan-azure-sql-db-credentials.png" alt-text="Screenshot that shows the key vault option to add a credential for Service Principal.":::
1. The **Service Principal ID** will be the **Application ID** of your service principal. The **Secret name** will be the name of the secret you created in the previous steps.
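Inside the database itself, granting the service principal read access typically looks like the following sketch (the display name is a placeholder; run it while connected to the database as an Azure AD admin):

```sql
-- Create a contained user for the service principal (placeholder display name)
-- and grant read access to the metadata Azure Purview scans.
CREATE USER [purview-scan-app] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [purview-scan-app];
```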
purview Register Scan Cassandra Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-cassandra-source.md
When setting up scan, you can choose to scan an entire Cassandra instance, or sc
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
-If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it:
-* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717).
- For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
+**If your data store is not publicly accessible** (for example, if it limits access to an on-premises network, a private network, or specific IPs), you need to configure a self-hosted integration runtime to connect to it:
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
## Register
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-db2.md
This article outlines how to register Db2, and how to authenticate and interact
|||||||| | [Yes](#register)| [Yes](#scan)| No | [Yes](#scan) | No | No| [Yes](#lineage)|
-The supported IBM Db2 versions are Db2 for LUW 9.7 to 11.x. Db2 for z/OS (mainframe) and iSeries (AS/400) are not supported now.
+The supported IBM Db2 versions are Db2 for LUW 9.7 to 11.x. Db2 for z/OS (mainframe) and iSeries (AS/400) aren't supported now.
When scanning IBM Db2 source, Azure Purview supports:
When setting up scan, you can choose to scan an entire Db2 database, or scope th
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.12.7984.1.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Manually download a Db2 JDBC driver from [here](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) onto your virtual machine where self-hosted integration runtime is running.
+ * Manually download a Db2 JDBC driver from [here](https://www.ibm.com/support/pages/db2-jdbc-driver-versions-and-downloads) onto your virtual machine where self-hosted integration runtime is running.
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
* The Db2 user must have the CONNECT permission. Azure Purview connects to the syscat tables in IBM Db2 environment when importing metadata.
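As a hedged example, granting that permission on Db2 for LUW might look like this (the user name is a placeholder):

```sql
-- Allow the scan user (placeholder name) to connect so Azure Purview
-- can read metadata from the SYSCAT catalog views.
GRANT CONNECT ON DATABASE TO USER PURVIEW_USER;
```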
The supported authentication type for a Db2 source is **Basic authentication**.
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
1. **Driver location**: Specify the path to the JDBC driver location in your VM where the self-hosted integration runtime is running. This should be the path to a valid JAR folder location.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Erwin Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-erwin-source.md
When setting up scan, you can choose to scan an entire erwin Mart server, or sco
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). > [!IMPORTANT] > Make sure to install the self-hosted integration runtime and the Erwin Data Modeler software on the same machine where erwin Mart instance is running.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
## Register
Follow the steps below to scan erwin Mart servers to automatically identify asse
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up on the VM where erwin Mart instance is running. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up on the VM where erwin Mart instance is running. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
1. Navigate to **Sources**. 1. Select the registered **erwin** Mart.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Google Bigquery Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-google-bigquery-source.md
When setting up scan, you can choose to scan an entire Google BigQuery project,
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download and unzip BigQuery's JDBC driver on the machine where your self-hosted integration runtime is running. You can find the driver [here](https://cloud.google.com/bigquery/providers/simba-drivers).
+ * Download and unzip BigQuery's JDBC driver on the machine where your self-hosted integration runtime is running. You can find the driver [here](https://cloud.google.com/bigquery/providers/simba-drivers).
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
## Register
Follow the steps below to scan a Google BigQuery project to automatically identi
### Create and run scan
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md).
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md).
1. Navigate to **Sources**.
Follow the steps below to scan a Google BigQuery project to automatically identi
* contain C or * equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
1. **Maximum memory available**: Maximum memory (in GB) available on your VM to be used by scanning processes. This is dependent on the size of Google BigQuery project to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
When setting up scan, you can choose to scan an entire Hive metastore database,
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md).
-* Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
+ * Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
-* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 is not supported.
+ * Download the Hive Metastore database's JDBC driver on the machine where your self-hosted integration runtime is running. For example, if the database is *mssql*, download [Microsoft's JDBC driver for SQL Server](/sql/connect/jdbc/download-microsoft-jdbc-driver-for-sql-server). If you scan Azure Databricks's Hive Metastore, download the MariaDB Connector/J version 2.7.5 from [here](https://dlm.mariadb.com/1965742/Connectors/java/connector-java-2.7.5/mariadb-java-client-2.7.5.jar); version 3.0.3 isn't supported.
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the machine. Don't install it in a user account.
## Register
Use the following steps to scan Hive Metastore databases to automatically identi
* Contain C or * Equal D
- Usage of `NOT` and special characters is not acceptable.
+ Usage of `NOT` and special characters isn't acceptable.
1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of Hive Metastore database to be scanned.
purview Register Scan Looker Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-looker-source.md
When setting up scan, you can choose to scan an entire Looker server, or scope t
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it: * Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
## Register
Follow the steps below to scan Looker to automatically identify assets and class
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up on the VM where erwin Mart instance is running. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
To understand more on credentials, refer to the link [here](manage-credentials.md)
- 1. **Project filter** -- Scope your scan by providing a semicolon separated list of Looker projects. This option is used to select looks and dashboards by their parent project.
+ 1. **Project filter** - Scope your scan by providing a semicolon separated list of Looker projects. This option is used to select looks and dashboards by their parent project.
1. **Maximum memory available** (applicable when using self-hosted integration runtime): Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the Looker source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-mysql.md
When setting up scan, you can choose to scan an entire MySQL server, or scope th
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See [Azure Purview Permissions page](catalog-permissions.md) for details.
-If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it:
+**If your data store is not publicly accessible** (for example, if it limits access to an on-premises network, a private network, or specific IPs), you need to configure a self-hosted integration runtime to connect to it:
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
### Required permissions for scan
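The article's permission list is authoritative; as a rough sketch only, a dedicated read-only scan user could be set up like this (names, host, and password are placeholders):

```sql
-- Placeholder user and database names; grant only read access for scanning.
CREATE USER 'purview_scan'@'%' IDENTIFIED BY '<strong-password>';
GRANT SELECT ON your_database.* TO 'purview_scan'@'%';
FLUSH PRIVILEGES;
```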
The supported authentication type for a MySQL source is **Basic authentication**
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
1. **Maximum memory available** (applicable when using self-hosted integration runtime): Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of MySQL source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Oracle Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-oracle-source.md
This article outlines how to register Oracle, and how to authenticate and intera
\* *Besides the lineage on assets within the data source, lineage is also supported if dataset is used as a source/sink in [Data Factory](how-to-link-azure-data-factory.md) or [Synapse pipeline](how-to-lineage-azure-synapse-analytics.md).*
-The supported Oracle server versions are 6i to 19c. Proxy server is not supported when scanning Oracle source.
+The supported Oracle server versions are 6i to 19c. Proxy server isn't supported when scanning Oracle source.
When scanning Oracle source, Azure Purview supports:
When scanning Oracle source, Azure Purview supports:
When setting up scan, you can choose to scan an entire Oracle server, or scope the scan to a subset of schemas matching the given name(s) or name pattern(s).
-Currently, the Oracle service name is not captured in the metadata or hierarchy.
+Currently, the Oracle service name isn't captured in the metadata or hierarchy.
## Prerequisites
Currently, the Oracle service name is not captured in the metadata or hierarchy.
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Manually download an Oracle JDBC driver from [here](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) onto your virtual machine where self-hosted integration runtime is running.
+ * Manually download an Oracle JDBC driver from [here](https://www.oracle.com/database/technologies/appdev/jdbc-downloads.html) onto your virtual machine where self-hosted integration runtime is running.
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
## Register
This section describes how to register Oracle in Azure Purview using the [Azure
Read-only access to system tables is required.
-The user should have permission to create a session as well as role SELECT\_CATALOG\_ROLE assigned. Alternatively, the user may have SELECT permission granted for every individual system table that this connector queries metadata from:
+The user should have permission to create a session and have the SELECT\_CATALOG\_ROLE role assigned. Alternatively, the user may have SELECT permission granted for every individual system table that this connector queries metadata from:
```sql grant create session to [user];
Follow the steps below to scan Oracle to automatically identify assets and class
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
1. **Driver location**: Specify the path to the JDBC driver location in your VM where the self-hosted integration runtime is running. This should be the path to a valid JAR folder location. > [!Note] > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
- 1. **Stored procedure details**: Controls the amount of details imported from stored procedures:
+ 1. **Stored procedure details**: Controls the level of detail imported from stored procedures:
- Signature: The name and parameters of stored procedures. - Code, signature: The name, parameters and code of stored procedures. - Lineage, code, signature: The name, parameters and code of stored procedures, and the data lineage derived from the code.
- - None: Stored procedure details are not included.
+ - None: Stored procedure details aren't included.
1. **Maximum memory available**: Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of Oracle source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-postgresql.md
When setting up scan, you can choose to scan an entire PostgreSQL database, or s
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
-If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from on-premises network, private network or specific IPs, you need to configure a self-hosted integration runtime to connect to it:
+**If your data store is not publicly accessible** (for example, if it limits access to an on-premises network, a private network, or specific IPs), you need to configure a self-hosted integration runtime to connect to it:
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
### Required permissions for scan
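The article's permission list is authoritative; as a rough sketch only, a read-only scan role could be created like this (names, schema, and password are placeholders):

```sql
-- Placeholder role, database, and schema names; read-only access for scanning.
CREATE ROLE purview_scan LOGIN PASSWORD '<strong-password>';
GRANT CONNECT ON DATABASE your_database TO purview_scan;
GRANT USAGE ON SCHEMA public TO purview_scan;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO purview_scan;
```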
The supported authentication type for a PostgreSQL source is **Basic authenticat
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
1. **Maximum memory available** (applicable when using self-hosted integration runtime): Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of PostgreSQL source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-salesforce.md
When setting up scan, you can choose to scan an entire Salesforce organization,
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7953.1.
-You can use the managed Azure integration runtime for scan - make sure to provide the security token to authenticate to Salesforce, learn more from the credential configuration in [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it:
+ You can use the managed Azure integration runtime for scan - make sure to provide the security token to authenticate to Salesforce; learn more from the credential configuration in the [Scan](#scan) section. Otherwise, if you want the scan to be initiated from a Salesforce trusted IP range for your organization, you can configure a self-hosted integration runtime to connect to it:
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Ensure the self-hosted integration runtime machine's IP is within the [trusted IP ranges for your organization](https://help.salesforce.com/s/articleView?id=sf.security_networkaccess.htm&type=5) set on Salesforce.
+ * Ensure the self-hosted integration runtime machine's IP is within the [trusted IP ranges for your organization](https://help.salesforce.com/s/articleView?id=sf.security_networkaccess.htm&type=5) set on Salesforce.
### Required permissions for scan
-In the event that users will be submitting Salesforce Documents, certain security settings must be configured to allow this access on Standard Objects and Custom Objects. To configure permissions:
+If users will be submitting Salesforce Documents, certain security settings must be configured to allow this access on Standard Objects and Custom Objects. To configure permissions:
-- Within Salesforce, click on Setup and then click on Manage Users.-- Under the Manage Users tree click on Profiles.-- Once the Profiles appear on the right, select which Profile you want to edit and click on the Edit link next to the corresponding profile.
+- Within Salesforce, select Setup and then select Manage Users.
+- Under the Manage Users tree select Profiles.
+- Once the Profiles appear on the right, select which Profile you want to edit and select the Edit link next to the corresponding profile.
For Standard Objects, ensure that the "Documents" section has the Read permission selected. For Custom Objects, ensure that the Read permission is selected for each custom object.
The supported authentication type for a Salesforce source is **Consumer key auth
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Sap Bw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-bw.md
This article outlines how to register SAP Business Warehouse (BW), and how to au
|||||||| | [Yes](#register)| [Yes](#scan)| No | No | No | No| No|
-The supported SAP BW versions are 7.3 to 7.5. SAP BW4/HANA is not supported.
+The supported SAP BW versions are 7.3 to 7.5. SAP BW4/HANA isn't supported.
When scanning SAP BW source, Azure Purview supports extracting technical metadata including:
When scanning SAP BW source, Azure Purview supports extracting technical metadat
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.15.8079.1.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on your machine where self-hosted integration runtime is installed. Make sure that you use the correct JCo distribution for your environment, and the **sapjco3.jar** and **sapjco3.dll** files are available.
+ * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on your machine where self-hosted integration runtime is installed. Make sure that you use the correct JCo distribution for your environment, and the **sapjco3.jar** and **sapjco3.dll** files are available.
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
Follow the steps below to scan SAP BW to automatically identify assets and class
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Search Data Catalog](how-to-search-catalog.md) - [Data insights in Azure Purview](concept-insights.md)
purview Register Scan Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sap-hana.md
When setting up scan, you can choose to scan an entire SAP HANA database, or sco
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.13.8013.1.
-* Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
+ * Ensure that [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the machine where the self-hosted integration runtime is running.
-* Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download the SAP HANA JDBC driver ([JAR ngdbc](https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc)) on the machine where your self-hosted integration runtime is running.
+ * Download the SAP HANA JDBC driver ([JAR ngdbc](https://mvnrepository.com/artifact/com.sap.cloud.db.jdbc/ngdbc)) on the machine where your self-hosted integration runtime is running.
- > [!Note]
- > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the machine. Don't put it in a path under user account.
### Required permissions for scan
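The article's permission list is authoritative; as a rough sketch only, a dedicated read-only SAP HANA user could be created like this (user name, password, and schema are placeholders):

```sql
-- Placeholder user, password, and schema; grant read access to the schemas to be scanned.
CREATE USER PURVIEW_SCAN PASSWORD "ChangeMe_1234";
GRANT SELECT ON SCHEMA YOUR_SCHEMA TO PURVIEW_SCAN;
```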
The supported authentication type for a SAP HANA source is **Basic authenticatio
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters aren't acceptable.
- 1. **Driver location**: Specify the path to the JDBC driver location in your machine where self-host integration runtime is running. This should be the path to valid JAR folder location. Do not include the name of the driver in the path.
+ 1. **Driver location**: Specify the path to the JDBC driver location in your machine where self-host integration runtime is running. This should be the path to valid JAR folder location. Don't include the name of the driver in the path.
1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of SAP HANA database to be scanned.
purview Register Scan Sapecc Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-sapecc-source.md
When scanning SAP ECC source, Azure Purview supports:
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). >[!NOTE] >Scanning SAP ECC is a memory-intensive operation; it's recommended to install the Self-hosted Integration Runtime on a machine with at least 128 GB of RAM.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP\'s website and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
+ * Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP\'s website and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
- :::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
+ :::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
-* The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on your virtual machine where self-hosted integration runtime is installed. Make sure that you are using the correct JCo distribution for your environment. For example: on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
+ * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on your virtual machine where self-hosted integration runtime is installed. Make sure that you're using the correct JCo distribution for your environment. For example: on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
-* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You will need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
* STFC_CONNECTION (check connectivity) * RFC_SYSTEM_INFO (check system information)
Follow the steps below to scan SAP ECC to automatically identify assets and clas
### Create and run scan
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**
Follow the steps below to scan SAP ECC to automatically identify assets and clas
1. **JCo library path**: The directory path where the JCo libraries are located.
- 1. **Maximum memory available:** Maximum memory (in GB) available on the Self-hosted Integration Runtime machine to be used by scanning processes. This is dependent on the size of SAP ECC source to be scanned. It's recommended to provide large available memory e.g. 100.
+ 1. **Maximum memory available:** Maximum memory (in GB) available on the Self-hosted Integration Runtime machine to be used by scanning processes. This is dependent on the size of SAP ECC source to be scanned. It's recommended to provide large available memory, for example, 100.
:::image type="content" source="media/register-scan-sapecc-source/scan-sapecc.png" alt-text="scan SAPECC" border="true":::
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Saps4hana Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-saps4hana-source.md
When scanning SAP S/4HANA source, Azure Purview supports:
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). >[!NOTE] >Scanning SAP S/4HANA is a memory-intensive operation; it's recommended to install the Self-hosted Integration Runtime on a machine with at least 128 GB of RAM.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP\'s website and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
+ * Download the 64-bit [SAP Connector for Microsoft .NET 3.0](https://support.sap.com/en/product/connectors/msnet.html) from SAP's website and install it on the self-hosted integration runtime machine. During installation, make sure you select the **Install Assemblies to GAC** option in the **Optional setup steps** window.
- :::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
+ :::image type="content" source="media/register-scan-saps4hana-source/requirement.png" alt-text="pre-requisite" border="true":::
-* The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Hence make sure the Java Connector is available on your virtual machine where self-hosted integration runtime is installed. Make sure that you are using the correct JCo distribution for your environment. For example, on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
+ * The connector reads metadata from SAP using the [SAP Java Connector (JCo)](https://support.sap.com/en/product/connectors/jco.html) 3.0 API. Make sure the Java Connector is available on the virtual machine where the self-hosted integration runtime is installed. Make sure that you're using the correct JCo distribution for your environment. For example, on a Microsoft Windows machine, make sure the sapjco3.jar and sapjco3.dll files are available.
- > [!Note]
- >The driver should be accessible to all accounts in the VM. Do not install it in a user account.
+ > [!Note]
+ >The driver should be accessible to all accounts in the VM. Do not install it in a user account.
-* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You will need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
+* Deploy the metadata extraction ABAP function module on the SAP server by following the steps mentioned in [ABAP functions deployment guide](abap-functions-deployment-guide.md). You'll need an ABAP developer account to create the RFC function module on the SAP server. The user account requires sufficient permissions to connect to the SAP server and execute the following RFC function modules:
* STFC_CONNECTION (check connectivity) * RFC_SYSTEM_INFO (check system information)
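If you want to verify up front that the scan account can reach these function modules, a quick RFC connectivity check outside of Azure Purview can help. The sketch below uses the open-source PyRFC library (not part of the Purview setup) and assumes the SAP NetWeaver RFC SDK is installed on the machine running it; the host, system number, client, and credentials are placeholders.

```python
# Minimal RFC connectivity check for the scan account (sketch, not part of the Purview setup).
from pyrfc import Connection

conn = Connection(
    ashost="sap-host.example.com",  # placeholder: SAP application server host
    sysnr="00",                     # placeholder: system number
    client="100",                   # placeholder: client
    user="SCAN_USER",               # placeholder: account used by the Purview credential
    passwd="<password>",
)

# STFC_CONNECTION echoes the request text if connectivity and authorization are OK.
result = conn.call("STFC_CONNECTION", REQUTEXT="Purview connectivity test")
print(result["ECHOTEXT"])

# RFC_SYSTEM_INFO returns basic system information (field names follow the standard RFCSI structure).
info = conn.call("RFC_SYSTEM_INFO")
print(info["RFCSI_EXPORT"]["RFCSYSID"])

conn.close()
```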
Follow the steps below to scan SAP S/4HANA to automatically identify assets and
### Create and run scan
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources.**
Follow the steps below to scan SAP S/4HANA to automatically identify assets and
1. **JCo library path**: Specify the path to the folder where the JCo libraries are located.
- 1. **Maximum memory available:** Maximum memory (in GB) available on customer's VM to be used by scanning processes. This is dependent on the size of SAP S/4HANA source to be scanned. It's recommended to provide large available memory e.g. 100.
+ 1. **Maximum memory available:** Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the SAP S/4HANA source to be scanned. It's recommended to provide a large amount of available memory, for example, 100 GB.
:::image type="content" source="media/register-scan-saps4hana-source/scan-saps-4-hana.png" alt-text="scan SAP S/4HANA" border="true":::
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-snowflake.md
When setting up scan, you can choose to scan one or more Snowflake database(s) e
* You need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
-If your data store is publicly accessible, you can use the managed Azure integration runtime for scan without additional settings. Otherwise, if your data store limits access from specific IPs, you need to configure a self-hosted integration runtime to connect to it:
+**If your data store isn't publicly accessible** (for example, if it limits access to an on-premises network, a private network, or specific IP addresses), you need to configure a self-hosted integration runtime to connect to it:
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md). The minimal supported Self-hosted Integration Runtime version is 5.11.7971.2.
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase/jdk11-archive-downloads.html) is installed on the machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
### Required permissions for scan
The supported authentication type for a Snowflake source is **Basic authenticati
To create and run a new scan, do the following:
-1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
+1. In the Management Center, select Integration runtimes. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to create a self-hosted integration runtime.
1. Navigate to **Sources**.
To create and run a new scan, do the following:
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable.
+ Usage of NOT and special characters isn't acceptable.
1. **Maximum memory available** (applicable when using self-hosted integration runtime): Maximum memory (in GB) available on the customer's VM to be used by scanning processes. It's dependent on the size of the Snowflake source to be scanned.
Go to the asset -> lineage tab, you can see the asset relationship when applicab
- Check your account identifier in the source registration step. Don't include the `https://` part at the front. - Make sure the warehouse name and database name are in capital case on the scan setup page. - Check your key vault. Make sure there are no typos in the password.-- Check the credential you set up in Azure Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you are trying to scan. See [Required permissions for scan](#required-permissions-for-scan). USE `DESCRIBE USER;` to verify the default role of the user you've specified for Azure Purview.
+- Check the credential you set up in Azure Purview. The user you specify must have a default role with the necessary access rights to both the warehouse and the database you're trying to scan. See [Required permissions for scan](#required-permissions-for-scan). Use `DESCRIBE USER;` to verify the default role of the user you've specified for Azure Purview.
- Use Query History in Snowflake to see if any activity is coming across. - If there's a problem with the account identifier or password, you won't see any activity. - If there's a problem with the default role, you should at least see a `USE WAREHOUSE . . .` statement.
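To run the `DESCRIBE USER` check and review query history outside of Azure Purview, you can use the Snowflake Python connector. This is only a hedged sketch: the account identifier, user names, and database are placeholders, and the property and column names reflect the `DESCRIBE USER` and `QUERY_HISTORY_BY_USER` output as best understood, so verify them against Snowflake's reference.

```python
# Sketch: verify the default role and recent activity for the user configured in Azure Purview.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",            # account identifier, without the https:// prefix
    user="ADMIN_USER",            # placeholder: a user allowed to describe other users
    password="<password>",
    database="<any-database>",    # a current database so INFORMATION_SCHEMA resolves
)
cur = conn.cursor()

# Check the default role of the user referenced by the Azure Purview credential.
cur.execute("DESCRIBE USER PURVIEW_SCAN_USER")
for prop, value, *_ in cur.fetchall():
    if prop.upper() == "DEFAULT_ROLE":   # property name per DESCRIBE USER output
        print("Default role:", value)

# Look for recent scan activity, such as USE WAREHOUSE statements.
cur.execute(
    "SELECT query_text, start_time "
    "FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_USER(USER_NAME => 'PURVIEW_SCAN_USER')) "
    "ORDER BY start_time DESC LIMIT 10"
)
for query_text, start_time in cur.fetchall():
    print(start_time, query_text)

cur.close()
conn.close()
```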
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
purview Register Scan Teradata Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-teradata-source.md
To retrieve data types of view columns, Azure Purview issues a prepare statement
* An active [Azure Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Azure Purview Studio. See our [Azure Purview Permissions page](catalog-permissions.md) for details.
* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [the create and configure a self-hosted integration runtime guide](manage-integration-runtimes.md).
-* Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html) is installed on the virtual machine where the self-hosted integration runtime is installed.
-* Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
+ * Ensure Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](https://www.microsoft.com/download/details.aspx?id=30679).
-* You will have to manually download Teradata's JDBC Driver on your virtual machine where self-hosted integration runtime is running. The executable JAR file can be downloaded from the Teradata [website](https://downloads.teradata.com/).
+ * You'll have to manually download Teradata's JDBC Driver on your virtual machine where self-hosted integration runtime is running. The executable JAR file can be downloaded from the Teradata [website](https://downloads.teradata.com/).
- > [!Note]
- > The driver should be accessible to all accounts in the VM. Please do not install in a user account.
+ > [!Note]
+ > The driver should be accessible to all accounts in the VM. Do not install it in a user account.
## Register
Follow the steps below to scan Teradata to automatically identify assets and cla
### Create and run scan
-1. In the Management Center, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it is not set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime
+1. In the Management Center, select **Integration runtimes**. Make sure a self-hosted integration runtime is set up. If it isn't set up, use the steps mentioned [here](./manage-integration-runtimes.md) to set up a self-hosted integration runtime.
1. Select the **Data Map** tab on the left pane in the [Azure Purview Studio](https://web.purview.azure.com/resource/).
Follow the steps below to scan Teradata to automatically identify assets and cla
* Contain C or * Equal D
- Usage of NOT and special characters are not acceptable
+ Usage of NOT and special characters isn't acceptable.
1. **Driver location**: Specify the path to the JDBC driver location on the VM where the self-hosted integration runtime is running. This should be the path to a valid JAR folder location (a quick sanity check is sketched after this list).
- 1. **Stored procedure details**: Controls the amount of details imported from stored procedures:
+ 1. **Stored procedure details**: Controls the level of detail imported from stored procedures:
- Signature: The name and parameters of stored procedures. - Code, signature: The name, parameters and code of stored procedures. - Lineage, code, signature: The name, parameters and code of stored procedures, and the data lineage derived from the code.
- - None: Stored procedure details are not included.
+ - None: Stored procedure details aren't included.
1. **Maximum memory available:** Maximum memory (in GB) available on the customer's VM to be used by scanning processes. This is dependent on the size of the Teradata source to be scanned.
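As a quick sanity check for the **Driver location** setting referenced above, you can confirm that the folder you plan to point at actually contains the Teradata JDBC JAR before creating the scan. The folder path and the `terajdbc4.jar` file name below are assumptions based on a typical driver download; adjust them to your environment.

```python
# Sketch: confirm the JDBC driver folder exists and contains a JAR before configuring the scan.
from pathlib import Path

driver_dir = Path(r"C:\drivers\teradata-jdbc")  # placeholder: folder you will enter as "Driver location"

jars = sorted(driver_dir.glob("*.jar"))
if not jars:
    raise SystemExit(f"No JAR files found in {driver_dir}; download the Teradata JDBC driver first.")

print("JARs found:", ", ".join(jar.name for jar in jars))
# The Teradata driver JAR is commonly named terajdbc4.jar (assumption; verify against your download).
if not any(jar.name.lower() == "terajdbc4.jar" for jar in jars):
    print("Warning: terajdbc4.jar not found; double-check the driver download.")
```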
Go to the asset -> lineage tab, you can see the asset relationship when applicab
## Next steps
-Now that you have registered your source, follow the below guides to learn more about Azure Purview and your data.
+Now that you've registered your source, follow the below guides to learn more about Azure Purview and your data.
- [Data insights in Azure Purview](concept-insights.md) - [Lineage in Azure Purview](catalog-lineage-user-guide.md)
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
Blob indexers are frequently used for both [AI enrichment](cognitive-search-conc
+ [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), Standard performance (general-purpose v2).
-+ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob Storage include hot, cool, and archive. Only hot and cool can be accessed by search indexers.
++ [Access tiers](../storage/blobs/access-tiers-overview.md) for Blob Storage include Hot, Cool, and Archive. Only Hot and Cool can be accessed by search indexers.
-+ Blobs containing text. If blobs contain binary data or unstructured text, consider adding [AI enrichment](cognitive-search-concept-intro.md) for image and natural language processing. Blob content can't exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
++ Blobs providing text content and metadata. If blobs contain binary content or unstructured text, consider adding [AI enrichment](cognitive-search-concept-intro.md) for image and natural language processing. Blob content can't exceed the [indexer limits](search-limits-quotas-capacity.md#indexer-limits) for your search service tier.
-+ Read permissions on Azure Storage. A "full access" connection string includes a key that grants access to the content, but if you're using Azure roles instead, make sure the [search service managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions.
++ A supported network configuration and data access. At a minimum, you'll need read permissions in Azure Storage. A storage connection string that includes an access key will give you read access to storage content. If instead you're using Azure AD logins and roles, make sure the [search service's managed identity](search-howto-managed-identities-data-sources.md) has **Storage Blob Data Reader** permissions.
-+ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to send REST calls that create the data source, index, and indexer.
+ By default, both search and storage accept requests from public IP addresses. If network security isn't an immediate concern, you can index blob data using just the connection string and read permissions. When you're ready to add network protections, see [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) for guidance about data access.
+++ A REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md) to make the requests described in this article. <a name="SupportedFormats"></a>
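As a concrete example of the kind of REST call this prerequisite refers to, the sketch below creates a blob data source definition using a connection string. The service name, admin API key, storage connection string, and container name are placeholders, and the payload follows the Create Data Source REST API shape; verify it against the current API reference before relying on it. If the call succeeds, the same pattern extends to creating the index and the indexer.

```python
# Sketch: create a blob data source over the Search REST API (placeholders throughout).
import requests

service = "my-search-service"          # placeholder search service name
api_key = "<admin-api-key>"            # placeholder admin key
endpoint = f"https://{service}.search.windows.net/datasources?api-version=2020-06-30"

data_source = {
    "name": "blob-datasource",
    "type": "azureblob",
    "credentials": {
        # Full-access connection string with an account key; swap in a ResourceId
        # string if you're using a managed identity instead.
        "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;"
    },
    "container": {"name": "my-container"},
}

response = requests.post(
    endpoint,
    headers={"Content-Type": "application/json", "api-key": api_key},
    json=data_source,
)
response.raise_for_status()
print(response.json()["name"], "created")
```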
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
Previously updated : 03/22/2022 Last updated : 03/30/2022 # Connect a search service to other Azure resources using a managed identity
You can configure an Azure Cognitive Search service to connect to other Azure re
## Supported scenarios
-Cognitive Search can use a system-assigned or user-assigned managed identity on outbound connections to Azure resources. A system managed identity is indicated when a connection string is the unique resource ID of an Azure AD-aware service or application. A user managed identity is specified through an "identity" property.
+Cognitive Search can use a system-assigned or user-assigned managed identity on outbound connections to Azure resources. A system managed identity is indicated when a connection string is the unique resource ID of an Azure AD-aware service or application. A user-assigned managed identity is specified through an "identity" property.
-| Scenario | System managed identity | User managed identity (preview) |
+A search service uses Azure Storage as an indexer data source and as a data sink for debug sessions, enrichment caching, and knowledge store. For search features that write back to storage, the managed identity needs a contributor role assignment as described in the ["Assign a role"](#assign-a-role) section.
+
+| Scenario | System managed identity | User-assigned managed identity (preview) |
|-|-||
-| [Indexer connections to supported Azure data sources](search-indexer-overview.md) | Yes | Yes |
+| [Indexer connections to supported Azure data sources](search-indexer-overview.md) <sup>1</sup>| Yes | Yes |
| [Azure Key Vault for customer-managed keys](search-security-manage-encryption-keys.md) | Yes | Yes |
-| [Debug sessions (hosted in Azure Storage)](cognitive-search-debug-session.md) | Yes | No |
-| [Enrichment cache (hosted in Azure Storage)](search-howto-incremental-index.md)| Yes <sup>1,</sup> <sup>2</sup>| Yes |
-| [Knowledge Store (hosted in Azure Storage)](knowledge-store-create-rest.md) | Yes <sup>2</sup>| Yes |
+| [Debug sessions (hosted in Azure Storage)](cognitive-search-debug-session.md) <sup>1</sup> | Yes | No |
+| [Enrichment cache (hosted in Azure Storage)](search-howto-incremental-index.md) <sup>1,</sup> <sup>2</sup> | Yes | Yes |
+| [Knowledge Store (hosted in Azure Storage)](knowledge-store-create-rest.md) <sup>1</sup>| Yes | Yes |
| [Custom skills (hosted in Azure Functions or equivalent)](cognitive-search-custom-skill-interface.md) | Yes | Yes |
-<sup>1</sup> The Import data wizard doesn't currently accept a managed identity connection string for enrichment cache, but after the wizard completes, you can update the connection string in indexer JSON definition to specify the managed identity, and then rerun the indexer.
-
-<sup>2</sup> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity if Azure Storage is behind a firewall.
-
-Debug sessions, enrichment cache, and knowledge store are features that write to Blob Storage. Assign a managed identity to the **Storage Blob Data Contributor** role to support these features.
+<sup>1</sup> For connectivity between search and storage, your network security configuration imposes constraints on which type of managed identity you can use. Only a system managed identity can be used for a same-region connection to storage via the trusted service exception or resource instance rule. See [Access to a network-protected storage account](search-indexer-securing-resources.md#access-to-a-network-protected-storage-account) for details.
-Knowledge store will also write to Table Storage. Assign a managed identity to the **Storage Table Data Contributor** role to support table projections.
+<sup>2</sup> One method for specifying an enrichment cache is in the Import data wizard. Currently, the wizard doesn't accept a managed identity connection string for enrichment cache. However, after the wizard completes, you can update the connection string in the indexer JSON definition to specify either a system or user-assigned managed identity, and then rerun the indexer.
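To make footnote 2 concrete, here's a hedged sketch of updating an existing data source so that its connection uses the search service's system-assigned identity instead of an account key. The resource IDs, names, and key are placeholders; a user-assigned identity is referenced through the data source's "identity" property instead, and the exact property shape should be checked in the current REST reference.

```python
# Sketch: point an existing data source at storage via the search service's managed identity.
import requests

service = "my-search-service"          # placeholder
api_key = "<admin-api-key>"            # placeholder
name = "blob-datasource"               # placeholder data source name
url = f"https://{service}.search.windows.net/datasources/{name}?api-version=2020-06-30"

data_source = {
    "name": name,
    "type": "azureblob",
    "credentials": {
        # For a system-assigned identity, the "connection string" is just the
        # storage account resource ID (no keys). Placeholders throughout.
        "connectionString": (
            "ResourceId=/subscriptions/<sub-id>/resourceGroups/<rg>"
            "/providers/Microsoft.Storage/storageAccounts/<storage-account>;"
        )
    },
    "container": {"name": "my-container"},
}

resp = requests.put(
    url,
    headers={"Content-Type": "application/json", "api-key": api_key},
    json=data_source,
)
resp.raise_for_status()
print("Data source updated:", resp.status_code)
```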
## Create a system managed identity
A system-assigned managed identity is unique to your search service and bound to
See [Create or Update Service (Management REST API)](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchcreateorupdateservicewithidentity).
-You can use the Management REST API instead of the portal to assign a user managed identity. Be sure to use the [2021-04-01-preview management API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchcreateorupdateservicewithidentity) for this task.
+You can use the Management REST API instead of the portal to assign a user-assigned managed identity. Be sure to use the [2021-04-01-preview management API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchcreateorupdateservicewithidentity) for this task.
1. Formulate a request to [Create or Update a search service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
You can use the Management REST API instead of the portal to assign a user manag
### [**Azure PowerShell**](#tab/ps-sys)
-See [Create a search service with a system assigned managed identity (Azure PowerShell](search-manage-powershell.md#create-a-service-with-a-system-assigned-managed-identity).
+See [Create a search service with a system-assigned managed identity (Azure PowerShell)](search-manage-powershell.md#create-a-service-with-a-system-assigned-managed-identity).
### [**Azure CLI**](#tab/cli-sys)
-See [Create a search service with a system assigned managed identity (Azure CLI)](search-manage-azure-cli.md#create-a-service-with-a-system-assigned-managed-identity).
+See [Create a search service with a system-assigned managed identity (Azure CLI)](search-manage-azure-cli.md#create-a-service-with-a-system-assigned-managed-identity).
-## Create a user managed identity (preview)
+## Create a user-assigned managed identity (preview)
A user-assigned managed identity is a resource on Azure. It's useful if you need more granularity in role assignments because you can create separate identities for different applications and scenarios. > [!IMPORTANT]
->This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> This feature is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). User-assigned managed identities aren't currently supported for connections to a network-protected storage account. The search request currently requires a public IP address.
### [**Azure portal**](#tab/portal-user)
A user-assigned managed identity is a resource on Azure. It's useful if you need
1. Select **Create** and wait for the resource to finish deploying.
- In the next several steps, you'll assign the user managed identity to your search service.
+ In the next several steps, you'll assign the user-assigned managed identity to your search service.
1. In your search service page, under **Settings**, select **Identity**.
A user-assigned managed identity is a resource on Azure. It's useful if you need
### [**REST API**](#tab/rest-user)
-You can use the Management REST API instead of the portal to assign a user managed identity. Be sure to use the [2021-04-01-preview management API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) for this task.
+You can use the Management REST API instead of the portal to assign a user-assigned managed identity. Be sure to use the [2021-04-01-preview management API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) for this task.
1. Formulate a request to [Create or Update a search service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
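A hedged sketch of that request from Python is shown below. It obtains an ARM token with `DefaultAzureCredential` and assigns a user-assigned identity by its resource ID; the subscription, resource group, service name, location, and SKU are placeholders, and the exact payload requirements should be confirmed against the 2021-04-01-preview reference.

```python
# Sketch: assign a user-assigned managed identity to a search service via the Management REST API.
import requests
from azure.identity import DefaultAzureCredential

subscription = "<subscription-id>"
resource_group = "<resource-group>"
service_name = "<search-service-name>"
identity_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
)

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Search"
    f"/searchServices/{service_name}?api-version=2021-04-01-preview"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "location": "West US 2",       # placeholder; must match the existing service
    "sku": {"name": "basic"},      # placeholder; must match the existing service
    "identity": {
        "type": "UserAssigned",
        "userAssignedIdentities": {identity_id: {}},
    },
}

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print(resp.json().get("identity"))
```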
You can use the Management REST API instead of the portal to assign a user manag
If your Azure resource is behind a firewall, make sure there's an inbound rule that admits requests from your search service.
-+ For same-region connections to Azure Blob Storage or Azure Data Lake Storage Gen2, use the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) to admit requests.
++ For same-region connections to Azure Blob Storage or Azure Data Lake Storage Gen2, use a system managed identity and the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md). Optionally, you can configure a [resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) to admit requests.
-+ For all other resources and connections, [configure an IP firewall rule](search-indexer-howto-access-ip-restricted.md) that admits requests from Search. See [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) for more detail.
++ For all other resources and connections, [configure an IP firewall rule](search-indexer-howto-access-ip-restricted.md) that admits requests from Search. See [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md) for details. ## Assign a role
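For the role assignments described in this article (for example, **Storage Blob Data Reader** for read-only indexing or **Storage Blob Data Contributor** for write-back features), the grant can also be scripted against the Azure role assignments REST API. The sketch below is an assumption-laden outline: the scope, the principal ID of the search service identity, the built-in role GUID, and the API version should all be verified in the portal or the REST reference.

```python
# Sketch: grant the search service's managed identity a role on a storage account.
import uuid
import requests
from azure.identity import DefaultAzureCredential

scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
principal_id = "<object-id-of-search-service-identity>"   # placeholder
role_definition_id = (
    "/subscriptions/<subscription-id>/providers/Microsoft.Authorization"
    "/roleDefinitions/<built-in-role-guid>"                # e.g. the GUID for Storage Blob Data Reader
)

url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization"
    f"/roleAssignments/{uuid.uuid4()}?api-version=2022-04-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

body = {
    "properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": principal_id,
        "principalType": "ServicePrincipal",  # managed identities are service principals in Azure AD
    }
}

resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=body)
resp.raise_for_status()
print("Role assignment created:", resp.json()["name"])
```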
search Search Howto Managed Identities Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-storage.md
Previously updated : 03/10/2022 Last updated : 03/30/2022 # Set up a connection to an Azure Storage account using a managed identity
This article assumes familiarity with indexer concepts and configuration. If you
For a code example in C#, see [Index Data Lake Gen2 using Azure AD](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/data-lake-gen2-acl-indexing/README.md) on GitHub. > [!NOTE]
-> If your indexer has an attached skillset that writes back to Azure Storage (for example, it creates a knowledge store or caches enriched content), a managed identity won't work if the storage account is behind a firewall or has IP restrictions. This is a known limitation that will be lifted when managed identity support for skillset scenarios becomes generally available. The solution is to use a full access connection string instead of a managed identity if Azure Storage is behind a firewall.
+> If storage is network-protected and in the same region as your search service, you must use a system-assigned managed identity and either one of the following network options: [connect as a trusted service](search-indexer-howto-access-trusted-service-exception.md), or [connect using the resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview).
## Prerequisites
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
On behalf of an indexer, a search service will issue outbound calls to an extern
This article explains how to find the IP address of your search service and configure an inbound IP rule on an Azure Storage account. While specific to Azure Storage, this approach also works for other Azure resources that use IP firewall rules for data access, such as Cosmos DB and Azure SQL.
-## Prerequisites
-
-The storage account and the search service must be in different regions. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview).
+> [!NOTE]
+> A storage account and your search service must be in different regions if you want to define IP firewall rules. If your setup doesn't permit this, try the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) or [resource instance rule](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) instead.
## Get a search service IP address
search Search Indexer Howto Access Trusted Service Exception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-trusted-service-exception.md
Previously updated : 03/10/2022 Last updated : 03/30/2022 # Make indexer connections to Azure Storage as a trusted service
-Indexers in an Azure Cognitive Search service that access blob data in Azure Storage accounts can make use of the [trusted service exception](../storage/common/storage-network-security.md#exceptions) capability to securely access data. This mechanism offers customers who are unable to grant [indexer access using IP firewall rules](search-indexer-howto-access-ip-restricted.md) a simple, secure, and free alternative for accessing data in storage accounts.
+In Azure Cognitive Search, indexers that access Azure blobs can use the [trusted service exception](../storage/common/storage-network-security.md#exceptions) to securely access data. This mechanism offers customers who are unable to grant [indexer access using IP firewall rules](search-indexer-howto-access-ip-restricted.md) a simple, secure, and free alternative for accessing data in storage accounts.
-## Prerequisites
+## Prerequisites
-+ A search service with a [system-assigned managed identity](search-howto-managed-identities-data-sources.md).
++ A search service with a [**system-assigned managed identity**](search-howto-managed-identities-data-sources.md).
-+ Azure Storage with the **Allow trusted Microsoft services to access this storage account** option.
++ A storage account with the **Allow trusted Microsoft services to access this storage account** network option.
-+ Content in Azure Blob Storage or Azure Data Lake Storage Gen2 (ADLS Gen2) that you want to index.
++ Content in Azure Blob Storage or Azure Data Lake Storage Gen2 (ADLS Gen2) that you want to index or enrich.+++ Optionally, containers or tables in Azure Storage for AI enrichment write-back operations, such as creating a knowledge store, debug session, or enrichment cache.+++ An Azure role assignment. A system managed identity is an Azure AD login. It needs either a **Storage Blob Data Reader** or **Storage Blob Data Contributor** role assignment, depending on whether write access is needed. > [!NOTE]
-> This capability is limited to blobs and ADLS Gen2 on Azure Storage. The trusted service exception is not supported for indexer connections to Azure Table Storage and Azure File Storage. It's also not currently supported for indexers that invoke skillsets that write to Azure Storage (knowledge store, enrichment cache, or debug sessions).
+> In Cognitive Search, a trusted service connection is limited to blobs and ADLS Gen2 on Azure Storage. It's unsupported for indexer connections to Azure Table Storage and Azure File Storage.
+>
+> A trusted service connection must use a system managed identity. A user-assigned managed identity isn't currently supported for this scenario.
## Check service identity 1. [Sign in to Azure portal](https://portal.azure.com) and [find your search service](https://ms.portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-1. On the **Identity** page, make sure that a system assigned identity is enabled. User-assigned managed identities, currently in preview, won't work for a trusted service connection.
+1. On the **Identity** page, make sure that a system-assigned identity is enabled. Remember that user-assigned managed identities, currently in preview, won't work for a trusted service connection.
:::image type="content" source="media/search-managed-identities/system-assigned-identity-object-id.png" alt-text="Screenshot of a system identity object identifier." border="true":::
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
Previously updated : 03/10/2022 Last updated : 03/30/2022 # Indexer access to content protected by Azure network security features Azure Cognitive Search indexers can make outbound calls to various Azure resources during execution. This article explains the concepts behind indexer access to content that's protected by IP firewalls, private endpoints, or other Azure network-level security mechanisms.
-An indexer makes outbound calls in two situations:
+## Resources accessed by indexers
+
+An indexer makes outbound calls in three situations:
- Connecting to external data sources during indexing - Connecting to external, encapsulated code through a skillset
+- Connecting to Azure Storage during skillset execution to cache enrichments, save debug session state, or write to a knowledge store
All possible resource types that an indexer might access in a typical run are listed in the table below. | Resource | Purpose within indexer run | | | |
-| Azure Storage (blobs, tables, ADLS Gen 2) | Data source |
-| Azure Storage (blobs, tables) | Skillsets (caching enriched documents, and storing knowledge store projections) |
+| Azure Storage (blobs, ADLS Gen 2, files, tables) | Data source |
+| Azure Storage (blobs, tables) | Skillsets (caching enrichments, debug sessions, knowledge store projections) |
| Azure Cosmos DB (various APIs) | Data source | | Azure SQL Database | Data source | | SQL Server on Azure virtual machines | Data source | | SQL Managed Instance | Data source |
-| Azure Functions | Attached to a skillset and used to host for custom web api skills |
-| Cognitive Services | Attached to a skillset and used to bill enrichment beyond the 20 free documents limit |
+| Azure Functions | Attached to a skillset and used to host custom web API skills |
> [!NOTE]
-> A Cognitive Service resource attached to a skillset is used for billing, based on the enrichments performed and written into the search index or a knowledge store. It isn't used for accessing the Cognitive Services APIs. Access from an indexer's enrichment pipeline to Cognitive Services APIs occurs via an internal secure communication channel, where data is strongly encrypted in transit and is never stored at rest.
+> An indexer also connects to Cognitive Services for built-in skills. However, that connection is made over the internal network and isn't subject to any network provisions under your control.
Your Azure resources could be protected using any number of the network isolation mechanisms offered by Azure. Depending on the resource and region, Cognitive Search indexers can make outbound connections through IP firewalls and private endpoints, subject to the limitations indicated in the following table.
-| Resource | IP Restriction | Private endpoint |
+| Resource | IP restriction | Private endpoint |
| | | - |
-| Azure Storage for text-based indexing (blobs, tables, ADLS Gen 2) | Supported only if the storage account and search service are in different regions. | Supported |
-| Azure Storage for AI enrichment (caching, knowledge store, debug sessions) | Supported only if the storage account and search service are in different regions, and when the search service connects using a full access connection string. Managed identity is not currently supported for write back operations to an IP restricted storage account. | Unsupported |
+| Azure Storage for text-based indexing (blobs, ADLS Gen 2, files, tables) | Supported only if the storage account and search service are in different regions. | Supported |
+| Azure Storage for AI enrichment (caching, debug sessions, knowledge store) | Supported only if the storage account and search service are in different regions. | Unsupported |
| Azure Cosmos DB - SQL API | Supported | Supported | | Azure Cosmos DB - MongoDB API | Supported | Unsupported | | Azure Cosmos DB - Gremlin API | Supported | Unsupported |
Your Azure resources could be protected using any number of the network isolatio
| SQL Managed Instance | Supported | N/A | | Azure Functions | Supported | Supported, only for certain tiers of Azure functions |
-> [!NOTE]
-> In addition to the options listed above, for network-secured Azure Storage accounts, you can make Azure Cognitive Search a [trusted Microsoft service](../storage/common/storage-network-security.md#trusted-microsoft-services). This means that a specific search service can bypass virtual network or IP restrictions on the storage account and can access data in the storage account, if the appropriate role-based access control is enabled on the storage account. For more information, see [Indexer connections using the trusted service exception](search-indexer-howto-access-trusted-service-exception.md). This option can be utilized instead of the IP restriction route, in case either the storage account or the search service can't be moved to a different region.
+### Access to a network-protected storage account
+
+A search service stores indexes and synonym lists. For other features that require storage, Cognitive Search takes a dependency on Azure Storage. Enrichment caching, debug sessions, and knowledge stores fall into this category. The location of each service, and any network protections in place for storage, will determine your data access strategy.
+
+#### Same-region services
+
+In Azure Storage, access through a firewall requires that the request originates from a different region. If Azure Storage and Azure Cognitive Search are in the same region, you can bypass the IP restrictions on the storage account by accessing data under the system identity of the search service.
+
+There are two options for supporting data access using the system identity:
+
+- Configure search to run as a [trusted service](search-indexer-howto-access-trusted-service-exception.md) and use the [trusted service exception](../storage/common/storage-network-security.md#trusted-access-based-on-a-managed-identity) in Azure Storage.
+
+- Configure a [resource instance rule (preview)](../storage/common/storage-network-security.md#grant-access-from-azure-resource-instances-preview) in Azure Storage that admits inbound requests from an Azure resource.
+
+The above options depend on Azure Active Directory for authentication, which means that the connection must be made with an Azure AD login. Currently, only a Cognitive Search [system-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity) is supported for same-region connections through a firewall.
+
+#### Services in different regions
+
+When search and storage are in different regions, you can use the previously mentioned options or set up IP rules that admit requests from your service. Depending on the workload, you might need to set up rules for multiple execution environments as described in the next section.
## Indexer execution environment
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum limits on storage, workloads, and quantities of indexes and other object
| Resource | Free | Basic&nbsp;<sup>1</sup> | S1 | S2 | S3 | S3&nbsp;HD | L1 | L2 | | -- | - | - | | | | | | | | Maximum indexes |3 |5 or 15 |50 |200 |200 |1000 per partition or 3000 per service |10 |10 |
-| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |3000 |3000 |3000 |1000 |1000 |1000 |
+| Maximum simple fields per index&nbsp;<sup>2</sup> |1000 |100 |1000 |1000 |1000 |1000 |1000 |1000 |
| Maximum complex collections per index |40 |40 |40 |40 |40 |40 |40 |40 | | Maximum elements across all complex collections per document&nbsp;<sup>3</sup> |3000 |3000 |3000 |3000 |3000 |3000 |3000 |3000 | | Maximum depth of complex fields |10 |10 |10 |10 |10 |10 |10 |10 |
Maximum limits on storage, workloads, and quantities of indexes and other object
| Maximum [scoring profiles](/rest/api/searchservice/add-scoring-profiles-to-a-search-index) per index |100 |100 |100 |100 |100 |100 |100 |100 | | Maximum functions per profile |8 |8 |8 |8 |8 |8 |8 |8 |
-<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index. You might find some variation in maximum limits for Basic if your service is provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across service tiers in any region.
+<sup>1</sup> Basic services created before December 2017 have lower limits (5 instead of 15) on indexes. Basic tier is the only SKU with a lower limit of 100 fields per index.
-<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25. Indexes with a large fields collection can be slow. Limit fields to just those you need, and run indexing and query test to ensure performance is acceptable.
+<sup>2</sup> The upper limit on fields includes both first-level fields and nested subfields in a complex collection. For example, if an index contains 15 fields and has two complex collections with 5 subfields each, the field count of your index is 25 (15 + 2 × 5). Indexes with a very large fields collection can be slow. [Limit fields and attributes](search-what-is-an-index.md#physical-structure-and-size) to just those you need, and run indexing and query tests to ensure performance is acceptable.
<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
+You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across equivalent service tiers in any region.
+ <a name="document-limits"></a> ## Document limits
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Once a request is admitted, it must still undergo authentication and authorizati
+ [Azure AD authentication (preview)](search-security-rbac.md) establishes the caller (and not the request) as the authenticated identity. An Azure role assignment determines the allowed operation.
-Outbound requests made by an indexer are subject to the authentication protocols supported by the external service. A search service can be made a trusted service on Azure, connecting to other services using a system or user managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
+Outbound requests made by an indexer are subject to the authentication protocols supported by the external service. A search service can be made a trusted service on Azure, connecting to other services using a system or user-assigned managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
## Authorization
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-an-index.md
Previously updated : 11/12/2021 Last updated : 04/05/2022 # Indexes in Azure Cognitive Search
Although you can add new fields at any time, existing field definitions are lock
In Azure Cognitive Search, the physical structure of an index is largely an internal implementation. You can access its schema, query its content, monitor its size, and manage capacity, but the clusters themselves (indices, [shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards), and other files and folders) are managed internally by Microsoft.
+You can monitor index size in the Indexes tab in the Azure portal, or by issuing a [GET INDEX request](/rest/api/searchservice/get-index) against your search service. You can also issue a [Service Statistics request](/rest/api/searchservice/get-service-statistics) and check the value of storage size.
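Those two monitoring calls look roughly like the sketch below. The service name, index name, and admin key are placeholders; the per-index statistics endpoint is used here for the size check, and the property names reflect the Get Index Statistics and Service Statistics responses, so confirm them against the REST reference.

```python
# Sketch: check index and service storage size over the Search REST API (placeholders throughout).
import requests

service = "my-search-service"
index_name = "hotels-sample-index"
headers = {"api-key": "<admin-api-key>"}

# Per-index statistics: document count and storage size in bytes.
stats = requests.get(
    f"https://{service}.search.windows.net/indexes/{index_name}/stats?api-version=2020-06-30",
    headers=headers,
).json()
print("Documents:", stats["documentCount"], "Storage bytes:", stats["storageSize"])

# Service-wide statistics, including overall storage usage against the quota.
service_stats = requests.get(
    f"https://{service}.search.windows.net/servicestats?api-version=2020-06-30",
    headers=headers,
).json()
print("Service storage used:", service_stats["counters"]["storageSize"]["usage"])
```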
+ The size of an index is determined by: + Quantity and composition of your documents
-+ Index configuration (specifically, whether you include suggesters)
+ Attributes on individual fields++ Index configuration (specifically, whether you include suggesters)
-You can monitor index size in the Indexes tab in the Azure portal, or by issuing a [GET INDEX request](/rest/api/searchservice/get-index) against your search service.
-
-### Factors influencing index size
-
-Document composition and quantity will be determined by what you choose to import. Remember that a search index should only contain searchable content. If source documents include binary fields, you would generally omit those fields from the index schema (unless you are using AI enrichment to crack and analyze the content to create text searchable information.)
+Document composition and quantity are determined by what you choose to import. Remember that a search index should only contain searchable content. If source data includes binary fields, omit those fields unless you're using AI enrichment to crack and analyze the content to create text-searchable information.
-Index configuration can include other components besides documents, such as suggesters, customer analyzers, scoring profiles, CORS settings, and encryption key information. From the above list, the only component that has the potential for impacting index size is suggesters. [**Suggesters**](index-add-suggesters.md) are constructs that support type-ahead or autocomplete queries. As such, when you include a suggester, the indexing process will create the data structures necessary for verbatim character matches. Suggesters are implemented at the field level, so choose only those fields that are reasonable for type-ahead.
+Field attributes determine behaviors. To support those behaviors, the indexing process creates the necessary data structures. For example, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indices for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings. The example in the next section shows variations in index size based on the selected attributes.
-Field attributes are the third consideration of index size. Attributes determine behaviors. To support those behaviors, the indexing process will create the supporting data structures. For example, "searchable" invokes [full text search](search-lucene-query-architecture.md), which scans inverted indices for the tokenized term. In contrast, a "filterable" or "sortable" attribute supports iteration over unmodified strings.
+[**Suggesters**](index-add-suggesters.md) are constructs that support type-ahead or autocomplete queries. As such, when you include a suggester, the indexing process will create the data structures necessary for verbatim character matches. Suggesters are implemented at the field level, so choose only those fields that are reasonable for type-ahead.
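To show how attributes and a suggester appear in an index definition, here's a minimal schema sketch. The field names are invented for illustration; enable only the attributes you need, and keep the suggester's `sourceFields` limited to fields that make sense for type-ahead.

```python
# Sketch: a minimal index schema showing per-field attributes and a suggester.
index_definition = {
    "name": "hotels-example",
    "fields": [
        {"name": "hotelId", "type": "Edm.String", "key": True,
         "searchable": False, "filterable": True, "sortable": False, "facetable": False},
        {"name": "hotelName", "type": "Edm.String",
         "searchable": True, "filterable": False, "sortable": True, "facetable": False},
        {"name": "category", "type": "Edm.String",
         "searchable": True, "filterable": True, "sortable": False, "facetable": True},
    ],
    "suggesters": [
        {
            "name": "sg",
            "searchMode": "analyzingInfixMatching",  # the only supported mode
            "sourceFields": ["hotelName"],           # keep this list small; each field adds storage
        }
    ],
}
```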
### Example demonstrating the storage implications of attributes and suggesters
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
This article explains what Microsoft Sentinel automation rules are, and how to use them to implement your Security Orchestration, Automation and Response (SOAR) operations, increasing your SOC's effectiveness and saving you time and resources.
-> [!IMPORTANT]
->
-> - The **automation rules** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## What are automation rules?
-Automation rules are a new concept in Microsoft Sentinel. This feature allows users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
+Automation rules are a way to centrally manage the automation of incident handling, allowing you to perform simple automation tasks without using playbooks. For example, automation rules allow you to automatically assign incidents to the proper personnel, tag incidents to classify them, and change the status of incidents and close them. Automation rules can also automate responses for multiple analytics rules at once, control the order of actions that are executed, and run playbooks for those cases where more complex automation tasks are necessary. In short, automation rules streamline the use of automation in Microsoft Sentinel, enabling you to simplify complex workflows for your incident orchestration processes.
## Components
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-responses-with-playbooks.md
SIEM/SOC teams are typically inundated with security alerts and incidents on a r
Many, if not most, of these alerts and incidents conform to recurring patterns that can be addressed by specific and defined sets of remediation actions.
-A playbook is a collection of these remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help [**automate and orchestrate your threat response**](tutorial-respond-threats-playbook.md); it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
+A playbook is a collection of these remediation actions that can be run from Microsoft Sentinel as a routine. A playbook can help [**automate and orchestrate your threat response**](tutorial-respond-threats-playbook.md); it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an [automation rule](automate-incident-handling-with-automation-rules.md), respectively.
For example, if an account and machine are compromised, a playbook can isolate the machine from the network and block the account by the time the SOC team is notified of the incident.
Azure Logic Apps communicates with other systems and services using connectors.
- [Alert trigger](/connectors/azuresentinel/#triggers): the playbook receives the alert as its input. - [Incident trigger](/connectors/azuresentinel/#triggers): the playbook receives the incident as its input, along with all its included alerts and entities.
- > [!IMPORTANT]
- >
- > The **incident trigger** feature for playbooks is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- - **Actions:** Actions are all the steps that happen after the trigger. They can be arranged sequentially, in parallel, or in a matrix of complex conditions. - **Dynamic fields:** Temporary fields, determined by the output schema of triggers and actions and populated by their actual output, that can be used in the actions that follow.
sentinel Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automation.md
Microsoft Sentinel, in addition to being a Security Information and Event Manage
## Automation rules
-Automation rules are a new concept in Microsoft Sentinel. This feature allows users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
+Automation rules (now generally available!) allow users to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md).
-> [!IMPORTANT]
->
-> - The **automation rules** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Playbooks A playbook is a collection of response and remediation actions and logic that can be run from Microsoft Sentinel as a routine. A playbook can help automate and orchestrate your threat response, it can integrate with other systems both internal and external, and it can be set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively. It can also be run manually on-demand, in response to alerts, from the incidents page.
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
This tutorial shows you how to use playbooks together with automation rules to a
## What are automation rules and playbooks?
-Automation rules help you triage incidents in Microsoft Sentinel. You can use them to automatically assign incidents to the right personnel, close noisy incidents or known [false positives](false-positives.md), change their severity, and add tags. They are also the mechanism by which you can run playbooks in response to incidents.
+[Automation rules](automate-incident-handling-with-automation-rules.md) help you triage incidents in Microsoft Sentinel. You can use them to automatically assign incidents to the right personnel, close noisy incidents or known [false positives](false-positives.md), change their severity, and add tags. They are also the mechanism by which you can run playbooks in response to incidents.
Playbooks are collections of procedures that can be run from Microsoft Sentinel in response to an alert or incident. A playbook can help automate and orchestrate your response, and can be set to run automatically when specific alerts or incidents are generated, by being attached to an analytics rule or an automation rule, respectively. It can also be run manually on-demand.
You can also choose to run a playbook manually on-demand, as a response to a sel
Get a more complete and detailed introduction to automating threat response using [automation rules](automate-incident-handling-with-automation-rules.md) and [playbooks](automate-responses-with-playbooks.md) in Microsoft Sentinel.
-> [!IMPORTANT]
->
-> - **Automation rules**, and the use of the **incident trigger** for playbooks, are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- ## Create a playbook Follow these steps to create a new playbook in Microsoft Sentinel:
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## March 2022
+- [Automation rules now generally available](#automation-rules-now-generally-available)
- [Create a large watchlist from file in Azure Storage (public preview)](#create-a-large-watchlist-from-file-in-azure-storage-public-preview)
+### Automation rules now generally available
+
+Automation rules are now generally available (GA) in Microsoft Sentinel.
+
+[Automation rules](automate-incident-handling-with-automation-rules.md) allow users to centrally manage the automation of incident handling. They allow you to assign playbooks to incidents, automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules streamline automation use in Microsoft Sentinel and enable you to simplify complex workflows for your incident orchestration processes.
+ ### Create a large watchlist from file in Azure Storage (public preview) Create a watchlist from a large file that's up to 500 MB in size by uploading the file to your Azure Storage account. When you add the watchlist to your workspace, you provide a shared access signature URL. Microsoft Sentinel uses the shared access signature URL to retrieve the watchlist data from Azure Storage.
For more information, see:
## September 2021 - [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)- - [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation) - [Azure Storage account connector changes](#azure-storage-account-connector-changes)
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Restricted Regions reserved for in-country disaster recovery |Switzerland West r
>[!NOTE] >
+> - To protect your VMs from or to any of the Restricted Regions, raise a request [here](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process) to get your subscription allowlisted.
> - For **Brazil South**, you can replicate and fail over to these regions: Brazil Southeast, South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US. > - Brazil South can only be used as a source region from which VMs can replicate using Site Recovery. It can't act as a target region. Note that if you fail over from Brazil South as a source region to a target, failback to Brazil South from the target region is supported. Brazil Southeast can only be used as a target region. > - If the region in which you want to create a vault doesn't show, make sure your subscription has access to create resources in that region.
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
To define a lifecycle management policy with an Azure Resource Manager template,
A lifecycle management policy must be read or written in full. Partial updates are not supported. +
+> [!NOTE]
+> Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions.
+ > [!NOTE] > If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Bl
> > To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
+## Client support
+
+### Known supported clients
+
+- OpenSSH 7.4+
+- WinSCP 5.17.10+
+- PuTTY 0.74+
+- FileZilla 3.53.0+
+- SSH.NET 2020.0.0+
+- libssh 1.8.2+
+- Cyberduck 7.8.2+
+- Maverick Legacy 1.7.15+
+
+### Known unsupported clients
+
+- SSH.NET 2016.1.0
+- libssh2 1.7.0
+- paramiko 1.16.0
+- AsyncSSH 2.1.0
+- SSH Go
+
+> [!NOTE]
+> The client support lists above are not exhaustive and may change over time.
+ ## Authentication and authorization - _Local users_ is the only form of identity management that is currently supported for the SFTP endpoint.
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
-| Availability for read requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) for GRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br /><br />At least 99.99% (99.9% for cool access tier) for RA-GZRS |
-| Availability for write requests | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool access tier) |
+| Availability for read requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) for GRS<br /><br />At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GRS | At least 99.9% (99% for Cool or Archive access tiers) for GZRS<br /><br />At least 99.99% (99.9% for Cool or Archive access tiers) for RA-GZRS |
+| Availability for write requests | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) | At least 99.9% (99% for Cool or Archive access tiers) |
| Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
+For more information, see the [SLA for Storage Accounts](/support/legal/sla/storage/v1_5/).
+ ### Durability and availability by outage scenario The following table indicates whether your data is durable and available in a given scenario, depending on which type of redundancy is in effect for your storage account:
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 04/13/2021 Last updated : 04/05/2022
Azure File Sync is supported with the following versions of Windows Server:
| Version | Supported SKUs | Supported deployment options | ||-||
+| Windows Server 2022 | Azure, Datacenter, Standard, and IoT | Full and Core |
| Windows Server 2019 | Datacenter, Standard, and IoT | Full and Core | | Windows Server 2016 | Datacenter, Standard, and Storage Server | Full and Core | | Windows Server 2012 R2 | Datacenter, Standard, and Storage Server | Full and Core |
storage Storage Files Migration Storsimple 8000 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md
Migrations to Azure file shares from StorSimple volumes via migration jobs in a
* **Network egress:** Your StorSimple files live in a storage account within a specific Azure region. If you provision the Azure file shares you migrate into a storage account that's located in the same Azure region, no egress cost will occur. You can move your files to a storage account in a different region as part of this migration. In that case, egress costs will apply to you. * **Azure file share transactions:** When files are copied into an Azure file share (as part of a migration or outside of one), transaction costs apply as files and metadata are being written. As a best practice, start your Azure file share on the transaction optimized tier during the migration. Switch to your desired tier after the migration is finished. The following phases will call this out at the appropriate point. * **Change an Azure file share tier:** Changing the tier of an Azure file share costs transactions. In most cases, it will be more cost efficient to follow the advice from the previous point.
-* **Storage cost:** When this migration starts copying files into an Azure file share, Azure Files storage is consumed and billed. Migrated backups will become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
+* **Storage cost:** When this migration starts copying files into an Azure file share, storage is consumed and billed. Migrated backups will become [Azure file share snapshots](storage-snapshots-files.md). File share snapshots only consume storage capacity for the differences they contain.
* **StorSimple:** Until you have a chance to deprovision the StorSimple devices and storage accounts, StorSimple cost for storage, backups, and appliances will continue to occur. ### Direct-share-access vs. Azure File Sync
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
Title: Quickstart for creating and using Azure file shares
-description: See how to create and use Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell module. Create a storage account, create an Azure file share, and use your Azure file share.
+description: Learn how to create and use Azure file shares with the Azure portal, Azure CLI, or Azure PowerShell. Create a storage account, create an SMB Azure file share, and use your Azure file share.
Previously updated : 04/04/2022 Last updated : 04/05/2022 ms.devlang: azurecli
-#Customer intent: As a < type of user >, I want < what? > so that < why? >.
+#Customer intent: As an IT admin new to Azure Files, I want to try out Azure Files so I can determine whether I want to subscribe to the service.
# Quickstart: Create and use an Azure file share
If you'd like to install and use PowerShell locally, this guide requires the Azu
### PowerShell - Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. If you don't already have an Azure resource group, you can create a new one with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. You need a resource group to create a storage account.
+A resource group is a logical container into which Azure resources are deployed and managed. If you don't already have an Azure resource group, create a new one with the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet. You need a resource group to create a storage account.
The following example creates a resource group named *myResourceGroup* in the West US 2 region:
New-AzResourceGroup `
A storage account is a shared pool of storage you can use to deploy Azure file shares.
-This example creates a storage account using the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet. The storage account is named *mystorageaccount\<random number>* and a reference to that storage account is stored in the variable **$storageAcct**. Storage account names must be unique, so use `Get-Random` to append a number to the name to make it unique.
+This example creates a storage account using the [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet. The storage account is named *mystorageaccount\<random number>* and a reference to that storage account is stored in the variable **$storageAcct**. Storage account names must be unique, so use `Get-Random` to append a random number to the name to make it unique.
```azurepowershell-interactive $storageAccountName = "mystorageacct$(Get-Random)"
az group create \
### CLI - Create a storage account A storage account is a shared pool of storage in which you can deploy Azure file shares.
-The following example creates a storage account using the [az storage account create](/cli/azure/storage/account) command. Storage account names must be unique, so use `$RANDOM` to append a number to the name to make it unique.
+The following example creates a storage account using the [az storage account create](/cli/azure/storage/account) command. Storage account names must be unique, so use `$RANDOM` to append a random number to the name to make it unique.
```azurecli-interactive export storageAccountName="mystorageacct$RANDOM"
After uploading the file, you can use [Get-AzStorageFile](/powershell/module/Az.
Get-AzStorageFile ` -Context $storageAcct.Context ` -ShareName $shareName `
- -Path "myDirectory\"
+ -Path "myDirectory\" | Get-AzStorageFile
```
storage Storage Python How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-python-how-to-use-file-storage.md
Learn the basics of using Python to develop apps or services that use Azure File
- Create file share backups by using snapshots > [!NOTE]
-> Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share using the standard Python I/O classes and functions. This article will describe how to write apps that use the Azure Files Storage Python SDK, which uses the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api) to talk to Azure Files.
+> Because Azure Files may be accessed over SMB, it is possible to write simple applications that access the Azure file share using the standard Python I/O classes and functions. This article will describe how to write apps that use the Azure Storage SDK for Python, which uses the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api) to talk to Azure Files.
## Applies to | File share type | SMB | NFS |
synapse-analytics Machine Learning Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/machine-learning-ai.md
This article highlights Microsoft partners with machine learning and artificial
| - | -- | -- | | ![Dataiku](./media/machine-learning-and-ai/dataiku-logo.png) |**Dataiku**<br>Dataiku is the centralized data platform that moves businesses along their data journey from analytics at scale to Enterprise AI, powering self-service analytics while also ensuring the operationalization of machine learning models in production. |[Product page](https://www.dataiku.com/partners/microsoft/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/dataiku.dataiku-data-science-studio)<br> | | ![MATLAB](./media/machine-learning-and-ai/mathworks-logo.png) |**Matlab**<br>MATLAB® is a programming platform designed for engineers and scientists. It combines a desktop environment tuned for iterative analysis and design processes with a programming language that expresses matrix and array mathematics directly. Millions worldwide use MATLAB for a range of applications, including machine learning, deep learning, signal and image processing, control systems, and computational finance. |[Product page](https://www.mathworks.com/products/database.html)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/mathworks-inc.matlab-byol?tab=Overview)<br> |
-| ![Qubole](./media/data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/) | ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
+| ![Qubole](./media/data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/) |
+| ![SAS](./media/business-intelligence/sas-logo.jpg) |**SAS® Viya®**<br>SAS® Viya® is an AI, analytic, and data management solution running on a scalable, cloud-native architecture. It enables you to operationalize insights, empowering everyone – from data scientists to business users – to collaborate and realize innovative results faster. Using open source or SAS models, SAS® Viya® can be accessed through APIs or interactive interfaces to transform raw data into actions. |[Product page](https://www.sas.com/microsoft)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/sas-institute-560503.sas-viya-saas?tab=Overview)<br> |
## Next steps To learn more about other partners, see [Business Intelligence partners](business-intelligence.md), [Data Integration partners](data-integration.md), and [Data Management partners](data-management.md).-----
synapse-analytics Synapse Spark Sql Pool Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-spark-sql-pool-import-export.md
Title: Azure Synapse Dedicated SQL Pool Connector for Apache Spark
-description: Azure Synapse Dedicated SQL Pool Connector for Apache Spark to move data between the Synapse Serverless Spark Pool and the Synapse Dedicated SQL Pool.
+description: This article discusses the Azure Synapse Dedicated SQL Pool Connector for Apache Spark. The connector is used to move data between a serverless Spark pool and Azure Synapse Dedicated SQL Pool.
# Azure Synapse Dedicated SQL Pool Connector for Apache Spark
+This article discusses the Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics. The connector is used to move data between the Apache Spark runtime (serverless Spark pool) and Azure Synapse Dedicated SQL Pool.
+ ## Introduction
-The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large data sets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [Dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md). The connector is implemented using `Scala` language. The connector is shipped as a default library within Azure Synapse environment - workspace Notebook and Serverless Spark Pool runtime. To use the Connector with other notebook language choices, use the Spark magic command - `%%spark`.
+The Azure Synapse Dedicated SQL Pool Connector for Apache Spark in Azure Synapse Analytics enables efficient transfer of large datasets between the [Apache Spark runtime](../../synapse-analytics/spark/apache-spark-overview.md) and the [dedicated SQL pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md).
+
+The connector is implemented by using the `Scala` language. The connector is shipped as a default library within the Azure Synapse environment that consists of a workspace notebook and the serverless Spark pool runtime. To use the connector with other notebook language choices, use the Spark magic command `%%spark`.
-At a high-level, the connector provides the following capabilities:
+At a high level, the connector provides the following capabilities (a minimal usage sketch follows the list):
-* Write to Azure Synapse Dedicated SQL Pool:
- * Ingest large volume data to Internal and External table types.
- * Supports following DataFrame save mode preferences:
+* Writes to Azure Synapse Dedicated SQL Pool:
+ * Ingests a large volume of data to internal and external table types.
+ * Supports the following DataFrame save mode preferences:
* `Append` * `ErrorIfExists` * `Ignore` * `Overwrite`
- * Write to External Table type supports Parquet and Delimited Text file format (example - CSV).
- * To write data to internal tables, the connector now uses [COPY statement](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md) instead of CETAS/CTAS approach.
- * Enhancements to optimize end-to-end write throughput performance.
- * Introduces an optional call-back handle (a Scala function argument) that clients can use to receive post-write metrics.
- * Few examples include - number of records, duration to complete certain action, and failure reason.
-* Read from Azure Synapse Dedicated SQL Pool:
- * Read large data sets from Synapse Dedicated SQL Pool Tables (Internal and External) and Views.
- * Comprehensive predicate push down support, where filters on DataFrame get mapped to corresponding SQL predicate push down.
+ * Supports writing to external table types in the Parquet and delimited text file formats, for example, CSV.
+ * To write data to internal tables, the connector now uses a [COPY statement](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md) instead of the CETAS/CTAS approach.
+ * Enhancements optimize end-to-end write throughput performance.
+ * Introduces an optional call-back handle (a Scala function argument) that clients can use to receive post-write metrics:
+ * A few examples include the number of records, the duration to complete a certain action, and the failure reason.
+* Reads from Azure Synapse Dedicated SQL Pool:
+ * Reads large datasets from Azure Synapse Dedicated SQL Pool tables (internal and external) and views.
+ * Comprehensive predicate push-down support, where filters on the DataFrame are mapped to the corresponding SQL push-down predicates.
* Support for column pruning.
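To make these capabilities concrete before the detailed code templates later in this article, the following is a minimal, illustrative sketch only; `someDataFrame`, the server name, and the storage paths are placeholders (not from the article), and an Azure AD-authenticated Spark 3.1.2 session is assumed.

```Scala
//Minimal sketch with assumed placeholder names; see the full code templates later in this article.
import org.apache.spark.sql.DataFrame
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._

//Write an existing DataFrame to an internal table (the default save mode is ErrorIfExists).
someDataFrame.write.
    option(Constants.SERVER, "<dedicated-pool-sql-server-name>.sql.azuresynapse.net").
    //Staging folder for internal-table writes; alternatively, the runtime staging directory configuration is used.
    option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<temp_folder>").
    synapsesql("<database_name>.<schema_name>.<table_name>")

//Read the table back into a DataFrame by using its three-part name.
val dfFromPool: DataFrame = spark.read.
    option(Constants.SERVER, "<dedicated-pool-sql-server-name>.sql.azuresynapse.net").
    synapsesql("<database_name>.<schema_name>.<table_name>")
```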
-## Orchestration Approach
+## Orchestration approach
+
+The following two diagrams illustrate write and read orchestrations.
### Write
-![Write-Orchestration](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
+![Diagram that shows write orchestration.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-write-orchestration.png)
### Read
-![Read-Orchestration](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-read-orchestration.png)
+![Diagram that shows read orchestration.](./media/synapse-spark-sql-pool-import-export/synapse-dedicated-sql-pool-spark-connector-read-orchestration.png)
-## Pre-requisites
+## Prerequisites
-This section details necessary pre-requisite steps include Azure Resource setup and Configurations including authentication and authorization requirements for using the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
+This section discusses the prerequisite steps for Azure resource setup and configuration. It includes authentication and authorization requirements for using the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
-### Azure Resources
+### Azure resources
-Review and setup following dependent Azure Resources:
+Review and set up the following dependent Azure resources:
-* [Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md) - used as the primary storage account for the Azure Synapse Workspace.
-* [Azure Synapse Workspace](../../synapse-analytics/get-started-create-workspace.md) - create notebooks, build and deploy DataFrame based ingress-egress workflows.
-* [Dedicated SQL Pool (formerly SQL DW)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) - provides enterprise Data Warehousing features.
-* [Azure Synapse Serverless Spark Pool](../../synapse-analytics/get-started-analyze-spark.md) - Spark runtime where the jobs are executed as Spark Applications.
+* [Azure Data Lake Storage](../../storage/blobs/data-lake-storage-introduction.md): Used as the primary storage account for the Azure Synapse workspace.
+* [Azure Synapse workspace](../../synapse-analytics/get-started-create-workspace.md): Used to create notebooks and build and deploy DataFrame-based ingress-egress workflows.
+* [Dedicated SQL pool (formerly Azure SQL Data Warehouse)](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md): Provides enterprise data warehousing features.
+* [Azure Synapse serverless Spark pool](../../synapse-analytics/get-started-analyze-spark.md): Provides the Spark runtime where the jobs are executed as Spark applications.
-#### Prepare the Database
+#### Prepare the database
-Connect to the Synapse Dedicated SQL Pool database and run following setup statements:
+Connect to the Azure Synapse Dedicated SQL Pool database and run the following setup statements:
-* Create a database user that is mapped to the Azure Active Directory User Identity used to sign in to the Azure Synapse Workspace.
+* Create a database user that's mapped to the Azure Active Directory (Azure AD) user identity that's used to sign in to the Azure Synapse workspace:
```sql CREATE USER [username@domain.com] FROM EXTERNAL PROVIDER; ```
-* Create schema in which tables will be defined, such that the Connector can successfully write-to and read-from respective tables.
+* Create a schema in which tables are defined so that the connector can successfully write to and read from respective tables:
```sql CREATE SCHEMA [<schema_name>];
Connect to the Synapse Dedicated SQL Pool database and run following setup state
### Authentication
-#### Azure Active Directory based Authentication
+This section discusses two approaches for authentication.
-Azure Active Directory based authentication is an integrated authentication approach. The user is required to successfully log in to the Azure Synapse Analytics Workspace.
+#### Azure AD-based authentication
+
+Azure AD-based authentication is an integrated authentication approach. The user is required to successfully sign in to the Azure Synapse workspace. When the user interacts with respective resources, such as storage and Azure Synapse Dedicated SQL Pool, the runtime uses the user's tokens.
+
#### Basic Authentication
-A basic authentication approach requires user to configure `username` and `password` options. Refer to the section - [Configuration Options](#configuration-options) to learn about relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
+The Basic Authentication approach requires the user to configure `username` and `password` options. See the section [Configuration options](#configuration-options) to learn about relevant configuration parameters for reading from and writing to tables in Azure Synapse Dedicated SQL Pool.
### Authorization
+This section discusses authorization.
+ #### [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)
-There are two ways to grant access permissions to Azure Data Lake Storage Gen2 - Storage Account:
+There are two ways to grant access permissions to an Azure Data Lake Storage Gen2 storage account:
-* Role based Access Control role - [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
- * Assigning the `Storage Blob Data Contributor Role` grants the User permissions to read, write and delete from the Azure Storage Blob Containers.
+* Role-based access control (RBAC) role: [Storage Blob Data Contributor role](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
+ * Assigning the `Storage Blob Data Contributor Role` grants the user permission to read, write, and delete from the Azure Storage Blob containers.
* RBAC offers a coarse control approach at the container level.
-* [Access Control Lists (ACL)](../../storage/blobs/data-lake-storage-access-control.md)
- * ACL approach allows for fine-grained controls over specific paths and/or files under a given folder.
- * ACL checks aren't enforced if the User is already granted permissions using RBAC approach.
+* [Access control lists (ACLs)](../../storage/blobs/data-lake-storage-access-control.md)
+ * The ACL approach allows for fine-grained controls over specific paths or files under a given folder.
+ * ACL checks aren't enforced if the user is already granted permission by using an RBAC approach.
* There are two broad types of ACL permissions:
- * Access Permissions (applied at a specific level or object).
- * Default Permissions (automatically applied for all child objects at the time of their creation).
- * Type of permissions include:
- * `Execute` enables ability to traverse or navigate the folder hierarchies.
- * `Read` enables ability to read.
- * `Write` enables ability to write.
- * It's important to configure ACLs such that the Connector can successfully write and read from the storage locations.
+ * Access permissions are applied at a specific level or object.
+ * Default permissions are automatically applied for all child objects at the time of their creation.
+ * Types of permission include:
+ * `Execute` enables the ability to traverse or navigate the folder hierarchies.
+ * `Read` enables the ability to read.
+ * `Write` enables the ability to write.
+ * Configure ACLs so that the connector can successfully write and read from the storage locations.
#### [Azure Synapse Dedicated SQL Pool](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md)
-To enable successful interaction with Azure Synapse Dedicated SQL Pool, following authorization is necessary unless you're a user also configured as an `Active Directory Admin` on the Dedicated SQL End Point:
+To enable successful interaction with Azure Synapse Dedicated SQL Pool, the following authorization is necessary unless you're a user also configured as an `Active Directory Admin` on the dedicated SQL endpoint:
-* Write Scenario
+* Write scenario
* Connector uses the COPY command to write data from staging to the internal table's managed location.
- * Configure required permissions described [here](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md#set-up-the-required-permissions).
+ * Configure the required permissions described in [this quickstart](../../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md#set-up-the-required-permissions).
* The following snippet is a quick reference for those permissions: ```sql
To enable successful interaction with Azure Synapse Dedicated SQL Pool, followin
GRANT INSERT ON <your_table> TO [<your_domain_user>@<your_domain_name>.com] ```
-* Read Scenario
- * Grant the user `db_exporter` using the system stored procedure `sp_addrolemember`.
+* Read scenario
+ * Grant the user the `db_exporter` role by using the system stored procedure `sp_addrolemember`.
```sql EXEC sp_addrolemember 'db_exporter', [<your_domain_user>@<your_domain_name>.com]; ```
-## Connector API Documentation
+## Connector API documentation
-Azure Synapse Dedicated SQL Pool Connector for Apache Spark - [API Documentation](https://synapsesql.blob.core.windows.net/docs/latest/scala/https://docsupdatetracker.net/index.html).
+Azure Synapse Dedicated SQL Pool Connector for Apache Spark: [API documentation](https://synapsesql.blob.core.windows.net/docs/latest/scala/https://docsupdatetracker.net/index.html)
-### Configuration Options
+### Configuration options
-To successfully bootstrap and orchestrate the read or write operation, the Connector expects certain configuration parameters. The object definition - `com.microsoft.spark.sqlanalytics.utils.Constants` provides a list of standardized constants for each parameter key.
+To successfully bootstrap and orchestrate the read or write operation, the connector expects certain configuration parameters. The object definition `com.microsoft.spark.sqlanalytics.utils.Constants` provides a list of standardized constants for each parameter key.
-Following table describes the essential configuration options that must be set for each usage scenario:
+The following table describes the essential configuration options that must be set for each usage scenario:
-|Usage Scenario| Options to configure |
+|Usage scenario| Options to configure |
|--|-|
-| Write using Azure AD based authentication | <ul><li>Azure Synapse Dedicated SQL End Point<ul><li>`Constants.SERVER`<ul><li>By default, the Connector will infer the Synapse Dedicated SQL End Point associated with the database name (from the three part table name argument to `synapsesql` method).</li><li>Alternatively, users can provide the `Constants.SERVER` option.</li></ul></ul></li><li>Azure Data Lake Storage (Gen 2) End Point - Staging Folders<ul><li>For Internal Table Type:<ul><li>Configure either `Constants.TEMP_FOLDER` or `Constants.DATASOURCE` option.</li><li>If user chose to provide `Constants.DATASOURCE` option, staging folder will be derived by using the `location` value on the DataSource.</li><li>If both are provided, then the `Constants.TEMP_FOLDER` option value will be used.</li><li>In the absence of a staging folder option, the Connector will derive one based on the runtime configuration - `spark.sqlanalyticsconnector.stagingdir.prefix`.</li></ul></li><li>For External Table Type:<ul><li>`Constants.DATASOURCE` is a required configuration option.</li><li>The storage path defined on the Data Source's `location` parameter will be used as the base path to establish final absolute path.</li><li>The base path is then appended with the value set on the `synapsesql` method's `location` argument, example `/<external_table_name>`.</li><li>If the `location` argument to `synapsesql` method isn't provided, then the connector will derive the location value as `<base_path>/dbName/schemaName/tableName`.</li></ul></li></ul></li></ul>|
-| Write using Basic Authentication | <ul><li>Azure Synapse Dedicated SQL End Point<ul><li>`Constants.SERVER` - Synapse Dedicated SQL Pool End Point (Server FQDN)</li><li>`Constants.USER` - SQL User Name.</li><li>`Constants.PASSWORD` - SQL User Password.</li><li>`Constants.STAGING_STORAGE_ACCOUNT_KEY` associated with Storage Account that hosts `Constants.TEMP_FOLDERS` (internal table types only) or `Constants.DATASOURCE`.</li></ul></li><li>Azure Data Lake Storage (Gen 2) End Point - Staging Folders<ul><li>SQL basic authentication credentials don't apply to access storage end points. Hence it's required that the workspace user identity is given relevant access permissions (reference the section - [Azure Data Lake Storage Gen2](#azure-data-lake-storage-gen2).</li></ul></li></ul>|
-|Read using Azure AD based authentication|<ul><li>Credentials are auto-mapped, and user isn't required to provide specific configuration options.</li><li>Three-part table name argument on `synapsesql` method is required to read from respective table in Azure Synapse Dedicated SQL Pool.</li></ul>|
-|Read using basic authentication|<ul><li>Azure Synapse Dedicated SQL End Point<ul><li>`Constants.SERVER` - Synapse Dedicated SQL Pool End Point (Server FQDN)</li><li>`Constants.USER` - SQL User Name.</li><li>`Constants.PASSWORD` - SQL User Password.</li></ul></li><li>Azure Data Lake Storage (Gen 2) End Point - Staging Folders<ul><li>`Constants.DATA_SOURCE` - Location setting from data source is used to stage extracted data from Azure Synapse Dedicated SQL End Point.</li></ul></li></ul>|
+| Write using Azure AD-based authentication | <ul><li>Azure Synapse Dedicated SQL endpoint<ul><li>`Constants.SERVER`<ul><li>By default, the connector infers the Azure Synapse Dedicated SQL endpoint associated with the database name (from the three-part table name argument to `synapsesql` method).</li><li>Alternatively, users can provide the `Constants.SERVER` option.</li></ul></ul></li><li>Azure Data Lake Storage Gen2 endpoint: Staging folders<ul><li>For internal table type:<ul><li>Configure either `Constants.TEMP_FOLDER` or the `Constants.DATASOURCE` option.</li><li>If the user chose to provide the `Constants.DATASOURCE` option, the staging folder is derived by using the `location` value on the data source.</li><li>If both are provided, the `Constants.TEMP_FOLDER` option value is used.</li><li>In the absence of a staging folder option, the connector derives one based on the runtime configuration `spark.sqlanalyticsconnector.stagingdir.prefix`.</li></ul></li><li>For external table type:<ul><li>`Constants.DATASOURCE` is a required configuration option.</li><li>The storage path defined on the data source's `location` parameter is used as the base path to establish the final absolute path.</li><li>The base path is then appended with the value set on the `synapsesql` method's `location` argument, for example, `/<external_table_name>`.</li><li>If the `location` argument to `synapsesql` method isn't provided, the connector derives the location value as `<base_path>/dbName/schemaName/tableName`.</li></ul></li></ul></li></ul>|
+| Write using Basic Authentication | <ul><li>Azure Synapse Dedicated SQL endpoint<ul><li>`Constants.SERVER`: Azure Synapse Dedicated SQL Pool endpoint (server FQDN)</li><li>`Constants.USER`: SQL user name</li><li>`Constants.PASSWORD`: SQL user password</li><li>`Constants.STAGING_STORAGE_ACCOUNT_KEY` associated with the storage account that hosts `Constants.TEMP_FOLDERS` (internal table types only) or `Constants.DATASOURCE`</li></ul></li><li>Azure Data Lake Storage Gen2 endpoint: Staging folders<ul><li>SQL Basic Authentication credentials don't apply to access storage endpoints. It's required that the workspace user identity is given relevant access permissions. (See the section [Azure Data Lake Storage Gen2](#azure-data-lake-storage-gen2).)</li></ul></li></ul>|
+|Read using Azure AD-based authentication|<ul><li>Credentials are automapped and the user isn't required to provide specific configuration options.</li><li>Three-part table name argument on `synapsesql` method is required to read from the respective table in Azure Synapse Dedicated SQL Pool.</li></ul>|
+|Read using Basic Authentication|<ul><li>Azure Synapse Dedicated SQL endpoint<ul><li>`Constants.SERVER`: Azure Synapse Dedicated SQL Pool endpoint (server FQDN)</li><li>`Constants.USER`: SQL user name</li><li>`Constants.PASSWORD`: SQL user password</li></ul></li><li>Azure Data Lake Storage Gen2 endpoint: Staging folders<ul><li>`Constants.DATA_SOURCE`: Location setting from data source is used to stage extracted data from the Azure Synapse Dedicated SQL endpoint.</li></ul></li></ul>|
-## Code Templates
+## Code templates
This section presents reference code templates to describe how to use and invoke the Azure Synapse Dedicated SQL Pool Connector for Apache Spark. ### Write to Azure Synapse Dedicated SQL Pool
-#### Write Request - `synapsesql` Method Signature
+The following sections relate to a write scenario.
-The method signature for the Connector version built for Spark 2.4.8 has one less argument, than that applied to the Spark 3.1.2 version. Following are the two method signatures:
+#### Write request: synapsesql method signature
-* Spark Pool Version 2.4.8
+The method signature for the connector version built for Spark 2.4.8 has one less argument than that applied to the Spark 3.1.2 version. The following snippets are the two method signatures:
-```Scala
-synapsesql(tableName:String,
- tableType:String = Constants.INTERNAL,
- location:Option[String] = None):Unit
-```
+* Spark pool version 2.4.8
+
+ ```Scala
+ synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None):Unit
+ ```
-* Spark Pool Version 3.1.2
+* Spark pool version 3.1.2
-```Scala
-synapsesql(tableName:String,
- tableType:String = Constants.INTERNAL,
- location:Option[String] = None,
- callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit
-```
+ ```Scala
+ synapsesql(tableName:String,
+ tableType:String = Constants.INTERNAL,
+ location:Option[String] = None,
+ callBackHandle=Option[(Map[String, Any], Option[Throwable])=>Unit]):Unit
+ ```
-### Write using Azure AD based Authentication
+### Write using Azure AD-based authentication
-Following is a comprehensive code template that describes how to use the Connector for write scenarios:
+The following comprehensive code template describes how to use the connector for write scenarios:
```Scala //Add required imports
import org.apache.spark.sql.SaveMode
import com.microsoft.spark.sqlanalytics.utils.Constants import org.apache.spark.sql.SqlAnalyticsConnector._
-//Define read options for example, if reading from CSV source, configure header and delimiter options.
+//Define read options, for example, if reading from a CSV source, configure header and delimiter options.
val pathToInputSource="abfss://<storage_container_name>@<storage_account_name>.dfs.core.windows.net/<some_folder>/<some_dataset>.csv"
-//Define read configuration for the input CSV
+//Define read configuration for the input CSV.
val dfReadOptions:Map[String, String] = Map("header" -> "true", "delimiter" -> ",")
-//Initialize DataFrame that reads CSV data from a given source
+//Initialize the DataFrame that reads CSV data from a given source.
val readDF:DataFrame=spark. read. options(dfReadOptions). csv(pathToInputSource). limit(1000) //Reads first 1000 rows from the source CSV input.
-//Setup and trigger the read DataFrame for write to Synapse Dedicated SQL Pool.
-//Fully qualified SQL Server DNS name can be obtained using one of the following methods:
+//Set up and trigger the read DataFrame for write to Azure Synapse Dedicated SQL Pool.
+//Fully qualified SQL Server DNS name can be obtained by using one of the following methods:
// 1. Synapse Workspace - Manage Pane - SQL Pools - <Properties view of the corresponding Dedicated SQL Pool>
-// 2. From Azure Portal, follow the bread-crumbs for <Portal_Home> -> <Resource_Group> -> <Dedicated SQL Pool> and then go to Connection Strings/JDBC tab.
+// 2. From the Azure portal, follow the breadcrumbs for <Portal_Home> -> <Resource_Group> -> <Dedicated SQL Pool> and then go to the Connection Strings/JDBC tab.
//If `Constants.SERVER` is not provided, the value will be inferred by using the `database_name` in the three-part table name argument to the `synapsesql` method.
-//Like-wise, if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see section on Configuration Options for details).
+//Likewise, if `Constants.TEMP_FOLDER` is not provided, the connector will use the runtime staging directory config (see the section on Configuration options for details).
val writeOptionsWithAADAuth:Map[String, String] = Map(Constants.SERVER -> "<dedicated-pool-sql-server-name>.sql.azuresynapse.net", Constants.TEMP_FOLDER -> "abfss://<storage_container_name>@<storage_account_name>.dfs.core.windows.net/<some_temp_folder>")
-//Setup optional callback/feedback function that can receive post write metrics of the job performed.
+//Set up an optional callback/feedback function that can receive post-write metrics of the job performed.
var errorDuringWrite:Option[Throwable] = None val callBackFunctionToReceivePostWriteMetrics: (Map[String, Any], Option[Throwable]) => Unit = (feedback: Map[String, Any], errorState: Option[Throwable]) => {
val callBackFunctionToReceivePostWriteMetrics: (Map[String, Any], Option[Throwab
errorDuringWrite = errorState }
-//Configure and submit the request to write to Synapse Dedicated SQL Pool (note - default SaveMode is set to ErrorIfExists)
-//Sample below is using AAD-based authentication approach; See further examples to leverage SQL Basic auth.
+//Configure and submit the request to write to Azure Synapse Dedicated SQL Pool. (Note the default SaveMode is set to ErrorIfExists.)
+//The following sample uses the Azure AD-based authentication approach. See further examples to use SQL Basic Authentication.
readDF. write. //Configure required configurations.
readDF.
//Optional parameter to receive a callback. callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics))
-//If write request has failed, raise an error and fail the Cell's execution.
+//If the write request has failed, raise an error and fail the cell's execution.
if(errorDuringWrite.isDefined) throw errorDuringWrite.get ``` #### Write using Basic Authentication
-Following code snippet replaces the write definition described in the [Write using Azure AD based authentication](#write-using-azure-ad-based-authentication) section, to submit write request using SQL basic authentication approach:
+The following code snippet replaces the write definition described in the [Write using Azure AD-based authentication](#write-using-azure-ad-based-authentication) section and submits the write request by using the SQL Basic Authentication approach:
```Scala
-//Define write options to use SQL basic authentication
+//Define write options to use SQL Basic Authentication
val writeOptionsWithBasicAuth:Map[String, String] = Map(Constants.SERVER -> "<dedicated-pool-sql-server-name>.sql.azuresynapse.net", //Set database user name Constants.USER -> "<user_name>",
val writeOptionsWithBasicAuth:Map[String, String] = Map(Constants.SERVER -> "<de
//To be used only when writing to internal tables. Storage path will be used for data staging. Constants.TEMP_FOLDER -> "abfss://<storage_container_name>@<storage_account_name>.dfs.core.windows.net/<some_temp_folder>")
-//Configure and submit the request to write to Synapse Dedicated SQL Pool.
+//Configure and submit the request to write to Azure Synapse Dedicated SQL Pool.
readDF. write. options(writeOptions).
readDF.
callBackHandle = Some(callBackFunctionToReceivePostWriteMetrics)) ```
-In a basic authentication approach, in order to read data from a source storage path other configuration options are required. Following code snippet provides an example to read from a Azure Data Lake Storage Gen2 data source using Service Principal credentials:
+In a Basic Authentication approach, in order to read data from a source storage path, other configuration options are required. The following code snippet provides an example to read from an Azure Data Lake Storage Gen2 data source by using Service Principal credentials:
```Scala //Specify options that Spark runtime must support when interfacing and consuming source data
val dfReadOptions:Map[String, String]=Map("header"->"true",
"fs.abfss.impl" -> "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem") //Initialize the Storage Path string, where source data is maintained/kept. val pathToInputSource=s"abfss://$storageContainerName@$storageAccountName.dfs.core.windows.net/<base_path_for_source_data>/<specific_file (or) collection_of_files>"
-//Define data frame to interface with the data source
+//Define the data frame to interface with the data source.
val df:DataFrame = spark. read. options(dfReadOptions).
val df:DataFrame = spark.
limit(100) ```
-#### DataFrame Write SaveMode Support
+#### DataFrame write SaveMode support
-Following SaveModes are supported when writing source data to a destination table in Azure Synapse Dedicated SQL Pool:
+The following SaveModes are supported when writing source data to a destination table in Azure Synapse Dedicated SQL Pool (a short usage sketch follows the list):
* ErrorIfExists (default save mode)
- * If destination table exists, then the write is aborted with an exception returned to the callee. Else, a new table is created with data from the staging folders.
+ * If a destination table exists, the write is aborted with an exception returned to the callee. Else, a new table is created with data from the staging folders.
* Ignore
- * If the destination table exists, then the write will ignore the write request without returning an error. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the write ignores the write request without returning an error. Else, a new table is created with data from the staging folders.
* Overwrite
- * If the destination table exists, then existing data in the destination is replaced with data from the staging folders. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the existing data in the destination is replaced with data from the staging folders. Else, a new table is created with data from the staging folders.
* Append
- * If the destination table exists, then the new data is appended to it. Else, a new table is created with data from the staging folders.
+ * If the destination table exists, the new data is appended to it. Else, a new table is created with data from the staging folders.
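As an illustration only, a save mode is selected on the DataFrame writer before `synapsesql` is invoked, which is consistent with `ErrorIfExists` being the default described above. The names `readDF` and `writeOptionsWithAADAuth` are reused from the earlier write template, and the session is assumed to already have the connector imports in scope.

```Scala
//Sketch (assumed names from the earlier template): append to the destination table if it exists, create it otherwise.
import org.apache.spark.sql.SaveMode

readDF.
    write.
    //Select the save mode on the DataFrame writer; omit this call to keep the default ErrorIfExists.
    mode(SaveMode.Append).
    options(writeOptionsWithAADAuth).
    //Three-part name of the destination internal table.
    synapsesql("<database_name>.<schema_name>.<table_name>")
```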
-#### Write Request Callback Handle
+#### Write request callback handle
+
+The new write path API changes introduced an experimental feature to provide the client with a key-value map of post-write metrics. These metrics provide information such as the number of records staged and the number of records written to a SQL table. They can also include the time spent in staging and in executing the SQL statements that write data to Azure Synapse Dedicated SQL Pool.
-The new write path API changes introduced an experimental feature to provide the client with a key->value map of post-write metrics. Keys for the metrics are defined in the new Object definition - `Constants.FeedbackConstants`. Metrics can be retrieved as a JSON string by passing in the callback handle (a `Scala Function`). Following is the function signature:
+Keys for the metrics are defined in the new object definition `Constants.FeedbackConstants`. Metrics can be retrieved as a JSON string by passing in the callback handle (a Scala function). The following snippet is the function signature:
```Scala
-//Function signature is expected to have two arguments - a `scala.collection.immutable.Map[String, Any]` and an Option[Throwable]
+//Function signature is expected to have two arguments, a `scala.collection.immutable.Map[String, Any]` and an Option[Throwable].
//Post-write if there's a reference of this handle passed to the `synapsesql` signature, it will be invoked by the closing process.
-//These arguments will have valid objects in either Success or Failure case. In case of Failure the second argument will be a `Some(Throwable)`.
+//These arguments will have valid objects in either a Success or Failure case. In the case of Failure, the second argument will be a `Some(Throwable)`.
(Map[String, Any], Option[Throwable]) => Unit ```
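For illustration only, a handle matching this signature could log each metric and retain any failure so the notebook cell can fail after the write returns; the variable and function names below are invented, not part of the connector API.

```Scala
//Example callback (assumed names) that matches the documented signature.
var lastWriteError: Option[Throwable] = None

val logPostWriteMetrics: (Map[String, Any], Option[Throwable]) => Unit =
  (metrics: Map[String, Any], errorState: Option[Throwable]) => {
    //Print each post-write metric key and value.
    metrics.foreach { case (key, value) => println(s"$key -> $value") }
    //Keep the error (if any) so the calling cell can raise it after the write completes.
    lastWriteError = errorState
  }
```

A function like this would be passed as the `callBackHandle` argument of the Spark 3.1.2 `synapsesql` write signature shown earlier.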
-Following are some notable metrics (presented in camel case):
+The following notable metrics are presented with internal capitalization:
* `WriteFailureCause` * `DataStagingSparkJobDurationInMilliseconds`
Following are some notable metrics (presented in camel case):
* `SQLStatementExecutionDurationInMilliseconds` * `rows_processed`
-Following is a sample JSON string with post-write metrics:
+The following snippet is a sample JSON string with post-write metrics:
```doc {
Following is a sample JSON string with post-write metrics:
### Read from Azure Synapse Dedicated SQL Pool
-#### Read Request - `synapsesql` Method Signature
+The following sections relate to a read scenario.
+
+#### Read request: synapsesql method signature
```Scala synapsesql(tableName:String) => org.apache.spark.sql.DataFrame ```
-#### Read using Azure AD based authentication
+#### Read using Azure AD-based authentication
```Scala
-//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
-//Azure Active Directory based authentication approach is preferred here.
+//Use case is to read data from an internal table in an Azure Synapse Dedicated SQL Pool database.
+//Azure Active Directory-based authentication approach is preferred here.
import org.apache.spark.sql.DataFrame import com.microsoft.spark.sqlanalytics.utils.Constants import org.apache.spark.sql.SqlAnalyticsConnector._
-//Read from existing internal table
+//Read from the existing internal table.
val dfToReadFromTable:DataFrame = spark.read. //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
- //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL endpoint.
option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net"). //Defaults to storage path defined in the runtime configurations (See section on Configuration Options above). option(Constants.TEMP_FOLDER, "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<some_base_path_for_temporary_staging_folders>").
val dfToReadFromTable:DataFrame = spark.read.
//Fetch a sample of 10 records limit(10)
-//Show contents of the dataframe
+//Show contents of the DataFrame.
dfToReadFromTable.show() ```
-#### Read using basic authentication
+#### Read using Basic Authentication
```Scala
-//Use case is to read data from an internal table in Synapse Dedicated SQL Pool DB
-//Azure Active Directory based authentication approach is preferred here.
+//Use case is to read data from an internal table in an Azure Synapse Dedicated SQL Pool database.
+//Basic authentication with a database user name and password is used in this example.
import org.apache.spark.sql.DataFrame
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._
-//Read from existing internal table
+//Read from an existing internal table.
val dfToReadFromTable:DataFrame = spark.read.
    //If `Constants.SERVER` is not provided, the `<database_name>` from the three-part table name argument
- //to `synapsesql` method is used to infer the Synapse Dedicated SQL End Point.
+ //to `synapsesql` method is used to infer the Synapse Dedicated SQL endpoint.
option(Constants.SERVER, "<sql-server-name>.sql.azuresynapse.net").
- //Set database user name
+ //Set database user name.
option(Constants.USER, "<user_name>").
- //Set user's password to the database
+ //Set user's password to the database.
option(Constants.PASSWORD, "<user_password>").
- //Set name of the data source definition that is defined with database scoped credentials.
+ //Set name of the data source definition that is defined with database-scoped credentials.
    //Data extracted from the SQL query will be staged to the storage path defined on the data source's location setting.
    option(Constants.DATA_SOURCE, "<data_source_name>").
    //Three-part table name from where data will be read.
    synapsesql("<database_name>.<schema_name>.<table_name>").
- //Column-pruning i.e., query select column values.
+ //Column pruning: select only the required column values.
select("<some_column_1>", "<some_column_5>", "<some_column_n>").
- //Push-down filter criteria that gets translated to SQL Push-down Predicates.
+ //Push-down filter criteria that gets translated to SQL push-down predicates.
filter(col("Title").startsWith("E")).
- //Fetch a sample of 10 records
+ //Fetch a sample of 10 records.
limit(10)
-//Show contents of the dataframe
+//Show contents of the DataFrame.
dfToReadFromTable.show()
```
-### More Code Samples
+### More code samples
-#### Using the Connector with Other language preferences
+This section includes some other code samples.
-Example that demonstrates how to use the Connector with `PySpark (Python)` language preference:
+#### Use the connector with other language preferences
+
+This example demonstrates how to use the connector with the `PySpark (Python)` language preference:
```Python
%%spark
import org.apache.spark.sql.DataFrame
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._
-//Code to write or read goes here (refer to the aforementioned code templates)
+//Code to write or read goes here (refer to the aforementioned code templates).
```
-#### Using materialized data across cells
-
-Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched in another cell, by registering a temporary view.
+#### Use materialized data across cells
-* Cell where data is fetched (say with Notebook language preference as `Scala`)
+Spark DataFrame's `createOrReplaceTempView` can be used to access data fetched in another cell by registering a temporary view.
-```Scala
- //Necessary imports
- import org.apache.spark.sql.DataFrame
- import org.apache.spark.sql.SaveMode
- import com.microsoft.spark.sqlanalytics.utils.Constants
- import org.apache.spark.sql.SqlAnalyticsConnector._
+* Cell where data is fetched (say with notebook language preference as `Scala`):
- //Configure options and read from Synapse Dedicated SQL Pool.
- val readDF = spark.read.
- //Set Synapse Dedicated SQL End Point name.
- option(Constants.SERVER, "<synapse-dedicated-sql-end-point>.sql.azuresynapse.net").
- //Set database user name.
- option(Constants.USER, "<user_name>").
- //Set database user's password.
- option(Constants.PASSWORD, "<user_password>").
- //Set name of the data source definition that is defined with database scoped credentials.
- option(Constants.DATA_SOURCE,"<data_source_name>").
- //Set the three-part table name from which the read must be performed.
- synapsesql("<database_name>.<schema_name>.<table_name>").
- //Optional - specify number of records the DataFrame would read.
- limit(10)
- //Register the temporary view (scope - current active Spark Session)
- readDF.createOrReplaceTempView("<temporary_view_name>")
-```
+ ```Scala
+ //Necessary imports
+ import org.apache.spark.sql.DataFrame
+ import org.apache.spark.sql.SaveMode
+ import com.microsoft.spark.sqlanalytics.utils.Constants
+ import org.apache.spark.sql.SqlAnalyticsConnector._
+
+ //Configure options and read from Azure Synapse Dedicated SQL Pool.
+ val readDF = spark.read.
+ //Set Synapse Dedicated SQL endpoint name.
+ option(Constants.SERVER, "<synapse-dedicated-sql-end-point>.sql.azuresynapse.net").
+ //Set database user name.
+ option(Constants.USER, "<user_name>").
+ //Set database user's password.
+ option(Constants.PASSWORD, "<user_password>").
+ //Set name of the data source definition that is defined with database scoped credentials.
+ option(Constants.DATA_SOURCE,"<data_source_name>").
+ //Set the three-part table name from which the read must be performed.
+ synapsesql("<database_name>.<schema_name>.<table_name>").
+ //Optional - specify number of records the DataFrame would read.
+ limit(10)
+ //Register the temporary view (scope - current active Spark Session)
+ readDF.createOrReplaceTempView("<temporary_view_name>")
+ ```
-* Now, change the language preference on the Notebook to `PySpark (Python)` and fetch data from the registered view `<temporary_view_name>`
+* Now, change the language preference on the notebook to `PySpark (Python)` and fetch data from the registered view `<temporary_view_name>`:
```Python
spark.sql("select * from <temporary_view_name>").show()
```
-## Response Handling
+## Response handling
+
+Invoking `synapsesql` results in one of two end states: success or failure. This section describes how to handle the request response for each scenario.
-Invoking `synapsesql` has two possible end states - Success or a Failed State. This section describes how to handle the request response for each scenario.
+### Read request response
-### Read Request Response
+Upon completion, the read response snippet is displayed in the cell's output. Failure in the current cell also cancels subsequent cell executions. Detailed error information is available in the Spark application logs.
-Upon completion, the read response snippet is displayed in the cell's output. Failure in the current cell will also cancel subsequent cell executions. Detailed error information is available in the Spark Application Logs.
+### Write request response
-### Write Request Response
+By default, a write response is printed to the cell output. On failure, the current cell is marked as failed, and subsequent cell executions are aborted. The other approach is to pass the [callback handle](#write-request-callback-handle) option to the `synapsesql` method. The callback handle provides programmatic access to the write response.
-By default, a write response is printed to the cell output. On failure, the current cell is marked as failed, and subsequent cell executions will be aborted. The other approach is to pass the [callback handle](#write-request-callback-handle) option to the `synapsesql` method. The callback handle will provide programmatic access to the write response.
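As a hedged illustration (not the connector's documented sample), a callback of the shape described in the earlier callback-handle section could branch on the optional `Throwable` to separate the two end states; how the handle is passed to `synapsesql` is defined in that section, not here.

```Scala
//Illustrative sketch only: distinguish the two end states by inspecting the Option[Throwable].
val handleWriteResponse: (Map[String, Any], Option[Throwable]) => Unit =
  (metrics, error) =>
    error match {
      case None    => println(s"Write succeeded. Post-write metrics: $metrics")
      case Some(t) => println(s"Write failed: ${t.getMessage}. Metrics collected so far: $metrics")
    }
```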
+## Things to note
-## Things to Note
+Consider the following points on read and write performance:
-* When writing to the Azure Synapse Dedicated SQL Pool tables:
+* When writing to Azure Synapse Dedicated SQL Pool tables:
  * For internal table types:
    * Tables are created with ROUND_ROBIN data distribution.
- * Column types are inferred from the DataFrame that would read data from source. String columns are mapped to `NVARCHAR(4000)`.
+ * Column types are inferred from the DataFrame that would read data from the source. String columns are mapped to `NVARCHAR(4000)`.
* For external table types:
- * DataFrame's initial parallelism drives the data organization for the external table.
- * Column types are inferred from the DataFrame that would read data from source.
+ * The DataFrame's initial parallelism drives the data organization for the external table.
+ * Column types are inferred from the DataFrame that would read data from the source.
  * Better data distribution across executors can be achieved by tuning the `spark.sql.files.maxPartitionBytes` and the DataFrame's `repartition` parameter, as sketched after this list.
- * When writing large data sets, it's important to factor in the impact of [DWU Performance Level](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) setting that limits [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size).
-* When reading from the Azure Synapse Dedicated SQL Pool tables:
- * Consider applying necessary filters on the DataFrame to take advantage of the Connector's column-pruning feature.
- * Read scenario doesn't support the `TOP(n-rows)` clause, when framing the `SELECT` query statements. The choice to limit data is to use the DataFrame's limit(.) clause.
- * Refer the example - [Using materialized data across cells](#using-materialized-data-across-cells) section.
-* Monitor [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-best-practices.md) utilization trends to spot throttling behaviors that can [impact](../../storage/common/scalability-targets-standard-account.md) read and write performance.
+ * When writing large datasets, factor in the impact of the [DWU Performance Level](../../synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md) setting, which limits [transaction size](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-develop-transactions.md#transaction-size).
+* When reading from Azure Synapse Dedicated SQL Pool tables:
+ * Consider applying necessary filters on the DataFrame to take advantage of the connector's column-pruning feature.
+ * The read scenario doesn't support the `TOP(n-rows)` clause when framing the `SELECT` query statements. To limit the data returned, use the DataFrame's `limit(n)` clause instead.
+ * Refer to the [Use materialized data across cells](#use-materialized-data-across-cells) example.
+* Monitor [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-best-practices.md) utilization trends to spot throttling behaviors that can [affect](../../storage/common/scalability-targets-standard-account.md) read and write performance.
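The following is a hedged sketch of the partition tuning mentioned in the write guidance above; the configuration value, table name, and partition count are placeholders rather than recommendations from the connector documentation.

```Scala
//Illustrative sketch only; the configuration value, table name, and partition count are placeholders.
import org.apache.spark.sql.DataFrame

//Cap the bytes packed into a single partition when reading files (134217728 bytes = 128 MB).
spark.conf.set("spark.sql.files.maxPartitionBytes", "134217728")

//Repartition the source DataFrame so the write is spread more evenly across executors.
val dfToWrite: DataFrame = spark.table("<some_source_table>").repartition(32)
```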
## References
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
The syntax in **Python** would be the following:
# If you are using managed private endpoints for Azure Cosmos DB analytical store and using batch writes/reads and/or streaming writes/reads to transactional store you should set connectionMode to Gateway.
+def writeBatchToCosmos(batchDF, batchId):
+ batchDF.persist()
+ print("--> BatchId: {}, Document count: {} : {}".format(batchId, batchDF.count(), datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S.%f")))
+ batchDF.write.format("cosmos.oltp")\
+ .option("spark.synapse.linkedService", "<enter linked service name>")\
+ .option("spark.cosmos.container", "<enter container name>")\
+ .option("spark.cosmos.write.upsertEnabled", "true")\
+ .mode('append')\
+ .save()
+ print("<-- BatchId: {}, Document count: {} : {}".format(batchId, batchDF.count(), datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S.%f")))
+ batchDF.unpersist()
+streamQuery = dfStream\
+    .writeStream\
- .format("cosmos.oltp")\
- .outputMode("append")\
+ .foreachBatch(writeBatchToCosmos) \
.option("checkpointLocation", "/localWriteCheckpointFolder")\
- .option("spark.synapse.linkedService", "<enter linked service name>")\
- .option("spark.cosmos.container", "<enter container name>")\
- .option("spark.cosmos.connection.mode", "Gateway")\
    .start()

streamQuery.awaitTermination()
The equivalent syntax in **Scala** would be the following:
val query = dfStream.
    writeStream.
- format("cosmos.oltp").
- outputMode("append").
+ foreachBatch { (batchDF: DataFrame, batchId: Long) =>
+ batchDF.persist()
+ batchDF.write.format("cosmos.oltp").
+ option("spark.synapse.linkedService", "<enter linked service name>").
+ option("spark.cosmos.container", "<enter container name>").
+ option("spark.cosmos.write.upsertEnabled", "true").
+ mode(SaveMode.Overwrite).
+ save()
+ println(s"BatchId: $batchId, Document count: ${batchDF.count()}")
+ batchDF.unpersist()
+ ()
+ }.
option("checkpointLocation", "/localWriteCheckpointFolder").
- option("spark.synapse.linkedService", "<enter linked service name>").
- option("spark.cosmos.container", "<enter container name>").
- option("spark.cosmos.connection.mode", "Gateway").
    start()

query.awaitTermination()
virtual-machines Disks Incremental Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md
You can also use Azure Resource Manager templates to create an incremental snapshot.
## Cross-region snapshot copy
-You can copy incremental snapshots to any region of your choice. Azure manages the copy process removing the maintenance overhead of managing the copy process by staging a storage account in the target region. Moreover, Azure ensures that only changes since the last snapshot in the target region are copied to the target region to reduce the data footprint, reducing the recovery point objective. You can check the progress of the copy so you can know when a target snapshot is ready to restore disks in the target region. Customers are charged only for the bandwidth cost of the data transfer across the region.
+You can copy incremental snapshots to any region of your choice. Azure manages the copy process, including staging a storage account in the target region, which removes the maintenance overhead of managing the copy yourself. Moreover, Azure ensures that only changes since the last snapshot in the target region are copied, which reduces the data footprint and the recovery point objective. You can check the progress of the copy so you know when a target snapshot is ready to restore disks in the target region. Customers are charged only for the bandwidth cost of the data transfer across regions and the read transactions on the source snapshots.
:::image type="content" source="media/disks-incremental-snapshots/cross-region-snapshot.png" alt-text="Diagram of Azure orchestrated cross-region copy of incremental snapshots via the clone option." lightbox="media/disks-incremental-snapshots/cross-region-snapshot.png":::
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
+
+ Title: Ebdsv5 and Ebsv5 series
+description: Specifications for the Ebdsv5-series and Ebsv5-series Azure virtual machines.
+Last updated : 04/05/2022
+# Ebv5-series
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+The memory-optimized Ebsv5 and Ebdsv5 Azure virtual machine (VM) series deliver higher remote storage performance in each VM size than the [Ev4 series](ev4-esv4-series.md). The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads, such as relational databases and data analytics applications.
+
+The Ebsv5 and Ebdsv5 VMs offer up to 120,000 IOPS and 4,000 MBps of remote disk storage throughput. Both series also include up to 512 GiB of RAM. The Ebdsv5 series has local SSD storage up to 2,400 GiB. Both series provide a 3x increase in remote storage performance for data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk; the Ebsv5 series doesn't. Standard SSD and Standard HDD disk storage aren't supported in the Ebv5 series.
+
+The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8272CL (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. They feature:
+
+- Up to 512 GiB of RAM
+- [Intel® Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html)
+- [Intel® Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html)
+- [Intel® Advanced Vector Extensions 512 (Intel® AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html)
+- Support for [Intel® Deep Learning Boost](https://software.intel.com/content/www/us/en/develop/topics/ai/deep-learning-boost.html)
+
+## Ebdsv5 series
+
+Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake) processors. The Ebdsv5 VM sizes feature up to 512 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance, low latency, and high-speed local storage. Remote data disk storage is billed separately from VMs.
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 1 and Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported (required)
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Supported
+- Nested virtualization: Supported
+
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS / MBps | Max NICs | Network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|
+| Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
+| Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
+| Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
+| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500 |
+| Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
+| Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
+| Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Ebdsv5 VMs.
+>
+> Accelerated networking can be applied to two NICs.
+
+> [!NOTE]
+> Ebdsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
++
+## Ebsv5 series
+
+Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake) processors. These VMs are ideal for memory-intensive enterprise applications and applications that benefit from high remote storage performance but don't require local SSD storage. Ebsv5-series VMs feature Intel® Hyper-Threading Technology. Remote data disk storage is billed separately from VMs.
+
+- [Premium Storage](premium-storage-performance.md): Supported
+- [Premium Storage caching](premium-storage-performance.md): Supported
+- [Live Migration](maintenance-and-updates.md): Supported
+- [Memory Preserving Updates](maintenance-and-updates.md): Supported
+- [VM Generation Support](generation-2.md): Generation 1 and Generation 2
+- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported (required)
+- [Ephemeral OS Disks](ephemeral-os-disks.md): Not supported
+- Nested virtualization: Supported
+
+| Size | vCPU | Memory: GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS / MBps | Max NICs | Network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|
+| Standard_E2bs_v5 | 2 | 16 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
+| Standard_E4bs_v5 | 4 | 32 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
+| Standard_E8bs_v5 | 8 | 64 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
+| Standard_E16bs_v5 | 16 | 128 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500 |
+| Standard_E32bs_v5 | 32 | 256 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
+| Standard_E48bs_v5 | 48 | 384 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
+| Standard_E64bs_v5 | 64 | 512 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
+
+> [!NOTE]
+> Accelerated networking is required and turned on by default on all Ebsv5 VMs.
+>
+> Accelerated networking can be applied to two NICs.
+
+> [!NOTE]
+> Ebsv5-series VMs can [burst their disk performance](disk-bursting.md) and get up to their bursting max for up to 30 minutes at a time.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+For pricing, see the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/).
+
+## Next steps
+
+- Use the Azure [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
virtual-machines Update Linux Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/update-linux-agent.md
AutoUpdate.Enabled=y
To enable, run:

```bash
-sudo sed -i 's/# AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/g' /etc/waagent.conf
+sudo sed -i 's/AutoUpdate.Enabled=n.*/AutoUpdate.Enabled=y/g' /etc/waagent.conf
```

Restart the waagent service.
AutoUpdate.Enabled=y
To enable, run:

```bash
-sudo sed -i 's/# AutoUpdate.Enabled=n/AutoUpdate.Enabled=y/g' /etc/waagent.conf
+sudo sed -i 's/AutoUpdate.Enabled=n.*/AutoUpdate.Enabled=y/g' /etc/waagent.conf
# Restart the waagent service
sudo systemctl restart walinuxagent.service
```

## Oracle Linux 6 and Oracle Linux 7
-For Oracle Linux, make sure that the `Addons` repository is enabled. Choose to edit the file `/etc/yum.repos.d/public-yum-ol6.repo`(Oracle Linux 6) or `/etc/yum.repos.d/public-yum-ol7.repo`(Oracle Linux), and change the line `enabled=0` to `enabled=1` under **[ol6_addons]** or **[ol7_addons]** in this file.
+For Oracle Linux, make sure that the `Addons` repository is enabled. Edit the file `/etc/yum.repos.d/public-yum-ol6.repo` (Oracle Linux 6) or `/etc/yum.repos.d/oracle-linux-ol7.repo` (Oracle Linux 7), and change the line `enabled=0` to `enabled=1` under **[ol6_addons]** or **[ol7_addons]** in this file.
Then, to install the latest version of the Azure Linux Agent, type:
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
You can also specify plan information, for example:
Sets the source image as an existing managed image of a generalized VHD or VM. > [!NOTE]
-> The source managed image must be of a supported OS and the image must same region as your Azure Image Builder template.
+> The source managed image must be of a supported OS and the image must reside in the same subscription and region as your Azure Image Builder template.
```json "source": {
The `imageId` should be the ResourceId of the managed image. Use `az image list`
### SharedImageVersion source
-Sets the source image an existing image version in an Azure Compute Gallery.
+Sets the source image as an existing image version in an Azure Compute Gallery.
> [!NOTE]
-> The source managed image must be of a supported OS and the image must same region as your Azure Image Builder template, if not, please replicate the image version to the Image Builder Template region.
+> The source shared image version must be of a supported OS, and the image version must reside in the same region as your Azure Image Builder template. If it isn't in the same region, replicate the image version to the Image Builder template region.
```json
virtual-machines Sizes Memory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-memory.md
Title: Azure VM sizes - Memory | Microsoft Docs
description: Lists the different memory optimized sizes available for virtual machines in Azure. Lists information about the number of vCPUs, data disks, and NICs as well as storage throughput and network bandwidth for sizes in this series.
-Previously updated : 10/20/2021
+Last updated : 04/04/2022
Memory optimized VM sizes offer a high memory-to-CPU ratio that is great for relational database servers, medium to large caches, and in-memory analytics.
- The [Eav4 and Easv4-series](eav4-easv4-series.md) utilize AMD's 2.35Ghz EPYC<sup>TM</sup> 7452 processor in a multi-threaded configuration with up to 256MB L3 cache, increasing options for running most memory optimized workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 & Esv3-series.
+- The [Ebsv5 and Ebdsv5 series](ebdsv5-ebsv5-series.md) deliver higher remote storage performance in each VM size than the Ev4 series. The increased remote storage performance of the Ebsv5 and Ebdsv5 VMs is ideal for storage throughput-intensive workloads, such as relational databases and data analytics applications.
- The [Ev3 and Esv3-series](ev3-esv3-series.md) run on the Intel&reg; Xeon&reg; 8171M 2.1 GHz (Skylake) or the Intel&reg; Xeon&reg; E5-2673 v4 2.3 GHz (Broadwell) processor in a hyper-threaded configuration, providing a better value proposition for most general purpose workloads, and bringing the Ev3 into alignment with the general purpose VMs of most other clouds. Memory has been expanded (from 7 GiB/vCPU to 8 GiB/vCPU) while disk and network limits have been adjusted on a per core basis to align with the move to hyper-threading. The Ev3 is the follow up to the high memory VM sizes of the D/Dv2 families.
- The [Ev4 and Esv4-series](ev4-esv4-series.md) run on 2nd Generation Intel&reg; Xeon&reg; Platinum 8272CL (Cascade Lake) processors in a hyper-threaded configuration, are ideal for various memory-intensive enterprise applications, and feature up to 504 GiB of RAM. They feature [Intel&reg; Turbo Boost Technology 2.0](https://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html), [Intel&reg; Hyper-Threading Technology](https://www.intel.com/content/www/us/en/architecture-and-technology/hyper-threading/hyper-threading-technology.html), and [Intel&reg; Advanced Vector Extensions 512 (Intel AVX-512)](https://www.intel.com/content/www/us/en/architecture-and-technology/avx-512-overview.html). The Ev4 and Esv4-series do not include a local temp disk. For more information, refer to [Azure VM sizes with no local temp disk](azure-vms-no-temp-disk.yml).
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
-Previously updated : 10/20/2021
+Last updated : 04/04/2022
This article describes the available sizes and options for the Azure virtual machines you can use to run your apps and workloads.
|-|-|-|
| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
| [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. |
-| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
+| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Ebdsv5, Ebsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
| [Storage optimized](sizes-storage.md) | Lsv2 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. |
| [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. |
| [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). |
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
We recommend that you use the address ranges enumerated in [RFC 1918](https://to
You can also deploy the Shared Address space reserved in [RFC 6598](https://datatracker.ietf.org/doc/html/rfc6598), which is treated as Private IP Address space in Azure:

* 100.64.0.0 - 100.127.255.255 (100.64/10 prefix)
-Other address spaces may work but may have undesirable side effects.
+Other address spaces, including all other IETF-recognized private, non-routable address spaces, may work but may have undesirable side effects.
In addition, you cannot add the following address ranges:

* 224.0.0.0/4 (Multicast)