Updates from: 08/11/2021 03:10:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Integer Transformations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/integer-transformations.md
Previously updated : 07/21/2021 Last updated : 08/10/2021
Determines whether a numeric claim is greater, lesser, equal, or not equal to a
| Item | TransformationClaimType | Data Type | Notes |
| ---- | ----------------------- | --------- | ----- |
| InputClaim | inputClaim | int | The first numeric claim, which is compared against the second number to check whether it is greater than, less than, equal to, or not equal to it. A null value throws an exception. |
-| InputParameter | CompareToValue | boolean | The second number, against which the first number is compared. |
+| InputParameter | CompareToValue | int | The second number, against which the first number is compared. |
| InputParameter | Operator | string | Possible values: `LESSTHAN`, `GREATERTHAN`, `GREATERTHANOREQUAL`, `LESSTHANOREQUAL`, `EQUAL`, `NOTEQUAL`. |
| InputParameter | throwError | boolean | Specifies whether this assertion should throw an error if the comparison result is `true`. Possible values: `true` (default) or `false`. <br />&nbsp;<br />When set to `true` (assertion mode) and the comparison result is `true`, an exception is thrown. When set to `false` (evaluation mode), the result is a new boolean claim type with a value of `true` or `false`. |
| OutputClaim | outputClaim | boolean | If `throwError` is set to `false`, this output claim contains `true` or `false` according to the comparison result. |
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 05/13/2021 Last updated : 08/10/2021
This feature helps applications handle scenarios such as:
- You don't require multi-factor authentication to access one application, but you do require it to access another. For example, the customer can sign in to an auto insurance application with a social or local account, but must verify the phone number before accessing the home insurance application registered in the same directory.
- You don't require multi-factor authentication to access an application in general, but you do require it to access the sensitive portions within it. For example, the customer can sign in to a banking application with a social or local account and check the account balance, but must verify the phone number before attempting a wire transfer.
+### Verification methods
+
+With [Conditional Access](conditional-access-identity-protection-overview.md), users may or may not be challenged for MFA based on the configuration decisions that you make as an administrator. The available multi-factor authentication methods are:
+
+- Email
+- SMS
+- Phone calls
+ ## Set multi-factor authentication ::: zone pivot="b2c-user-flow"
This feature helps applications handle scenarios such as:
- **Always on** - MFA is always required, regardless of your Conditional Access setup. During sign-up, users are prompted to enroll in MFA. During sign-in, if users aren't already enrolled in MFA, they're prompted to enroll.
- **Conditional** - During sign-up and sign-in, users are prompted to enroll in MFA (both new users and existing users who aren't enrolled in MFA). During sign-in, MFA is enforced only when an active Conditional Access policy evaluation requires it:
- - If the result is an MFA challenge with no risk, MFA is enforced. If the user isn't already enrolled in MFA, they're prompted to enroll.
- - If the result is an MFA challenge due to risk *and* the user is not enrolled in MFA, sign-in is blocked.
+ - If the result is an MFA challenge with no risk, MFA is enforced. If the user isn't already enrolled in MFA, they're prompted to enroll.
+ - If the result is an MFA challenge due to risk *and* the user is not enrolled in MFA, sign-in is blocked.
> [!NOTE] >
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/user-flow-custom-attributes.md
Previously updated : 03/10/2021 Last updated : 08/10/2021
Once you've created a new user using a user flow, which uses the newly created c
## Azure AD B2C extensions app
-Extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure AD B2C, app registrations. Get the application properties:
+Extension attributes can only be registered on an application object, even though they might contain data for a user. The extension attribute is attached to the application called `b2c-extensions-app`. Do not modify this application, as it's used by Azure AD B2C for storing user data. You can find this application under Azure AD B2C, app registrations.
++
+To get the application ID:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
+1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
+1. Select **App registrations**, and then select **All applications**.
+1. Select the `b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.` application.
+1. Copy the **Application ID**. Example: `11111111-1111-1111-1111-111111111111`.
+
++
+Get the application properties:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the **Directory + subscription** filter in the top menu, and then select the directory that contains your Azure AD B2C tenant.
Extension attributes can only be registered on an application object, even thoug
* **Application ID**. Example: `11111111-1111-1111-1111-111111111111`. * **Object ID**. Example: `22222222-2222-2222-2222-222222222222`. - ## Modify your custom policy To enable custom attributes in your policy, provide **Application ID** and Application **Object ID** in the AAD-Common technical profile metadata. The *AAD-Common* technical profile is found in the base [Azure Active Directory](active-directory-technical-profile.md) technical profile, and provides support for Azure AD user management. Other Azure AD technical profiles include the AAD-Common to leverage its configuration. Override the AAD-Common technical profile in the extension file.
active-directory Troubleshoot Policy Changes Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md
+
+ Title: Troubleshooting Conditional Access policy changes - Azure Active Directory
+description: Diagnose changes to Conditional Access policy with the Azure AD audit logs.
+++++ Last updated : 08/09/2021++++++++
+# Troubleshooting Conditional Access policy changes
+
+The Azure Active Directory (Azure AD) audit log is a valuable source of information when troubleshooting why and how Conditional Access policy changes happened in your environment.
+
+Audit log data is only kept for 30 days by default, which may not be long enough for every organization. Organizations can store data for longer periods by changing diagnostic settings in Azure AD to:
+
+- Send data to a Log Analytics workspace
+- Archive data to a storage account
+- Stream data to an Event Hub
+- Send data to a partner solution
+
+Find these options in the **Azure portal** > **Azure Active Directory** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one.
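If you route audit data to a Log Analytics workspace, a quick query such as the following sketch (it assumes the **AuditLogs** table is already populated in that workspace) shows how much audit history is currently available to you:

```kusto
AuditLogs
| summarize OldestRecord = min(TimeGenerated), NewestRecord = max(TimeGenerated), TotalRecords = count()
```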
+
+## Use the audit log
+
+1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+1. Browse to **Azure Active Directory** > **Audit logs**.
+1. Select the **Date** range you want to query in.
+1. Select **Activity** and choose one of the following:
+ 1. **Add conditional access policy** - This activity lists newly created policies.
+ 1. **Update conditional access policy** - This activity lists changed policies.
+ 1. **Delete conditional access policy** - This activity lists deleted policies.
++
+## Use Log Analytics
+
+Log Analytics allows organizations to query data using built-in queries or custom-created Kusto queries. For more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
++
+Once enabled, you can find Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The table of most interest to Conditional Access administrators is **AuditLogs**.
+
+```kusto
+AuditLogs
+| where OperationName == "Update conditional access policy"
+```
+
+Changes can be found under **TargetResources** > **modifiedProperties**.
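To pull the changed values directly into the result set, the query can be extended along the following lines. This is a sketch that assumes the standard **AuditLogs** schema, where each entry in `TargetResources` carries a `modifiedProperties` array with `displayName`, `oldValue`, and `newValue` fields:

```kusto
AuditLogs
| where OperationName == "Update conditional access policy"
// Expand the target resource (the policy) and each property that changed on it.
| mv-expand TargetResource = TargetResources
| mv-expand ModifiedProperty = TargetResource.modifiedProperties
| project TimeGenerated,
    PolicyName = tostring(TargetResource.displayName),
    Property = tostring(ModifiedProperty.displayName),
    OldValue = tostring(ModifiedProperty.oldValue),
    NewValue = tostring(ModifiedProperty.newValue)
```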
+
+## Reading the values
+
+The old and new values from the audit log and Log Analytics are in JSON format. Compare the two values to see the changes to the policy.
+
+Old policy example:
+
+```json
+{
+ "conditions": {
+ "applications": {
+ "applicationFilter": null,
+ "excludeApplications": [
+ ],
+ "includeApplications": [
+ "797f4846-ba00-4fd7-ba43-dac1f8f63013"
+ ],
+ "includeAuthenticationContextClassReferences": [
+ ],
+ "includeUserActions": [
+ ]
+ },
+ "clientAppTypes": [
+ "browser",
+ "mobileAppsAndDesktopClients"
+ ],
+ "servicePrincipalRiskLevels": [
+ ],
+ "signInRiskLevels": [
+ ],
+ "userRiskLevels": [
+ ],
+ "users": {
+ "excludeGroups": [
+ "eedad040-3722-4bcb-bde5-bc7c857f4983"
+ ],
+ "excludeRoles": [
+ ],
+ "excludeUsers": [
+ ],
+ "includeGroups": [
+ ],
+ "includeRoles": [
+ ],
+ "includeUsers": [
+ "All"
+ ]
+ }
+ },
+ "displayName": "Common Policy - Require MFA for Azure management",
+ "grantControls": {
+ "builtInControls": [
+ "mfa"
+ ],
+ "customAuthenticationFactors": [
+ ],
+ "operator": "OR",
+ "termsOfUse": [
+ "a0d3eb5b-6cbe-472b-a960-0baacbd02b51"
+ ]
+ },
+ "id": "334e26e9-9622-4e0a-a424-102ed4b185b3",
+ "modifiedDateTime": "2021-08-09T17:52:40.781994+00:00",
+ "state": "enabled"
+}
+
+```
+
+Updated policy example:
+
+```json
+{
+ "conditions": {
+ "applications": {
+ "applicationFilter": null,
+ "excludeApplications": [
+ ],
+ "includeApplications": [
+ "797f4846-ba00-4fd7-ba43-dac1f8f63013"
+ ],
+ "includeAuthenticationContextClassReferences": [
+ ],
+ "includeUserActions": [
+ ]
+ },
+ "clientAppTypes": [
+ "browser",
+ "mobileAppsAndDesktopClients"
+ ],
+ "servicePrincipalRiskLevels": [
+ ],
+ "signInRiskLevels": [
+ ],
+ "userRiskLevels": [
+ ],
+ "users": {
+ "excludeGroups": [
+ "eedad040-3722-4bcb-bde5-bc7c857f4983"
+ ],
+ "excludeRoles": [
+ ],
+ "excludeUsers": [
+ ],
+ "includeGroups": [
+ ],
+ "includeRoles": [
+ ],
+ "includeUsers": [
+ "All"
+ ]
+ }
+ },
+ "displayName": "Common Policy - Require MFA for Azure management",
+ "grantControls": {
+ "builtInControls": [
+ "mfa"
+ ],
+ "customAuthenticationFactors": [
+ ],
+ "operator": "OR",
+ "termsOfUse": [
+ ]
+ },
+ "id": "334e26e9-9622-4e0a-a424-102ed4b185b3",
+ "modifiedDateTime": "2021-08-09T17:52:54.9739405+00:00",
+ "state": "enabled"
+}
+
+```
+
+In the example above, the updated policy doesn't include terms of use in grant controls.
+
+### Restoring Conditional Access policies
+
+For more information about programmatically updating your Conditional Access policies using the Microsoft Graph API, see the article [Conditional Access: Programmatic access](howto-conditional-access-apis.md).
+
+## Next steps
+
+- [What is Azure Active Directory monitoring?](../reports-monitoring/overview-monitoring.md)
+- [Install and use the log analytics views for Azure Active Directory](../reports-monitoring/howto-install-use-log-analytics-views.md)
+- [Conditional Access: Programmatic access](howto-conditional-access-apis.md)
active-directory Scenario Protected Web Api Verification Scope App Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-protected-web-api-verification-scope-app-roles.md
public class TodoListController : Controller
// GET: api/values [HttpGet]
- [RequiredScope(scopeRequiredByApi)
+ [RequiredScope(scopeRequiredByApi)]
public IEnumerable<TodoItem> Get() { // Do the work and return the result.
public class TodoListController : Controller
{ // GET: api/values [HttpGet]
- [RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")
+ [RequiredScope(RequiredScopesConfigurationKey = "AzureAd:Scopes")]
public IEnumerable<TodoItem> Get() { // Do the work and return the result.
active-directory Security Operations Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-introduction.md
For the best results, we recommend that you monitor your domain controllers usin
* [Connect Microsoft Defender for Identity to Active Directory quickstart](/defender-for-identity/install-step2)
-If you do not plan to use Microsoft Defender for identity, you can monitor your domain controllers either by event log messages or by running PowerShell cmdlets.
+If you do not plan to use Microsoft Defender for identity, you can [monitor your domain controllers either by event log messages](/windows-server/identity/ad-ds/plan/security-best-practices/monitoring-active-directory-for-signs-of-compromise) or by [running PowerShell cmdlets](/windows-server/identity/ad-ds/deploy/troubleshooting-domain-controller-deployment).
## Components of hybrid authentication
See these security operations guide articles:
[Security operations for devices](security-operations-devices.md)
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Security Operations Privileged Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/security-operations-privileged-accounts.md
You can monitor privileged account sign-in events in the Azure AD Sign-in logs.
| Discover privileged accounts not registered for MFA. | High | Azure AD Graph API| Query for IsMFARegistered eq false for administrator accounts. [List credentialUserRegistrationDetails - Microsoft Graph beta](/graph/api/reportroot-list-credentialuserregistrationdetails?view=graph-rest-beta&tabs=http) | Audit and investigate to determine if intentional or an oversight. |
| Account lockout | High | Azure AD Sign-ins log | Status = Failure<br>-and-<br>error code = 50053 | Define a baseline threshold, and then monitor and adjust to suit your organizational behaviors and limit false alerts from being generated. |
| Account disabled/blocked for sign-ins | Low | Azure AD Sign-ins log | Status = Failure<br>-and-<br>Target = user UPN<br>-and-<br>error code = 50057 | This could indicate someone is trying to gain access to an account once they have left an organization. Although the account is blocked, it's still important to log and alert on this activity. |
-| MFA fraud alert/block | High | Azure AD Sign-ins log | Succeeded = false<br>-and-<br>Result detail = MFA denied<br>-and-<br>Target = user | Privileged user has indicated they haven't instigated the MFA prompt and could indicate an attacker has the password for the account. |
+| MFA fraud alert/block | High | Azure AD Sign-ins log/Azure Log Analytics | Succeeded = false<br>-and-<br>Result detail = MFA denied<br>-and-<br>Target = user | The privileged user has indicated they didn't instigate the MFA prompt, which could indicate an attacker has the password for the account. |
| Privileged account sign-ins outside of expected controls. | | Azure AD Sign-ins log | Status = failure<br>UserPrincipalName = <Admin account><br>Location = <unapproved location><br>IP Address = <unapproved IP><br>Device Info = <unapproved Browser, Operating System> | Monitor and alert on any entries that you have defined as unapproved. |
| Outside of normal sign-in times | High | Azure AD Sign-ins log | Status = success<br>-and-<br>Location =<br>-and-<br>Time = outside of working hours | Monitor and alert if sign-ins occur outside of expected times. It is important to find the normal working pattern for each privileged account and to alert if there are unplanned changes outside of normal working times. Sign-ins outside of normal working hours could indicate compromise or possible insider threats. |
| Identity protection risk | High | Identity Protection logs | Risk state = at risk<br>-and-<br>Risk level = low/medium/high<br>-and-<br>Activity = Unfamiliar sign-in/TOR, etc. | This indicates an abnormality was detected with the sign-in for the account and should be alerted on. |
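If these sign-in events are routed to a Log Analytics workspace, several of the checks above can be expressed as a query. The following is a sketch against the **SigninLogs** table; the account names are placeholders, so substitute your own list of privileged account UPNs and extend the error codes to match the events you monitor:

```kusto
SigninLogs
// 50053 = account locked, 50057 = account disabled/blocked for sign-ins (from the table above).
| where ResultType in ("50053", "50057")
// Placeholder UPNs - replace with your privileged accounts or a watchlist lookup.
| where UserPrincipalName in~ ("admin1@contoso.com", "admin2@contoso.com")
| project TimeGenerated, UserPrincipalName, ResultType, ResultDescription, IPAddress, Location
```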
You can monitor privileged account changes using Azure AD Audit logs and Azure M
| Approvals and deny elevation| Low| Azure AD Audit Logs| Service = Access Review<br>-and-<br>Category = UserManagement<br>-and-<br>Activity Type = Request Approved/Denied<br>-and-<br>Initiated actor = UPN| Monitor all elevations as they could give a clear indication of the timeline for an attack. |
| Changes to PIM settings| High| Azure AD Audit Logs| Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Update role setting in PIM<br>-and-<br>Status Reason = MFA on activation disabled (example)| One of these actions could reduce the security of the PIM elevation and make it easier for attackers to acquire a privileged account. |
| Elevation not occurring on SAW/PAW| High| Azure AD Sign In logs| Device ID<br>-and-<br>Browser<br>-and-<br>OS<br>-and-<br>Compliant/Managed<br>Correlate with:<br>Service = PIM<br>-and-<br>Category = Role Management<br>-and-<br>Activity Type = Add member to role completed (PIM activation)<br>-and-<br>Status = Success/failure<br>-and-<br>Modified properties = Role.DisplayName| If this is configured, any attempt to elevate on a non-PAW/SAW device should be investigated immediately as it could indicate an attacker trying to use the account. |
-| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log/Directory Activity<br>Assigns the caller to user access administrator<br>-and-<br>Status = succeeded, success, fail<br>-and-<br>Event initiated by| This should be investigated immediately if not a planned change. This setting could allow an attacker access to Azure subscriptions in your environment. |
+| Elevation to manage all Azure subscriptions| High| Azure Monitor| Activity Log Tab <br>Directory Activity Tab <br> Operations Name=Assigns the caller to user access administrator <br> -and- <br> Event Category=administrative <br> -and-<br>Status = succeeded, start, fail<br>-and-<br>Event initiated by| This should be investigated immediately if not a planned change. This setting could allow an attacker access to Azure subscriptions in your environment. |
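If the Azure Activity log is also sent to the same Log Analytics workspace, elevate-access events can be surfaced with a query along these lines. This is a sketch against the **AzureActivity** table and assumes the elevation shows up under the `Microsoft.Authorization/elevateAccess/action` operation:

```kusto
AzureActivity
// Elevate access assigns the caller the User Access Administrator role at root scope.
| where OperationNameValue =~ "Microsoft.Authorization/elevateAccess/action"
| project TimeGenerated, Caller, CallerIpAddress, ActivityStatusValue
```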
For more information about managing elevation, see [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md). For information on monitoring elevations using information available in the Azure AD logs, see [Azure Activity log](../../azure-monitor/essentials/activity-log.md), which is part of the Azure Monitor documentation.
See these security operations guide articles:
[Security operations for devices](security-operations-devices.md)
-[Security operations for infrastructure](security-operations-infrastructure.md)
+[Security operations for infrastructure](security-operations-infrastructure.md)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
You can move SaaS applications that are currently federated with ADFS to Azure A
For more information, see –
-- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](/manage-apps/migrate-adfs-apps-to-azure) and
+- [Moving application authentication from Active Directory Federation Services to Azure Active Directory](/azure/active-directory/manage-apps/migrate-adfs-apps-to-azure) and
- [AD FS to Azure AD application migration playbook for developers](/samples/azure-samples/ms-identity-adfs-to-aad/ms-identity-dotnet-adfs-to-aad) ### Remove relying party trust
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/reference-connect-version-history.md
Please follow this link to read more about [auto upgrade](how-to-connect-install
> >For version history information on retired versions, see [Azure AD Connect version release history archive](reference-connect-version-history-archive.md)
+## 2.0.8.0
+>[!NOTE]
+>This is a security update release of Azure AD Connect. This release requires Windows Server 2016 or newer. If you are using an older version of Windows Server, please use [version 1.6.11.3](#16113).
+>This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, please refer to the CVE.
+>You can download this release using [this link](https://www.microsoft.com/en-us/download/details.aspx?id=47594).
+
+### Release status
+8/10/2021: Released for download only, not available for auto upgrade.
+
+### Functional changes
+There are no functional changes in this release.
+
+## 1.6.11.3
+>[!NOTE]
+>This is a security update release of Azure AD Connect. This version is intended for customers who are running an older version of Windows Server and cannot upgrade their server to Windows Server 2016 or newer at this time. You cannot use this version to update an Azure AD Connect V2.0 server.
+>This release addresses a vulnerability as documented in [this CVE](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2021-36949). For more information about this vulnerability, please refer to the CVE.
+>You can download this release using [this link](https://www.microsoft.com/download/details.aspx?id=103336)
+
+### Release status
+8/10/2021: Released for download only, not available for auto upgrade.
+
+### Functional changes
+There are no functional changes in this release.
+ ## 2.0.3.0 >[!NOTE] >This is a major release of Azure AD Connect. Please refer to the [Azure Active Directory V2.0 article](whatis-azure-ad-connect-v2.md) for more details.
We fixed a bug in the sync errors compression utility that was not handling surr
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
The following are the high-level steps that Azure AD uses to determine if you ha
1. A user (or service principal) acquires a token to the Microsoft Graph or Azure AD Graph endpoint.
1. The user makes an API call to Azure Active Directory (Azure AD) via Microsoft Graph or Azure AD Graph using the issued token.
1. Depending on the circumstance, Azure AD takes one of the following actions:
- - Evaluates the user's role memberships based on the [wids claim](../../active-directory-b2c/access-tokens.md) in the user's access token.
+ - Evaluates the user's role memberships based on the [wids claim](../develop/access-tokens.md) in the user's access token.
   - Retrieves all the role assignments that apply for the user, either directly or via group membership, to the resource on which the action is being taken.
1. Azure AD determines if the action in the API call is included in the roles the user has for this resource.
1. If the user doesn't have a role with the action at the requested scope, access is not granted. Otherwise access is granted.
Using built-in roles in Azure AD is free, while custom roles requires an Azure A
- [Understand Azure AD roles](concept-understand-roles.md) - Create custom role assignments using [the Azure portal, Azure AD PowerShell, and Graph API](custom-create.md)-- [List role assignments](view-assignments.md)
+- [List role assignments](view-assignments.md)
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
Terminology for verifiable credentials (VCs) might be confusing if you're not fa
“A ***credential*** is a set of one or more claims made by an issuer. A verifiable credential is a tamper-evident credential that has authorship that can be cryptographically verified. Verifiable credentials can be used to build verifiable presentations, which can also be cryptographically verified. The claims in a credential can be about different subjects.”
- “A ***decentralized identifier*** is a portable URL-based identifier, also known as a DID, associated with an entity. These identifiers are often used in a verifiable credential and are associated with subjects, issuers, and verifiers.”
+ “A ***decentralized identifier*** is a portable URI-based identifier, also known as a DID, associated with an entity. These identifiers are often used in a verifiable credential and are associated with subjects, issuers, and verifiers.”
* In the preceding diagram, the public keys of the actor's DIDs are shown stored in the decentralized ledger (ION).- in the decentralized identifier document.
active-directory Plan Verification Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/verifiable-credentials/plan-verification-solution.md
Verifiable credentials can be used as additional proof to access to sensitive ap
#### Additional elements
-**Relying party web front end**: This is the web front end of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
-**Other back-end services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
+**Other backend services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
#### Design Considerations
The decentralized nature of verifiable credentials enables this scenario without
#### Additional elements
-**Relying party web front end**: This is the web front end of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
+**Relying party web frontend**: This is the web frontend of the application that is enhanced through Azure AD Verifiable Credential SDK or API calls for VC presentation and validation, based on your business requirements.
-**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
+**User access authorization logic**: Logic layer in the application that authorizes user access and is enhanced to consume the user attributes inside the VC to make authorization decisions.
**Other backend services and dependencies**: Represents the rest of the logic of the application, which typically is unchanged by the inclusion of identity proofing through VCs.
advisor Advisor Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-performance-recommendations.md
We have determined that your VMs are located in a region different or far from w
## Upgrade to the latest version of the Immersive Reader SDK We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
-Learn more about [Immersive reader SDK](../cognitive-services/immersive-reader/index.yml).
+Learn more about [Immersive reader SDK](../applied-ai-services/immersive-reader/index.yml).
## Improve VM performance by changing the maximum session limit
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/concepts-clusters-workloads.md
In an AKS cluster with multiple node pools, you may need to tell the Kubernetes
Node selectors let you define various parameters, like node OS, to control where a pod should be scheduled.
-The following basic example schedules an NGINX instance on a Linux node using the node selector *"beta.kubernetes.io/os": linux*:
+The following basic example schedules an NGINX instance on a Linux node using the node selector *"kubernetes.io/os": linux*:
```yaml kind: Pod
spec:
- name: myfrontend image: mcr.microsoft.com/oss/nginx/nginx:1.15.12-alpine nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
``` For more information on how to control where pods are scheduled, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
aks Kubernetes Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-service-principal.md
The `Scope` for a resource needs to be a full resource ID, such as */subscriptio
> [!NOTE] > If you have removed the Contributor role assignment from the node resource group, the operations below may fail.
+> Permission grants to clusters using System Managed Identity may take up to 60 minutes to populate.
The following sections detail common delegations that you may need to make.
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Two Kubernetes Services are also created:
app: azure-vote-back spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-back image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
Two Kubernetes Services are also created:
app: azure-vote-front spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-front image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
aks Kubernetes Walkthrough Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-powershell.md
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-back spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-back image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-front image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
aks Kubernetes Walkthrough Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-rm-template.md
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-back spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-back image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-front image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
aks Kubernetes Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough.md
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-back spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-back image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
Two [Kubernetes Services][kubernetes-service] are also created:
app: azure-vote-front spec: nodeSelector:
- "beta.kubernetes.io/os": linux
+ "kubernetes.io/os": linux
containers: - name: azure-vote-front image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/start-stop-cluster.md
Title: Start and Stop an Azure Kubernetes Service (AKS)
description: Learn how to stop or start an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/24/2020 Last updated : 08/09/2021 -+ # Stop and Start an Azure Kubernetes Service (AKS) cluster
-Your AKS workloads may not need to run continuously, for example a development cluster that is used only during business hours. This leads to times where your Azure Kubernetes Service (AKS) cluster might be idle, running no more than the system components. You can reduce the cluster footprint by [scaling all the `User` node pools to 0](scale-cluster.md#scale-user-node-pools-to-0), but your [`System` pool](use-system-pools.md) is still required to run the system components while the cluster is running.
+Your AKS workloads may not need to run continuously, for example a development cluster that is used only during business hours. This leads to times where your Azure Kubernetes Service (AKS) cluster might be idle, running no more than the system components. You can reduce the cluster footprint by [scaling all the `User` node pools to 0](scale-cluster.md#scale-user-node-pools-to-0), but your [`System` pool](use-system-pools.md) is still required to run the system components while the cluster is running.
To optimize your costs further during these periods, you can completely turn off (stop) your cluster. This action will stop your control plane and agent nodes altogether, allowing you to save on all the compute costs, while maintaining all your objects and cluster state stored for when you start it again. You can then pick up right where you left off after a weekend, or have your cluster running only while you run your batch jobs.

## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli] or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][kubernetes-walkthrough-powershell], or [using the Azure portal][aks-quickstart-portal].
### Limitations
When using the cluster start/stop feature, the following restrictions apply:
## Stop an AKS Cluster
+### [Azure CLI](#tab/azure-cli)
+ You can use the `az aks stop` command to stop a running AKS cluster's nodes and control plane. The following example stops a cluster named *myAKSCluster*: ```azurecli-interactive
You can verify when your cluster is stopped by using the [az aks show][az-aks-sh
If the `provisioningState` shows `Stopping` that means your cluster hasn't fully stopped yet.
+### [Azure PowerShell](#tab/azure-powershell)
+
+You can use the [Stop-AzAksCluster][stop-azakscluster] cmdlet to stop a running AKS cluster's nodes and control plane. The following example stops a cluster named *myAKSCluster*:
+
+```azurepowershell-interactive
+Stop-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+```
+
+You can verify your cluster is stopped using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows as `Stopped` as shown in the following output:
+
+```Output
+ProvisioningState : Stopped
+MaxAgentPools : 100
+KubernetesVersion : 1.20.7
+...
+```
+
+If the `ProvisioningState` shows `Stopping` that means your cluster hasn't fully stopped yet.
+++ > [!IMPORTANT] > If you are using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/) the stop operation can take longer as the drain process will take more time to complete. ## Start an AKS Cluster
-You can use the `az aks start` command to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
+### [Azure CLI](#tab/azure-cli)
+
+You can use the `az aks start` command to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
The following example starts a cluster named *myAKSCluster*: ```azurecli-interactive
You can verify when your cluster has started by using the [az aks show][az-aks-s
If the `provisioningState` shows `Starting` that means your cluster hasn't fully started yet.
+### [Azure PowerShell](#tab/azure-powershell)
+
+You can use the [Start-AzAksCluster][start-azakscluster] cmdlet to start a stopped AKS cluster's nodes and control plane. The cluster is restarted with the previous control plane state and number of agent nodes.
+The following example starts a cluster named *myAKSCluster*:
+
+```azurepowershell-interactive
+Start-AzAksCluster -Name myAKSCluster -ResourceGroupName myResourceGroup
+```
+
+You can verify when your cluster has started using the [Get-AzAksCluster][get-azakscluster] cmdlet and confirming the `ProvisioningState` shows `Succeeded` as shown in the following output:
+
+```Output
+ProvisioningState : Succeeded
+MaxAgentPools : 100
+KubernetesVersion : 1.20.7
+...
+```
+
+If the `ProvisioningState` shows `Starting` that means your cluster hasn't fully started yet.
+++ > [!NOTE] > When you start your cluster back up, the following is expected behavior:
->
+>
> * The IP address of your API server may change. > * If you are using cluster autoscaler, when you start your cluster back up your current node count may not be between the min and max range values you set. The cluster starts with the number of nodes it needs to run its workloads, which isn't impacted by your autoscaler settings. When your cluster performs scaling operations, the min and max values will impact your current node count and your cluster will eventually enter and remain in that desired range until you stop your cluster.
If the `provisioningState` shows `Starting` that means your cluster hasn't fully
[az-feature-list]: /cli/azure/feature#az_feature_list [az-provider-register]: /cli/azure/provider#az_provider_register [az-aks-show]: /cli/azure/aks#az_aks_show
+[kubernetes-walkthrough-powershell]: kubernetes-walkthrough-powershell.md
+[stop-azakscluster]: /powershell/module/az.aks/stop-azakscluster
+[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
+[start-azakscluster]: /powershell/module/az.aks/start-azakscluster
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/supported-kubernetes-versions.md
Title: Supported Kubernetes versions in Azure Kubernetes Service
description: Understand the Kubernetes version support policy and lifecycle of clusters in Azure Kubernetes Service (AKS) Previously updated : 03/29/2021 Last updated : 08/09/2021 + # Supported Kubernetes versions in Azure Kubernetes Service (AKS)
-The Kubernetes community releases minor versions roughly every three months. Recently, the Kubernetes community has [increased the support window for each version from 9 months to 12 months](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19.
+The Kubernetes community releases minor versions roughly every three months. Recently, the Kubernetes community has [increased the support window for each version from 9 months to 12 months](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/), starting with version 1.19.
Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
You can use one minor version older or newer of `kubectl` relative to your *kube
For example, if your *kube-apiserver* is at *1.17*, then you can use versions *1.16* to *1.18* of `kubectl` with that *kube-apiserver*.
+### [Azure CLI](#tab/azure-cli)
+ To install or update your version of `kubectl`, run `az aks install-cli`.
+### [Azure PowerShell](#tab/azure-powershell)
+
+To install or update your version of `kubectl`, run [Install-AzAksKubectl][install-azakskubectl].
+++ ## Release and deprecation process You can reference upcoming version releases and deprecations on the [AKS Kubernetes Release Calendar](#aks-kubernetes-release-calendar).
For new **minor** versions of Kubernetes:
> [!NOTE] > To find out who is your subscription administrators or to change it, please refer to [manage Azure subscriptions](../cost-management-billing/manage/add-change-subscription-administrator.md#assign-a-subscription-administrator).
-
+ * Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support. For new **patch** versions of Kubernetes:
Specific patch releases may be skipped or rollout accelerated, depending on the
## Azure portal and CLI versions
-When you deploy an AKS cluster in the portal or with the Azure CLI, the cluster defaults to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*.
+When you deploy an AKS cluster in the portal, with the Azure CLI, or with Azure PowerShell, the cluster defaults to the N-1 minor version and latest patch. For example, if AKS supports *1.17.a*, *1.17.b*, *1.16.c*, *1.16.d*, *1.15.e*, and *1.15.f*, the default version selected is *1.16.c*.
+
+### [Azure CLI](#tab/azure-cli)
To find out what versions are currently available for your subscription and region, use the [az aks get-versions][az-aks-get-versions] command. The following example lists the available Kubernetes versions for the *EastUS* region:
To find out what versions are currently available for your subscription and regi
az aks get-versions --location eastus --output table ``` +
+### [Azure PowerShell](#tab/azure-powershell)
+
+To find out what versions are currently available for your subscription and region, use the
+[Get-AzAksVersion][get-azaksversion] cmdlet. The following example lists the available Kubernetes versions for the *EastUS* region:
+
+```azurepowershell-interactive
+Get-AzAksVersion -Location eastus
+```
+++ ## AKS Kubernetes Release Calendar For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History). | K8s version | Upstream release | AKS preview | AKS GA | End of life | |--|-|--||-|
-| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | *1.21 GA |
-| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
+| 1.18 | Mar-23-20 | May 2020 | Aug 2020 | *1.21 GA |
+| 1.19 | Aug-04-20 | Sep 2020 | Nov 2020 | 1.22 GA |
| 1.20 | Dec-08-20 | Jan 2021 | Mar 2021 | 1.23 GA | | 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA | | 1.22 | Aug-04-21 | Sept 2021 | Oct 2021 | 1.25 GA |
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
**How does Microsoft notify me of new Kubernetes versions?**
-The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases) as well as emails to subscription administrators who own clusters that are going to fall out of support. In addition to announcements, AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to notify the customer inside the Azure Portal to alert users if they are out of support, as well as alerting them of deprecated APIs that will affect their application or development process.
+The AKS team publishes pre-announcements with planned dates of the new Kubernetes versions in our documentation, our [GitHub](https://github.com/Azure/AKS/releases) as well as emails to subscription administrators who own clusters that are going to fall out of support. In addition to announcements, AKS also uses [Azure Advisor](../advisor/advisor-overview.md) to notify the customer inside the Azure Portal to alert users if they are out of support, as well as alerting them of deprecated APIs that will affect their application or development process.
**How often should I expect to upgrade Kubernetes versions to stay in support?**
-Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you will be able to upgrade at a minimum of once a year to stay on a supported version.
+Starting with Kubernetes 1.19, the [open source community has expanded support to 1 year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). AKS commits to enabling patches and support matching the upstream commitments. For AKS clusters on 1.19 and greater, you will be able to upgrade at a minimum of once a year to stay on a supported version.
For versions on 1.18 or below, the window of support remains at 9 months, requiring an upgrade once every 9 months to stay on a supported version. Regularly test new versions and be prepared to upgrade to newer versions to capture the latest stable enhancements within Kubernetes.
Downgrades are not supported.
'Outside of Support' means that: * The version you're running is outside of the supported versions list.
-* You'll be asked to upgrade the cluster to a supported version when requesting support, unless you're within the 30-day grace period after version deprecation.
+* You'll be asked to upgrade the cluster to a supported version when requesting support, unless you're within the 30-day grace period after version deprecation.
Additionally, AKS doesn't make any runtime or other guarantees for clusters outside of the supported versions list.
No. Once a version is deprecated/removed, you cannot create a cluster with that
**I am on a freshly deprecated version, can I still add new node pools? Or will I have to upgrade?**
-No. You will not be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version. However, this may require you to update the control plane first.
+No. You will not be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version. However, this may require you to update the control plane first.
## Next steps
For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes
[aks-upgrade]: upgrade-cluster.md [az-aks-get-versions]: /cli/azure/aks#az_aks_get_versions [preview-terms]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[install-azakskubectl]: /powershell/module/az.aks/install-azakskubectl
+[get-azaksversion]: /powershell/module/az.aks/get-azaksversion
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/update-credentials.md
To check the expiration date of your service principal, use the [az ad sp creden
```azurecli SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \ --query servicePrincipalProfile.clientId -o tsv)
-az ad sp credential list --id $SP_ID --query "[].endDate" -o tsv
+az ad sp credential list --id "$SP_ID" --query "[].endDate" -o tsv
``` ### Reset the existing service principal credential
SP_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster \
With a variable set that contains the service principal ID, now reset the credentials using [az ad sp credential reset][az-ad-sp-credential-reset]. The following example lets the Azure platform generate a new secure secret for the service principal. This new secure secret is also stored as a variable. ```azurecli-interactive
-SP_SECRET=$(az ad sp credential reset --name $SP_ID --query password -o tsv)
+SP_SECRET=$(az ad sp credential reset --name "$SP_ID" --query password -o tsv)
``` Now continue on to [update AKS cluster with new service principal credentials](#update-aks-cluster-with-new-service-principal-credentials). This step is necessary for the Service Principal changes to reflect on the AKS cluster.
az aks update-credentials \
--resource-group myResourceGroup \ --name myAKSCluster \ --reset-service-principal \
- --service-principal $SP_ID \
- --client-secret $SP_SECRET
+ --service-principal "$SP_ID" \
+ --client-secret "$SP_SECRET"
``` For small and midsize clusters, it takes a few moments for the service principal credentials to be updated in the AKS.
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
Azure Active Directory pod-managed identities uses Kubernetes primitives to asso
You must have the following resource installed: * The Azure CLI, version 2.20.0 or later
-* The `azure-preview` extension version 0.5.5 or later
+* The `aks-preview` extension version 0.5.5 or later
### Limitations
az feature register --name EnablePodIdentityPreview --namespace Microsoft.Contai
### Install the `aks-preview` Azure CLI
-You also need the *aks-preview* Azure CLI extension version 0.4.64 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+You also need the *aks-preview* Azure CLI extension version 0.5.5 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
```azurecli-interactive # Install the aks-preview extension
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Update an existing AKS cluster with Azure CNI to include pod-managed identity. ```azurecli-interactive
-az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity --network-plugin azure
+az aks update -g $MY_RESOURCE_GROUP -n $MY_CLUSTER --enable-pod-identity
``` ## Using Kubenet network plugin with Azure Active Directory pod-managed identities
aks Windows Container Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-cli.md
spec:
app: sample spec: nodeSelector:
- "beta.kubernetes.io/os": windows
+ "kubernetes.io/os": windows
containers: - name: sample image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
aks Windows Container Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-container-powershell.md
spec:
app: sample spec: nodeSelector:
- "beta.kubernetes.io/os": windows
+ "kubernetes.io/os": windows
containers: - name: sample image: mcr.microsoft.com/dotnet/framework/samples:aspnetapp
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/windows-faq.md
az aks update \
## How many node pools can I create?
-The AKS cluster can have a maximum of 10 node pools. You can have a maximum of 1000 nodes across those node pools. [Node pool limitations][nodepool-limitations].
+The AKS cluster can have a maximum of 100 node pools. You can have a maximum of 1000 nodes across those node pools. [Node pool limitations][nodepool-limitations].
## What can I name my Windows node pools?
To get started with Windows Server containers in AKS, [create a node pool that r
[hybrid-vms]: ../virtual-machines/windows/hybrid-use-benefit-licensing.md [resource-groups]: faq.md#why-are-two-resource-groups-created-with-aks [dsr]: ../load-balancer/load-balancer-multivip-overview.md#rule-type-2-backend-port-reuse-by-using-floating-ip
-[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
+[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
This example shows how to use the [Validate JWT](api-management-access-restricti
| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 | | header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A | | query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing JWT token | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing JWT token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
| id | The `id` attribute on the `key` element allows you to specify the string that will be matched against `kid` claim in the token (if present) to find out the appropriate key to use for signature validation. | No | N/A |
| match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all |
| require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
app-service App Service Web Tutorial Connect Msi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-connect-msi.md
Prepare your environment for the Azure CLI.
First enable Azure AD authentication to SQL Database by assigning an Azure AD user as the Active Directory admin of the server. This user is different from the Microsoft account you used to sign up for your Azure subscription. It must be a user that you created, imported, synced, or invited into Azure AD. For more information on allowed Azure AD users, see [Azure AD features and limitations in SQL Database](../azure-sql/database/authentication-aad-overview.md#azure-ad-features-and-limitations).
-If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
+1. If your Azure AD tenant doesn't have a user yet, create one by following the steps at [Add or delete users using Azure Active Directory](../active-directory/fundamentals/add-users-azure-active-directory.md).
-Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) and replace *\<user-principal-name>*. The result is saved to a variable.
+1. Find the object ID of the Azure AD user using the [`az ad user list`](/cli/azure/ad/user#az_ad_user_list) and replace *\<user-principal-name>*. The result is saved to a variable.
-```azurecli-interactive
-azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
-```
-> [!TIP]
-> To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`.
->
+ ```azurecli-interactive
+ azureaduser=$(az ad user list --filter "userPrincipalName eq '<user-principal-name>'" --query [].objectId --output tsv)
+ ```
-Add this Azure AD user as an Active Directory admin using [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix).
+ > [!TIP]
+ > To see the list of all user principal names in Azure AD, run `az ad user list --query [].userPrincipalName`.
+ >
-```azurecli-interactive
-az sql server ad-admin create --resource-group myResourceGroup --server-name <server-name> --display-name ADMIN --object-id $azureaduser
-```
+1. Add this Azure AD user as an Active Directory admin using the [`az sql server ad-admin create`](/cli/azure/sql/server/ad-admin#az_sql_server_ad_admin_create) command in the Cloud Shell. In the following command, replace *\<server-name>* with the server name (without the `.database.windows.net` suffix).
+
+ ```azurecli-interactive
+ az sql server ad-admin create --resource-group myResourceGroup --server-name <server-name> --display-name ADMIN --object-id $azureaduser
+ ```
For more information on adding an Active Directory admin, see [Provision an Azure Active Directory administrator for your server](../azure-sql/database/authentication-aad-configure.md#provision-azure-ad-admin-sql-managed-instance).

## Set up Visual Studio
-### Windows client
-Visual Studio for Windows is integrated with Azure AD authentication. To enable development and debugging in Visual Studio, add your Azure AD user in Visual Studio by selecting **File** > **Account Settings** from the menu, and click **Add an account**.
+# [Windows client](#tab/windowsclient)
-To set the Azure AD user for Azure service authentication, select **Tools** > **Options** from the menu, then select **Azure Service Authentication** > **Account Selection**. Select the Azure AD user you added and click **OK**.
+1. Visual Studio for Windows is integrated with Azure AD authentication. To enable development and debugging in Visual Studio, add your Azure AD user in Visual Studio by selecting **File** > **Account Settings** from the menu, and clicking **Add an account**.
-You're now ready to develop and debug your app with the SQL Database as the back end, using Azure AD authentication.
+1. To set the Azure AD user for Azure service authentication, select **Tools** > **Options** from the menu, then select **Azure Service Authentication** > **Account Selection**. Select the Azure AD user you added and click **OK**.
-### macOS client
+# [macOS client](#tab/macosclient)
-Visual Studio for Mac is not integrated with Azure AD authentication. However, the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) library that you will use later can use tokens from Azure CLI. To enable development and debugging in Visual Studio, first you need to [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
+1. Visual Studio for Mac is not integrated with Azure AD authentication. However, the [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication) library that you will use later can use tokens from Azure CLI. To enable development and debugging in Visual Studio, [install Azure CLI](/cli/azure/install-azure-cli) on your local machine.
-Once Azure CLI is installed on your local machine, sign in to Azure CLI with the following command using your Azure AD user:
+1. Sign in to Azure CLI with the following command using your Azure AD user:
+
+ ```azurecli
+ az login --allow-no-subscriptions
+ ```
+
+--
-```azurecli
-az login --allow-no-subscriptions
-```
You're now ready to develop and debug your app with the SQL Database as the back end, using Azure AD authentication.

## Modify your project

The steps you follow for your project depend on whether it's an ASP.NET project or an ASP.NET Core project.

-- [Modify ASP.NET](#modify-aspnet)
-- [Modify ASP.NET Core](#modify-aspnet-core)
-
-### Modify ASP.NET
+# [ASP.NET](#tab/dotnet)
-In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication):
+1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication):
-```powershell
-Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.4.0
-```
-
-In *Web.config*, working from the top of the file and make the following changes:
-
-- In `<configSections>`, add the following section declaration in it:
-
- ```xml
- <section name="SqlAuthenticationProviders" type="System.Data.SqlClient.SqlAuthenticationProviderConfigurationSection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
+ ```powershell
+ Install-Package Microsoft.Azure.Services.AppAuthentication -Version 1.4.0
   ```
-- below the closing `</configSections>` tag, add the following XML code for `<SqlAuthenticationProviders>`.
-
- ```xml
- <SqlAuthenticationProviders>
- <providers>
- <add name="Active Directory Interactive" type="Microsoft.Azure.Services.AppAuthentication.SqlAppAuthenticationProvider, Microsoft.Azure.Services.AppAuthentication" />
- </providers>
- </SqlAuthenticationProviders>
- ```
-
-- Find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;UID=AnyString;Authentication=Active Directory Interactive"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name.
-
-> [!NOTE]
-> The SqlAuthenticationProvider you just registered is based on top of the AppAuthentication library you installed earlier. By default, it uses a system-assigned identity. To leverage a user-assigned identity, you will need to provide an additional configuration. Please see [connection string support](/dotnet/api/overview/azure/service-to-service-authentication#connection-string-support) for the AppAuthentication library.
-
-That's every thing you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
-
-Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
-
-### Modify ASP.NET Core
+1. In *Web.config*, work from the top of the file and make the following changes:
+
+ - In `<configSections>`, add the following section declaration in it:
+
+ ```xml
+ <section name="SqlAuthenticationProviders" type="System.Data.SqlClient.SqlAuthenticationProviderConfigurationSection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
+ ```
+
+ - below the closing `</configSections>` tag, add the following XML code for `<SqlAuthenticationProviders>`.
+
+ ```xml
+ <SqlAuthenticationProviders>
+ <providers>
+ <add name="Active Directory Interactive" type="Microsoft.Azure.Services.AppAuthentication.SqlAppAuthenticationProvider, Microsoft.Azure.Services.AppAuthentication" />
+ </providers>
+ </SqlAuthenticationProviders>
+ ```
+
+   - Find the connection string called `MyDbConnection` and replace its `connectionString` value with `"server=tcp:<server-name>.database.windows.net;database=<db-name>;UID=AnyString;Authentication=Active Directory Interactive"`. Replace _\<server-name>_ and _\<db-name>_ with your server name and database name. (A consolidated sketch of these *Web.config* changes appears after this step.)
+
+ > [!NOTE]
+   > The `SqlAuthenticationProvider` you just registered is built on top of the AppAuthentication library you installed earlier. By default, it uses a system-assigned identity. To use a user-assigned identity, you need to provide additional configuration. See [connection string support](/dotnet/api/overview/azure/service-to-service-authentication#connection-string-support) for the AppAuthentication library.
+
+   That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app.
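For reference, a minimal sketch of how these *Web.config* pieces might look together (the `providerName` attribute is assumed from the standard ASP.NET template, the surrounding file content is omitted, and the placeholders must be replaced with your server and database names):

```xml
<configuration>
  <configSections>
    <section name="SqlAuthenticationProviders" type="System.Data.SqlClient.SqlAuthenticationProviderConfigurationSection, System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
  </configSections>
  <SqlAuthenticationProviders>
    <providers>
      <add name="Active Directory Interactive" type="Microsoft.Azure.Services.AppAuthentication.SqlAppAuthenticationProvider, Microsoft.Azure.Services.AppAuthentication" />
    </providers>
  </SqlAuthenticationProviders>
  <connectionStrings>
    <!-- Replace <server-name> and <db-name> before use. -->
    <add name="MyDbConnection" connectionString="server=tcp:<server-name>.database.windows.net;database=<db-name>;UID=AnyString;Authentication=Active Directory Interactive" providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```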
+
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
+
+# [ASP.NET Core](#tab/dotnetcore)
> [!NOTE]
> **Microsoft.Azure.Services.AppAuthentication** is no longer recommended for use with the new Azure SDK.
> It is replaced with the new **Azure Identity client library** available for .NET, Java, TypeScript and Python, and should be used for all new development.
> Information about how to migrate to `Azure.Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration).
-In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Azure.Services.AppAuthentication](https://www.nuget.org/packages/Microsoft.Azure.Services.AppAuthentication):
-
-```powershell
-Install-Package Microsoft.Data.SqlClient -Version 2.1.2
-Install-Package Azure.Identity -Version 1.4.0
-```
-
-In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string isn't used at all because the local development environment uses a Sqlite database file, and the Azure production environment uses a connection string from App Service. With Active Directory authentication, you want both environments to use the same connection string. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
+1. In Visual Studio, open the Package Manager Console and add the NuGet packages [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient) and [Azure.Identity](https://www.nuget.org/packages/Azure.Identity):
-```json
-"Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Device Code Flow; Database=<database-name>;"
-```
+ ```powershell
+ Install-Package Microsoft.Data.SqlClient -Version 2.1.2
+ Install-Package Azure.Identity -Version 1.4.0
+ ```
-> [!NOTE]
-> We use the `Active Directory Device Code Flow` authentication type because this is the closest we can get to a custom option. Ideally, a `Custom Authentication` type would be available. Without a better term to use at this time, we're using `Device Code Flow`.
->
+1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string isn't used at all because the local development environment uses a Sqlite database file, and the Azure production environment uses a connection string from App Service. With Active Directory authentication, you want both environments to use the same connection string. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with:
-Next, you need to create a custom authentication provider class to acquire and supply the Entity Framework database context with the access token for the SQL Database. In the *Data\\* directory, add a new class `CustomAzureSQLAuthProvider.cs` with the following code inside:
+ ```json
+ "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Device Code Flow; Database=<database-name>;"
+ ```
-```csharp
-public class CustomAzureSQLAuthProvider : SqlAuthenticationProvider
-{
- private static readonly string[] _azureSqlScopes = new[]
- {
- "https://database.windows.net//.default"
- };
+ > [!NOTE]
+ > We use the `Active Directory Device Code Flow` authentication type because this is the closest we can get to a custom option. Ideally, a `Custom Authentication` type would be available. Without a better term to use at this time, we're using `Device Code Flow`.
+ >
- private static readonly TokenCredential _credential = new DefaultAzureCredential();
+1. Next, you need to create a custom authentication provider class to acquire and supply the Entity Framework database context with the access token for the SQL Database. In the *Data\\* directory, add a new class `CustomAzureSQLAuthProvider.cs` with the following code inside:
- public override async Task<SqlAuthenticationToken> AcquireTokenAsync(SqlAuthenticationParameters parameters)
+ ```csharp
+ public class CustomAzureSQLAuthProvider : SqlAuthenticationProvider
{
- var tokenRequestContext = new TokenRequestContext(_azureSqlScopes);
- var tokenResult = await _credential.GetTokenAsync(tokenRequestContext, default);
- return new SqlAuthenticationToken(tokenResult.Token, tokenResult.ExpiresOn);
+ private static readonly string[] _azureSqlScopes = new[]
+ {
+ "https://database.windows.net//.default"
+ };
+
+ private static readonly TokenCredential _credential = new DefaultAzureCredential();
+
+ public override async Task<SqlAuthenticationToken> AcquireTokenAsync(SqlAuthenticationParameters parameters)
+ {
+ var tokenRequestContext = new TokenRequestContext(_azureSqlScopes);
+ var tokenResult = await _credential.GetTokenAsync(tokenRequestContext, default);
+ return new SqlAuthenticationToken(tokenResult.Token, tokenResult.ExpiresOn);
+ }
+
+ public override bool IsSupported(SqlAuthenticationMethod authenticationMethod) => authenticationMethod.Equals(SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow);
}
+ ```
- public override bool IsSupported(SqlAuthenticationMethod authenticationMethod) => authenticationMethod.Equals(SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow);
-}
-```
-
-In *Startup.cs*, update the `ConfigureServices()` method with the following code:
+1. In *Startup.cs*, update the `ConfigureServices()` method with the following code:
-```csharp
-services.AddControllersWithViews();
-services.AddDbContext<MyDatabaseContext>(options =>
-{
- SqlAuthenticationProvider.SetProvider(
- SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow,
- new CustomAzureSQLAuthProvider());
- var sqlConnection = new SqlConnection(Configuration.GetConnectionString("MyDbConnection"));
- options.UseSqlServer(sqlConnection);
-});
-```
+ ```csharp
+ services.AddControllersWithViews();
+ services.AddDbContext<MyDatabaseContext>(options =>
+ {
+ SqlAuthenticationProvider.SetProvider(
+ SqlAuthenticationMethod.ActiveDirectoryDeviceCodeFlow,
+ new CustomAzureSQLAuthProvider());
+ var sqlConnection = new SqlConnection(Configuration.GetConnectionString("MyDbConnection"));
+ options.UseSqlServer(sqlConnection);
+ });
+ ```
-> [!NOTE]
-> This demonstration code is synchronous for clarity and simplicity.
+ > [!NOTE]
+ > This demonstration code is synchronous for clarity and simplicity.
+
+ The preceding code uses the `Azure.Identity` library so that it can authenticate and retrieve an access token for the database, no matter where the code is running. If you're running on your local machine, `DefaultAzureCredential()` loops through a number of options to find a valid account that is logged in. You can read more about the [DefaultAzureCredential class](/dotnet/api/azure.identity.defaultazurecredential).
-The preceding code uses the `Azure.Identity` library so that it can authenticate and retrieve an access token for the database, no matter where the code is running. If you're running on your local machine, `DefaultAzureCredential()` loops through a number of options to find a valid account that is logged in. You can read more about the [DefaultAzureCredential class](/dotnet/api/azure.identity.defaultazurecredential?view=azure-dotnet).
+ That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
-That's everything you need to connect to SQL Database. When debugging in Visual Studio, your code uses the Azure AD user you configured in [Set up Visual Studio](#set-up-visual-studio). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token.
+1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
-Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio.
+--
## Use managed identity connectivity
Here's an example of the output:
> ``` >
-In the Cloud Shell, sign in to SQL Database by using the SQLCMD command. Replace _\<server-name>_ with your server name, _\<db-name>_ with the database name your app uses, and _\<aad-user-name>_ and _\<aad-password>_ with your Azure AD user's credentials.
+1. In the Cloud Shell, sign in to SQL Database by using the SQLCMD command. Replace _\<server-name>_ with your server name, _\<db-name>_ with the database name your app uses, and _\<aad-user-name>_ and _\<aad-password>_ with your Azure AD user's credentials.
-```bash
-sqlcmd -S <server-name>.database.windows.net -d <db-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
-```
+ ```bash
+ sqlcmd -S <server-name>.database.windows.net -d <db-name> -U <aad-user-name> -P "<aad-password>" -G -l 30
+ ```
-In the SQL prompt for the database you want, run the following commands to grant the permissions your app needs. For example,
+1. In the SQL prompt for the database you want, run the following commands to grant the permissions your app needs. For example,
-```sql
-CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
-ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
-ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
-ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
-GO
-```
+ ```sql
+ CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
+ ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
+ ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
+ ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
+ GO
+ ```
-*\<identity-name>* is the name of the managed identity in Azure AD. If the identity is system-assigned, the name is always the same as the name of your App Service app. For a [deployment slot](deploy-staging-slots.md), the name of its system-assigned identity is *\<app-name>/slots/\<slot-name>*. To grant permissions for an Azure AD group, use the group's display name instead (for example, *myAzureSQLDBAccessGroup*).
+ *\<identity-name>* is the name of the managed identity in Azure AD. If the identity is system-assigned, the name is always the same as the name of your App Service app. For a [deployment slot](deploy-staging-slots.md), the name of its system-assigned identity is *\<app-name>/slots/\<slot-name>*. To grant permissions for an Azure AD group, use the group's display name instead (for example, *myAzureSQLDBAccessGroup*).
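For example, for a hypothetical app named `myapp` with a deployment slot named `staging`, granting read access to the slot's system-assigned identity would look like this (the names are illustrative):

```sql
-- "myapp/slots/staging" is the system-assigned identity name for the staging slot of a hypothetical app named "myapp".
CREATE USER [myapp/slots/staging] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [myapp/slots/staging];
GO
```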
-Type `EXIT` to return to the Cloud Shell prompt.
+1. Type `EXIT` to return to the Cloud Shell prompt.
-> [!NOTE]
-> The back-end services of managed identities also [maintains a token cache](overview-managed-identity.md#obtain-tokens-for-azure-resources) that updates the token for a target resource only when it expires. If you make a mistake configuring your SQL Database permissions and try to modify the permissions *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires.
+ > [!NOTE]
+   > The back-end services of managed identities also [maintain a token cache](overview-managed-identity.md#obtain-tokens-for-azure-resources) that updates the token for a target resource only when it expires. If you make a mistake configuring your SQL Database permissions and try to modify the permissions *after* trying to get a token with your app, you don't actually get a new token with the updated permissions until the cached token expires.
-> [!NOTE]
-> AAD is not supported for on-prem SQL Server, and this includes MSIs.
+ > [!NOTE]
+ > Azure Active Directory and managed identities are not supported for on-premises SQL Server.
### Modify connection string
az webapp config connection-string delete --resource-group myResourceGroup --nam
All that's left now is to publish your changes to Azure.
-**If you came from [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)**, publish your changes in Visual Studio. In the **Solution Explorer**, right-click your **DotNetAppSqlDb** project and select **Publish**.
+# [ASP.NET](#tab/dotnet)
-![Publish from Solution Explorer](./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png)
+1. **If you came from [Tutorial: Build an ASP.NET app in Azure with SQL Database](app-service-web-tutorial-dotnet-sqldatabase.md)**, publish your changes in Visual Studio. In the **Solution Explorer**, right-click your **DotNetAppSqlDb** project and select **Publish**.
-In the publish page, click **Publish**.
+ ![Publish from Solution Explorer](./media/app-service-web-tutorial-dotnet-sqldatabase/solution-explorer-publish.png)
-> [!IMPORTANT]
-> Ensure that your app service name doesn't match with any existing [App Registrations](../active-directory/manage-apps/add-application-portal.md). This will lead to Principal ID conflicts.
+1. In the publish page, click **Publish**.
+
+ > [!IMPORTANT]
+   > Ensure that your app service name doesn't match any existing [App Registrations](../active-directory/manage-apps/add-application-portal.md), as a matching name leads to Principal ID conflicts.
+
+# [ASP.NET Core](#tab/dotnetcore)
**If you came from [Tutorial: Build an ASP.NET Core and SQL Database app in Azure App Service](tutorial-dotnetcore-sqldb-app.md)**, publish your changes using Git, with the following commands:
git commit -am "configure managed identity"
git push azure main
```
+--
When the new webpage shows your to-do list, your app is connecting to the database using the managed identity.

![Azure app after Code First Migration](./media/app-service-web-tutorial-dotnet-sqldatabase/this-one-is-done.png)
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-web-tutorial-rest-api.md
In this step, you set up the local ASP.NET Core project. App Service supports th
### Clone the sample application
-In the terminal window, `cd` to a working directory.
+1. In the terminal window, `cd` to a working directory.
-Run the following command to clone the sample repository.
+1. Clone the sample repository and change to the repository root.
-```bash
-git clone https://github.com/Azure-Samples/dotnet-core-api
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/dotnet-core-api
+ cd dotnet-core-api
+ ```
+
+ This repository contains an app that's created based on the following tutorial: [ASP.NET Core Web API help pages using Swagger](/aspnet/core/tutorials/web-api-help-pages-using-swagger?tabs=visual-studio). It uses a Swagger generator to serve the [Swagger UI](https://swagger.io/swagger-ui/) and the Swagger JSON endpoint.
+
+1. Make sure the default branch is `main`.
-This repository contains an app that's created based on the following tutorial: [ASP.NET Core Web API help pages using Swagger](/aspnet/core/tutorials/web-api-help-pages-using-swagger?tabs=visual-studio). It uses a Swagger generator to serve the [Swagger UI](https://swagger.io/swagger-ui/) and the Swagger JSON endpoint.
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)), this tutorial also shows you how to deploy a repository from `main`.
### Run the application
-Run the following commands to install the required packages, run database migrations, and start the application.
+1. Run the following commands to install the required packages, run database migrations, and start the application.
-```bash
-cd dotnet-core-api
-dotnet restore
-dotnet run
-```
+ ```bash
+ dotnet restore
+ dotnet run
+ ```
-Navigate to `http://localhost:5000/swagger` in a browser to play with the Swagger UI.
+1. Navigate to `http://localhost:5000/swagger` in a browser to play with the Swagger UI.
-![ASP.NET Core API running locally](./media/app-service-web-tutorial-rest-api/azure-app-service-local-swagger-ui.png)
+ ![ASP.NET Core API running locally](./media/app-service-web-tutorial-rest-api/azure-app-service-local-swagger-ui.png)
-Navigate to `http://localhost:5000/api/todo` and see a list of ToDo JSON items.
+1. Navigate to `http://localhost:5000/api/todo` and see a list of ToDo JSON items.
-Navigate to `http://localhost:5000` and play with the browser app. Later, you will point the browser app to a remote API in App Service to test CORS functionality. Code for the browser app is found in the repository's _wwwroot_ directory.
+1. Navigate to `http://localhost:5000` and play with the browser app. Later, you will point the browser app to a remote API in App Service to test CORS functionality. Code for the browser app is found in the repository's _wwwroot_ directory.
-To stop ASP.NET Core at any time, press `Ctrl+C` in the terminal.
+1. To stop ASP.NET Core at any time, press `Ctrl+C` in the terminal.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
In this step, you deploy your SQL Database-connected .NET Core application to Ap
[!INCLUDE [app-service-plan-no-h](../../includes/app-service-web-git-push-to-azure-no-h.md)]
-<pre>
-Enumerating objects: 83, done.
-Counting objects: 100% (83/83), done.
-Delta compression using up to 8 threads
-Compressing objects: 100% (78/78), done.
-Writing objects: 100% (83/83), 22.15 KiB | 3.69 MiB/s, done.
-Total 83 (delta 26), reused 0 (delta 0)
-remote: Updating branch 'master'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id '509236e13d'.
-remote: Generating deployment script.
-remote: Project file path: .\TodoApi.csproj
-remote: Generating deployment script for ASP.NET MSBuild16 App
-remote: Generated deployment script files
-remote: Running deployment command...
-remote: Handling ASP.NET Core Web Application deployment with MSBuild16.
-remote: .
-remote: .
-remote: .
-remote: Finished successfully.
-remote: Running post deployment command(s)...
-remote: Triggering recycle (preview mode disabled).
-remote: Deployment successful.
-To https://&lt;app_name&gt;.scm.azurewebsites.net/&lt;app_name&gt;.git
- * [new branch] master -> master
-</pre>
+ <pre>
+ Enumerating objects: 83, done.
+ Counting objects: 100% (83/83), done.
+ Delta compression using up to 8 threads
+ Compressing objects: 100% (78/78), done.
+ Writing objects: 100% (83/83), 22.15 KiB | 3.69 MiB/s, done.
+ Total 83 (delta 26), reused 0 (delta 0)
+ remote: Updating branch 'master'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id '509236e13d'.
+ remote: Generating deployment script.
+ remote: Project file path: .\TodoApi.csproj
+ remote: Generating deployment script for ASP.NET MSBuild16 App
+ remote: Generated deployment script files
+ remote: Running deployment command...
+ remote: Handling ASP.NET Core Web Application deployment with MSBuild16.
+ remote: .
+ remote: .
+ remote: .
+ remote: Finished successfully.
+ remote: Running post deployment command(s)...
+ remote: Triggering recycle (preview mode disabled).
+ remote: Deployment successful.
+ To https://&lt;app_name&gt;.scm.azurewebsites.net/&lt;app_name&gt;.git
+ * [new branch] master -> master
+ </pre>
### Browse to the Azure app
-Navigate to `http://<app_name>.azurewebsites.net/swagger` in a browser and play with the Swagger UI.
+1. Navigate to `http://<app_name>.azurewebsites.net/swagger` in a browser and play with the Swagger UI.
-![ASP.NET Core API running in Azure App Service](./media/app-service-web-tutorial-rest-api/azure-app-service-browse-app.png)
+ ![ASP.NET Core API running in Azure App Service](./media/app-service-web-tutorial-rest-api/azure-app-service-browse-app.png)
-Navigate to `http://<app_name>.azurewebsites.net/swagger/v1/swagger.json` to see the _swagger.json_ for your deployed API.
+1. Navigate to `http://<app_name>.azurewebsites.net/swagger/v1/swagger.json` to see the _swagger.json_ for your deployed API.
-Navigate to `http://<app_name>.azurewebsites.net/api/todo` to see your deployed API working.
+1. Navigate to `http://<app_name>.azurewebsites.net/api/todo` to see your deployed API working.
## Add CORS functionality
Next, you enable the built-in CORS support in App Service for your API.
### Test CORS in sample app
-In your local repository, open _wwwroot/https://docsupdatetracker.net/index.html_.
+1. In your local repository, open _wwwroot/https://docsupdatetracker.net/index.html_.
-In Line 51, set the `apiEndpoint` variable to the URL of your deployed API (`http://<app_name>.azurewebsites.net`). Replace _\<appname>_ with your app name in App Service.
+1. In Line 51, set the `apiEndpoint` variable to the URL of your deployed API (`http://<app_name>.azurewebsites.net`). Replace _\<appname>_ with your app name in App Service.
-In your local terminal window, run the sample app again.
+1. In your local terminal window, run the sample app again.
-```bash
-dotnet run
-```
+ ```bash
+ dotnet run
+ ```
-Navigate to the browser app at `http://localhost:5000`. Open the developer tools window in your browser (`Ctrl`+`Shift`+`i` in Chrome for Windows) and inspect the **Console** tab. You should now see the error message, `No 'Access-Control-Allow-Origin' header is present on the requested resource`.
+1. Navigate to the browser app at `http://localhost:5000`. Open the developer tools window in your browser (`Ctrl`+`Shift`+`i` in Chrome for Windows) and inspect the **Console** tab. You should now see the error message, `No 'Access-Control-Allow-Origin' header is present on the requested resource`.
-![CORS error in browser client](./media/app-service-web-tutorial-rest-api/azure-app-service-cors-error.png)
+ ![CORS error in browser client](./media/app-service-web-tutorial-rest-api/azure-app-service-cors-error.png)
-Because of the domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`), and the fact that your API in App Service is not sending the `Access-Control-Allow-Origin` header, your browser has prevented cross-domain content from loading in your browser app.
+ Because of the domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`), and the fact that your API in App Service is not sending the `Access-Control-Allow-Origin` header, your browser has prevented cross-domain content from loading in your browser app.
-In production, your browser app would have a public URL instead of the localhost URL, but the way to enable CORS to a localhost URL is the same as a public URL.
+ In production, your browser app would have a public URL instead of the localhost URL, but the way to enable CORS to a localhost URL is the same as a public URL.
### Enable CORS
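For reference, enabling CORS for the localhost origin from the Cloud Shell uses `az webapp cors add`, roughly as follows (a sketch that assumes the resource group and app name used earlier in this tutorial):

```azurecli-interactive
az webapp cors add --resource-group myResourceGroup --name <app-name> --allowed-origins 'http://localhost:5000'
```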
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-local-git.md
Set-AzResource -PropertyObject $PropertiesObject -ResourceGroupName <group-name>
> [!NOTE]
> If you [created a Git-enabled app in PowerShell using New-AzWebApp](#create-a-git-enabled-app), the remote is already created for you.
-1. Push to the Azure remote with `git push azure master`.
+1. Push to the Azure remote with `git push azure master` (see [Change deployment branch](#change-deployment-branch)).
1. In the **Git Credential Manager** window, enter your [user-scope or application-scope credentials](#configure-a-deployment-user), not your Azure sign-in credentials.
Set-AzResource -PropertyObject $PropertiesObject -ResourceGroupName <group-name>
1. Browse to your app in the Azure portal to verify that the content is deployed.
+## Change deployment branch
+
+When you push commits to your App Service repository, App Service deploys the files in the `master` branch by default. Because many Git repositories are moving away from `master` to `main`, you need to make sure that you push to the right branch in the App Service repository in one of two ways:
+
+- Deploy to `master` explicitly with a command like:
+
+ ```bash
+ git push azure main:master
+ ```
+
+- Change the deployment branch by setting the `DEPLOYMENT_BRANCH` app setting, then push commits to the custom branch. To do it with Azure CLI:
+
+ ```azurecli-interactive
+ az webapp config appsettings set --name <app-name> --resource-group <group-name> --settings DEPLOYMENT_BRANCH='main'
+ git push azure main
+ ```
## Troubleshoot deployment

You may see the following common error messages when you use Git to publish to an App Service app in Azure:
You may see the following common error messages when you use Git to publish to a
|`Unable to access '[siteURL]': Failed to connect to [scmAddress]`|The app isn't up and running.|Start the app in the Azure portal. Git deployment isn't available when the web app is stopped.|
|`Couldn't resolve host 'hostname'`|The address information for the 'azure' remote is incorrect.|Use the `git remote -v` command to list all remotes, along with the associated URL. Verify that the URL for the 'azure' remote is correct. If needed, remove and recreate this remote using the correct URL.|
|`No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'main'.`|You didn't specify a branch during `git push`, or you haven't set the `push.default` value in `.gitconfig`.|Run `git push` again, specifying the main branch: `git push azure main`.|
-|`Error - Changes committed to remote repository but deployment to website failed.`|You pushed a local branch that doesn't match the app deployment branch on 'azure'.|Verify that current branch is `master`. To change the default branch, use `DEPLOYMENT_BRANCH` application setting.|
+|`Error - Changes committed to remote repository but deployment to website failed.`|You pushed a local branch that doesn't match the app deployment branch on 'azure'.|Verify that current branch is `master`. To change the default branch, use `DEPLOYMENT_BRANCH` application setting (see [Change deployment branch](#change-deployment-branch)). |
|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the 'azure' remote.|Run `git push` again, specifying the main branch: `git push azure main`.|
|`RPC failed; result=22, HTTP code = 5xx.`|This error can happen if you try to push a large git repository over HTTPS.|Change the git configuration on the local machine to make the `postBuffer` bigger. For example: `git config --global http.postBuffer 524288000`.|
|`Error - Changes committed to remote repository but your web app not updated.`|You deployed a Node.js app with a _package.json_ file that specifies additional required modules.|Review the `npm ERR!` error messages before this error for more context on the failure. The following are the known causes of this error, and the corresponding `npm ERR!` messages:<br /><br />**Malformed package.json file**: `npm ERR! Couldn't read dependencies.`<br /><br />**Native module doesn't have a binary distribution for Windows**:<br />`npm ERR! \cmd "/c" "node-gyp rebuild"\ failed with 1` <br />or <br />`npm ERR! [modulename@version] preinstall: \make || gmake\ `|
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-php.md
To complete this quickstart:
## Download the sample locally
-In a terminal window, run the following commands. This will clone the sample application to your local machine, and navigate to the directory containing the sample code.
-
-```bash
-git clone https://github.com/Azure-Samples/php-docs-hello-world
-cd php-docs-hello-world
-```
-
+1. In a terminal window, run the following commands to clone the sample application to your local machine and navigate to the directory that contains the sample code.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/php-docs-hello-world
+ cd php-docs-hello-world
+ ```
+
+1. Make sure the default branch is `main`.
+
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this quickstart also shows you how to deploy a repository from `main`.
+
## Run the app locally
-Run the application locally so that you see how it should look when you deploy it to Azure. Open a terminal window and use the `php` command to launch the built-in PHP web server.
-
-```bash
-php -S localhost:8080
-```
-
-Open a web browser, and navigate to the sample app at `http://localhost:8080`.
+1. Run the application locally so that you see how it should look when you deploy it to Azure. Open a terminal window and use the `php` command to launch the built-in PHP web server.
-You see the **Hello World!** message from the sample app displayed in the page.
+ ```bash
+ php -S localhost:8080
+ ```
+
+1. Open a web browser, and navigate to the sample app at `http://localhost:8080`.
-![Sample app running locally](media/quickstart-php/localhost-hello-world-in-browser.png)
-
-In your terminal window, press **Ctrl+C** to exit the web server.
+ You see the **Hello World!** message from the sample app displayed in the page.
+
+ ![Sample app running locally](media/quickstart-php/localhost-hello-world-in-browser.png)
+
+1. In your terminal window, press **Ctrl+C** to exit the web server.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
In your terminal window, press **Ctrl+C** to exit the web server.
[!INCLUDE [Create resource group](../../includes/app-service-web-create-resource-group-linux.md)]

::: zone-end

## Create a web app
-In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
+1. In the Cloud Shell, create a web app in the `myAppServicePlan` App Service plan with the [`az webapp create`](/cli/azure/webapp#az_webapp_create) command.
-In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
+ In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.4`. To see all supported runtimes, run [`az webapp list-runtimes`](/cli/azure/webapp#az_webapp_list_runtimes).
-```azurecli-interactive
-# Bash
-az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.4" --deployment-local-git
-# PowerShell
-az --% webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.4" --deployment-local-git
-```
+ ```azurecli-interactive
+ az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'PHP|7.4' --deployment-local-git
+ ```
+
+ When the web app has been created, the Azure CLI shows output similar to the following example:
-> [!NOTE]
-> The stop-parsing symbol `(--%)`, introduced in PowerShell 3.0, directs PowerShell to refrain from interpreting input as PowerShell commands or expressions.
->
+ <pre>
+ Local git is configured with url of 'https://&lt;username&gt;@&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git'
+ {
+ "availabilityState": "Normal",
+ "clientAffinityEnabled": true,
+ "clientCertEnabled": false,
+ "cloningInfo": null,
+ "containerSize": 0,
+ "dailyMemoryTimeQuota": 0,
+ "defaultHostName": "&lt;app-name&gt;.azurewebsites.net",
+ "enabled": true,
+ &lt; JSON data removed for brevity. &gt;
+ }
+ </pre>
+
+ You've created an empty new web app, with git deployment enabled.
-When the web app has been created, the Azure CLI shows output similar to the following example:
+ > [!NOTE]
+ > The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
+ >
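To illustrate how the saved URL is used, the deployment step later adds it as a Git remote and pushes the `main` branch, roughly like this (a sketch; the exact steps are part of the push-to-Azure include that follows):

```bash
# Use the deploymentLocalGitUrl value saved from the previous step.
git remote add azure https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git
git push azure main
```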
-<pre>
-Local git is configured with url of 'https://&lt;username&gt;@&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git'
-{
- "availabilityState": "Normal",
- "clientAffinityEnabled": true,
- "clientCertEnabled": false,
- "cloningInfo": null,
- "containerSize": 0,
- "dailyMemoryTimeQuota": 0,
- "defaultHostName": "&lt;app-name&gt;.azurewebsites.net",
- "enabled": true,
- &lt; JSON data removed for brevity. &gt;
-}
-</pre>
+1. Browse to your newly created web app. Replace _&lt;app-name>_ with your unique app name created in the prior step.
-You've created an empty new web app, with git deployment enabled.
+ ```bash
+ http://<app-name>.azurewebsites.net
+ ```
-> [!NOTE]
-> The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
->
+ Here is what your new web app should look like:
-Browse to your newly created web app. Replace _&lt;app-name>_ with your unique app name created in the prior step.
-
-```bash
-http://<app-name>.azurewebsites.net
-```
-
-Here is what your new web app should look like:
-
-![Empty web app page](media/quickstart-php/app-service-web-service-created.png)
+ ![Empty web app page](media/quickstart-php/app-service-web-service-created.png)
[!INCLUDE [Push to Azure](../../includes/app-service-web-git-push-to-azure.md)]
-<pre>
-Counting objects: 2, done.
-Delta compression using up to 4 threads.
-Compressing objects: 100% (2/2), done.
-Writing objects: 100% (2/2), 352 bytes | 0 bytes/s, done.
-Total 2 (delta 1), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id '25f18051e9'.
-remote: Generating deployment script.
-remote: Running deployment command...
-remote: Handling Basic Web Site deployment.
-remote: Kudu sync from: '/home/site/repository' to: '/home/site/wwwroot'
-remote: Copying file: '.gitignore'
-remote: Copying file: 'LICENSE'
-remote: Copying file: 'README.md'
-remote: Copying file: 'index.php'
-remote: Ignoring: .git
-remote: Finished successfully.
-remote: Running post deployment command(s)...
-remote: Deployment successful.
-To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- cc39b1e..25f1805 main -> main
-</pre>
+ <pre>
+ Counting objects: 2, done.
+ Delta compression using up to 4 threads.
+ Compressing objects: 100% (2/2), done.
+ Writing objects: 100% (2/2), 352 bytes | 0 bytes/s, done.
+ Total 2 (delta 1), reused 0 (delta 0)
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id '25f18051e9'.
+ remote: Generating deployment script.
+ remote: Running deployment command...
+ remote: Handling Basic Web Site deployment.
+ remote: Kudu sync from: '/home/site/repository' to: '/home/site/wwwroot'
+ remote: Copying file: '.gitignore'
+ remote: Copying file: 'LICENSE'
+ remote: Copying file: 'README.md'
+ remote: Copying file: 'index.php'
+ remote: Ignoring: .git
+ remote: Finished successfully.
+ remote: Running post deployment command(s)...
+ remote: Deployment successful.
+ To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
+ cc39b1e..25f1805 main -> main
+ </pre>
## Browse to the app
The PHP sample code is running in an Azure App Service web app.
## Update locally and redeploy the code
-Using a local text editor, open the `index.php` file within the PHP app, and make a small change to the text within the string next to `echo`:
+1. Using a local text editor, open the `index.php` file within the PHP app, and make a small change to the text within the string next to `echo`:
-```php
-echo "Hello Azure!";
-```
+ ```php
+ echo "Hello Azure!";
+ ```
-In the local terminal window, commit your changes in Git, and then push the code changes to Azure.
+1. In the local terminal window, commit your changes in Git, and then push the code changes to Azure.
-```bash
-git commit -am "updated output"
-git push azure main
-```
+ ```bash
+ git commit -am "updated output"
+ git push azure main
+ ```
-Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
+1. Once deployment has completed, return to the browser window that opened during the **Browse to the app** step, and refresh the page.
-![Updated sample app running in Azure](media/quickstart-php/hello-azure-in-browser.png)
+ ![Updated sample app running in Azure](media/quickstart-php/hello-azure-in-browser.png)
## Manage your new Azure app
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-ruby.md
## Download the sample
-In a terminal window, run the following command to clone the sample app repository to your local machine:
+1. In a terminal window, clone the sample application to your local machine, and navigate to the directory containing the sample code.
-```bash
-git clone https://github.com/Azure-Samples/ruby-docs-hello-world
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/ruby-docs-hello-world
+ cd ruby-docs-hello-world
+ ```
-## Run the application locally
+1. Make sure the default branch is `main`.
-Run the application locally so that you see how it should look when you deploy it to Azure. Open a terminal window, change to the `hello-world` directory, and use the `rails server` command to start the server.
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
-The first step is to install the required gems. There's a `Gemfile` included in the sample, so just run the following command:
+## Run the application locally
-```bash
-bundle install
-```
+1. Install the required gems. There's a `Gemfile` included in the sample, so just run the following command:
-Once the gems are installed, we'll use bundler to start the app:
+ ```bash
+ bundle install
+ ```
-```bash
-bundle exec rails server
-```
+1. Once the gems are installed, start the app:
-Using your web browser, navigate to `http://localhost:3000` to test the app locally.
+ ```bash
+ bundle exec rails server
+ ```
-![Hello World configured](./media/quickstart-ruby/hello-world-updated.png)
+1. Using your web browser, navigate to `http://localhost:3000` to test the app locally.
+
+ ![Hello World configured](./media/quickstart-ruby/hello-world-updated.png)
[!INCLUDE [Try Cloud Shell](../../includes/cloud-shell-try-it.md)]
Using your web browser, navigate to `http://localhost:3000` to test the app loca
## Create a web app
+1. Create a [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan.
-Browse to the app to see your newly created web app with built-in image. Replace _&lt;app name>_ with your web app name.
+ In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6.2`. To see all supported runtimes, run [`az webapp list-runtimes --linux`](/cli/azure/webapp).
-```bash
-http://<app_name>.azurewebsites.net
-```
+ ```azurecli-interactive
+ az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'RUBY|2.6.2' --deployment-local-git
+ ```
-Here is what your new web app should look like:
+ When the web app has been created, the Azure CLI shows output similar to the following example:
-![Splash page](./media/quickstart-ruby/splash-page.png)
+ <pre>
+ Local git is configured with url of 'https://&lt;username&gt;@&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git'
+ {
+ "availabilityState": "Normal",
+ "clientAffinityEnabled": true,
+ "clientCertEnabled": false,
+ "cloningInfo": null,
+ "containerSize": 0,
+ "dailyMemoryTimeQuota": 0,
+ "defaultHostName": "&lt;app-name&gt;.azurewebsites.net",
+ "deploymentLocalGitUrl": "https://&lt;username&gt;@&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git",
+ "enabled": true,
+ &lt; JSON data removed for brevity. &gt;
+ }
+ </pre>
+
+ You've created an empty new web app, with git deployment enabled.
-## Deploy your application
+ > [!NOTE]
+ > The URL of the Git remote is shown in the `deploymentLocalGitUrl` property, with the format `https://<username>@<app-name>.scm.azurewebsites.net/<app-name>.git`. Save this URL as you need it later.
+ >
-Run the following commands to deploy the local application to your Azure web app:
+1. Browse to the app to see your newly created web app with the built-in image. Replace _&lt;app-name>_ with your web app name.
-```bash
-git remote add azure <Git deployment URL from above>
-git push azure main
-```
+ ```bash
+ http://<app_name>.azurewebsites.net
+ ```
-Confirm that the remote deployment operations report success. The commands produce output similar to the following text:
+ Here is what your new web app should look like:
-```bash
-remote: Using turbolinks 5.2.0
-remote: Using uglifier 4.1.20
-remote: Using web-console 3.7.0
-remote: Bundle complete! 18 Gemfile dependencies, 78 gems now installed.
-remote: Bundled gems are installed into `/tmp/bundle`
-remote: Zipping up bundle contents
-remote: .......
-remote: ~/site/repository
-remote: Finished successfully.
-remote: Running post deployment command(s)...
-remote: Deployment successful.
-remote: App container will begin restart within 10 seconds.
-To https://<app-name>.scm.azurewebsites.net/<app-name>.git
- a6e73a2..ae34be9 main -> main
-```
+ ![Splash page](./media/quickstart-ruby/splash-page.png)
+
+## Deploy your application
++
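A minimal sketch of the commands for this step (replace the placeholder with the `deploymentLocalGitUrl` value you saved earlier); the output should resemble the following:

```bash
# The placeholder is the Git deployment URL from the web app creation step above.
git remote add azure <deploymentLocalGitUrl-from-above>
git push azure main
```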
+ <pre>
+ remote: Using turbolinks 5.2.0
+ remote: Using uglifier 4.1.20
+ remote: Using web-console 3.7.0
+ remote: Bundle complete! 18 Gemfile dependencies, 78 gems now installed.
+ remote: Bundled gems are installed into `/tmp/bundle`
+ remote: Zipping up bundle contents
+ remote: .......
+ remote: ~/site/repository
+ remote: Finished successfully.
+ remote: Running post deployment command(s)...
+ remote: Deployment successful.
+ remote: App container will begin restart within 10 seconds.
+ To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
+ a6e73a2..ae34be9 main -> main
+ </pre>
+
+## Browse to the app
Once the deployment has completed, wait about 10 seconds for the web app to restart, and then navigate to the web app and verify the results.
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-auth-aad.md
In this step, you set up the local .NET Core project. You use the same project t
### Clone and run the sample application
-Run the following commands to clone the sample repository and run it.
+1. Run the following commands to clone the sample repository and run it.
-```bash
-git clone https://github.com/Azure-Samples/dotnet-core-api
-cd dotnet-core-api
-dotnet run
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/dotnet-core-api
+ cd dotnet-core-api
+ dotnet run
+ ```
+
+1. Navigate to `http://localhost:5000` and try adding, editing, and removing todo items.
+
+ ![ASP.NET Core API running locally](./media/tutorial-auth-aad/local-run.png)
-Navigate to `http://localhost:5000` and try adding, editing, and removing todo items.
+1. To stop ASP.NET Core, press `Ctrl+C` in the terminal.
-![ASP.NET Core API running locally](./media/tutorial-auth-aad/local-run.png)
+1. Make sure the default branch is `main`.
-To stop ASP.NET Core at any time, press `Ctrl+C` in the terminal.
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
## Deploy apps to Azure
az webapp create --resource-group myAuthResourceGroup --plan myAuthAppServicePla
### Push to Azure from Git
-Back in the _local terminal window_, run the following Git commands to deploy to the back-end app. Replace _\<deploymentLocalGitUrl-of-back-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources). When prompted for credentials by Git Credential Manager, make sure that you enter [your deployment credentials](deploy-configure-credentials.md), not the credentials you use to sign in to the Azure portal.
+1. Since you're deploying the `main` branch, you need to set the default deployment branch for your two App Service apps to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/appsettings#az_webapp_config_appsettings_set) command.
-```bash
-git remote add backend <deploymentLocalGitUrl-of-back-end-app>
-git push backend master
-```
+ ```azurecli-interactive
+ az webapp config appsettings set --name <front-end-app-name> --resource-group myAuthResourceGroup --settings DEPLOYMENT_BRANCH='main'
+ az webapp config appsettings set --name <back-end-app-name> --resource-group myAuthResourceGroup --settings DEPLOYMENT_BRANCH='main'
+ ```
-In the local terminal window, run the following Git commands to deploy the same code to the front-end app. Replace _\<deploymentLocalGitUrl-of-front-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources).
+1. Back in the _local terminal window_, run the following Git commands to deploy to the back-end app. Replace _\<deploymentLocalGitUrl-of-back-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources). When prompted for credentials by Git Credential Manager, make sure that you enter [your deployment credentials](deploy-configure-credentials.md), not the credentials you use to sign in to the Azure portal.
-```bash
-git remote add frontend <deploymentLocalGitUrl-of-front-end-app>
-git push frontend master
-```
+ ```bash
+ git remote add backend <deploymentLocalGitUrl-of-back-end-app>
+ git push backend main
+ ```
+
+1. In the local terminal window, run the following Git commands to deploy the same code to the front-end app. Replace _\<deploymentLocalGitUrl-of-front-end-app>_ with the URL of the Git remote that you saved from [Create Azure resources](#create-azure-resources).
+
+ ```bash
+ git remote add frontend <deploymentLocalGitUrl-of-front-end-app>
+ git push frontend main
+ ```
### Browse to the apps
In this step, you point the front-end app's server code to access the back-end A
### Modify front-end code
-In the local repository, open _Controllers/TodoController.cs_. At the beginning of the `TodoController` class, add the following lines and replace _\<back-end-app-name>_ with the name of your back-end app:
+1. In the local repository, open _Controllers/TodoController.cs_. At the beginning of the `TodoController` class, add the following lines and replace _\<back-end-app-name>_ with the name of your back-end app:
-```cs
-private static readonly HttpClient _client = new HttpClient();
-private static readonly string _remoteUrl = "https://<back-end-app-name>.azurewebsites.net";
-```
+ ```cs
+ private static readonly HttpClient _client = new HttpClient();
+ private static readonly string _remoteUrl = "https://<back-end-app-name>.azurewebsites.net";
+ ```
-Find the method that's decorated with `[HttpGet]` and replace the code inside the curly braces with:
+1. Find the method that's decorated with `[HttpGet]` and replace the code inside the curly braces with:
-```cs
-var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo");
-return JsonConvert.DeserializeObject<List<TodoItem>>(data);
-```
+ ```cs
+ var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo");
+ return JsonConvert.DeserializeObject<List<TodoItem>>(data);
+ ```
-The first line makes a `GET /api/Todo` call to the back-end API app.
+ The first line makes a `GET /api/Todo` call to the back-end API app.
-Next, find the method that's decorated with `[HttpGet("{id}")]` and replace the code inside the curly braces with:
+1. Next, find the method that's decorated with `[HttpGet("{id}")]` and replace the code inside the curly braces with:
-```cs
-var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo/{id}");
-return Content(data, "application/json");
-```
+ ```cs
+ var data = await _client.GetStringAsync($"{_remoteUrl}/api/Todo/{id}");
+ return Content(data, "application/json");
+ ```
-The first line makes a `GET /api/Todo/{id}` call to the back-end API app.
+ The first line makes a `GET /api/Todo/{id}` call to the back-end API app.
-Next, find the method that's decorated with `[HttpPost]` and replace the code inside the curly braces with:
+1. Next, find the method that's decorated with `[HttpPost]` and replace the code inside the curly braces with:
-```cs
-var response = await _client.PostAsJsonAsync($"{_remoteUrl}/api/Todo", todoItem);
-var data = await response.Content.ReadAsStringAsync();
-return Content(data, "application/json");
-```
+ ```cs
+ var response = await _client.PostAsJsonAsync($"{_remoteUrl}/api/Todo", todoItem);
+ var data = await response.Content.ReadAsStringAsync();
+ return Content(data, "application/json");
+ ```
-The first line makes a `POST /api/Todo` call to the back-end API app.
+ The first line makes a `POST /api/Todo` call to the back-end API app.
-Next, find the method that's decorated with `[HttpPut("{id}")]` and replace the code inside the curly braces with:
+1. Next, find the method that's decorated with `[HttpPut("{id}")]` and replace the code inside the curly braces with:
-```cs
-var res = await _client.PutAsJsonAsync($"{_remoteUrl}/api/Todo/{id}", todoItem);
-return new NoContentResult();
-```
+ ```cs
+ var res = await _client.PutAsJsonAsync($"{_remoteUrl}/api/Todo/{id}", todoItem);
+ return new NoContentResult();
+ ```
-The first line makes a `PUT /api/Todo/{id}` call to the back-end API app.
+ The first line makes a `PUT /api/Todo/{id}` call to the back-end API app.
-Next, find the method that's decorated with `[HttpDelete("{id}")]` and replace the code inside the curly braces with:
+1. Next, find the method that's decorated with `[HttpDelete("{id}")]` and replace the code inside the curly braces with:
-```cs
-var res = await _client.DeleteAsync($"{_remoteUrl}/api/Todo/{id}");
-return new NoContentResult();
-```
+ ```cs
+ var res = await _client.DeleteAsync($"{_remoteUrl}/api/Todo/{id}");
+ return new NoContentResult();
+ ```
-The first line makes a `DELETE /api/Todo/{id}` call to the back-end API app.
+ The first line makes a `DELETE /api/Todo/{id}` call to the back-end API app.
-Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
-```bash
-git add .
-git commit -m "call back-end API"
-git push frontend master
-```
+ ```bash
+ git add .
+ git commit -m "call back-end API"
+ git push frontend main
+ ```
### Check your changes
-Navigate to `http://<front-end-app-name>.azurewebsites.net` and add a few items, such as `from front end 1` and `from front end 2`.
+1. Navigate to `http://<front-end-app-name>.azurewebsites.net` and add a few items, such as `from front end 1` and `from front end 2`.
-Navigate to `http://<back-end-app-name>.azurewebsites.net` to see the items added from the front-end app. Also, add a few items, such as `from back end 1` and `from back end 2`, then refresh the front-end app to see if it reflects the changes.
+1. Navigate to `http://<back-end-app-name>.azurewebsites.net` to see the items added from the front-end app. Also, add a few items, such as `from back end 1` and `from back end 2`, then refresh the front-end app to see if it reflects the changes.
+ :::image type="content" source="./media/tutorial-auth-aad/remote-api-call-run.png" alt-text="Screenshot of an Azure App Service Rest API Sample in a browser window, which shows a To do list app with items added from the front-end app.":::
## Configure auth
You use Azure Active Directory as the identity provider. For more information, s
### Enable authentication and authorization for back-end app
-In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page.
+1. In the [Azure portal](https://portal.azure.com) menu, select **Resource groups** or search for and select *Resource groups* from any page.
-In **Resource groups**, find and select your resource group. In **Overview**, select your back-end app's management page.
+1. In **Resource groups**, find and select your resource group. In **Overview**, select your back-end app's management page.
+ :::image type="content" source="./media/tutorial-auth-aad/portal-navigate-back-end.png" alt-text="Screenshot of the Resource groups window, showing the Overview for an example resource group and a back-end app's management page selected.":::
-In your back-end app's left menu, select **Authentication**, and then click **Add identity provider**.
+1. In your back-end app's left menu, select **Authentication**, and then click **Add identity provider**.
-In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
+1. In the **Add an identity provider** page, select **Microsoft** as the **Identity provider** to sign in Microsoft and Azure AD identities.
-For **App registration** > **App registration type**, select **Create new app registration**.
+1. For **App registration** > **App registration type**, select **Create new app registration**.
-For **App registration** > **Supported account types**, select **Current tenant-single tenant**.
+1. For **App registration** > **Supported account types**, select **Current tenant-single tenant**.
-In the **App Service authentication settings** section, leave **Authentication** set to **Require authentication** and **Unauthenticated requests** set to **HTTP 302 Found redirect: recommended for websites**.
+1. In the **App Service authentication settings** section, leave **Authentication** set to **Require authentication** and **Unauthenticated requests** set to **HTTP 302 Found redirect: recommended for websites**.
-At the bottom of the **Add an identity provider** page, click **Add** to enable authentication for your web app.
+1. At the bottom of the **Add an identity provider** page, click **Add** to enable authentication for your web app.
+ :::image type="content" source="./media/tutorial-auth-aad/configure-auth-back-end.png" alt-text="Screenshot of the back-end app's left menu showing Authentication/Authorization selected and settings selected in the right menu.":::
-The **Authentication** page opens. Copy the **Client ID** of the Azure AD application to a notepad. You need this value later.
+1. The **Authentication** page opens. Copy the **Client ID** of the Azure AD application to a notepad. You need this value later.
+ :::image type="content" source="./media/tutorial-auth-aad/get-application-id-back-end.png" alt-text="Screenshot of the Azure Active Directory Settings window showing the Azure AD App, and the Azure AD Applications window showing the Client ID to copy.":::
If you stop here, you have a self-contained app that's already secured by App Service authentication and authorization. The remaining sections show you how to secure a multi-app solution by "flowing" the authenticated user from the front end to the back end.
If you like, navigate to `http://<front-end-app-name>.azurewebsites.net`. It sho
Now that you've enabled authentication and authorization to both of your apps, each of them is backed by an AD application. In this step, you give the front-end app permissions to access the back end on the user's behalf. (Technically, you give the front end's _AD application_ the permissions to access the back end's _AD application_ on the user's behalf.)
-In the [Azure portal](https://portal.azure.com) menu, select **Azure Active Directory** or search for and select *Azure Active Directory* from any page.
+1. In the [Azure portal](https://portal.azure.com) menu, select **Azure Active Directory** or search for and select *Azure Active Directory* from any page.
-Select **App registrations** > **Owned applications** > **View all applications in this directory**. Select your front-end app name, then select **API permissions**.
+1. Select **App registrations** > **Owned applications** > **View all applications in this directory**. Select your front-end app name, then select **API permissions**.
+ :::image type="content" source="./media/tutorial-auth-aad/add-api-access-front-end.png" alt-text="Screenshot of Microsoft - App registrations window with Owned applications, a front-end app name, and API permissions selected.":::
-Select **Add a permission**, then select **APIs my organization uses** > **\<back-end-app-name>**.
+1. Select **Add a permission**, then select **APIs my organization uses** > **\<back-end-app-name>**.
-In the **Request API permissions** page for the back-end app, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
+1. In the **Request API permissions** page for the back-end app, select **Delegated permissions** and **user_impersonation**, then select **Add permissions**.
+ :::image type="content" source="./media/tutorial-auth-aad/select-permission-front-end.png" alt-text="Screenshot of the Request API permissions page showing Delegated permissions, user_impersonation, and the Add permission button selected.":::
### Configure App Service to return a usable access token

The front-end app now has the required permissions to access the back-end app as the signed-in user. In this step, you configure App Service authentication and authorization to give you a usable access token for accessing the back end. For this step, you need the back end's client ID, which you copied from [Enable authentication and authorization for back-end app](#enable-authentication-and-authorization-for-back-end-app).
-Navigate to [Azure Resource Explorer](https://resources.azure.com) and using the resource tree, locate your front-end web app.
+1. Navigate to [Azure Resource Explorer](https://resources.azure.com) and using the resource tree, locate your front-end web app.
-The [Azure Resource Explorer](https://resources.azure.com) is now opened with your front-end app selected in the resource tree. At the top of the page, click **Read/Write** to enable editing of your Azure resources.
+1. The [Azure Resource Explorer](https://resources.azure.com) is now open with your front-end app selected in the resource tree. At the top of the page, click **Read/Write** to enable editing of your Azure resources.
+ :::image type="content" source="./media/tutorial-auth-aad/resources-enable-write.png" alt-text="Screenshot of the Read Only and Read/Write buttons at the top of the Azure Resource Explorer page, with the Read/Write button selected.":::
-In the left browser, drill down to **config** > **authsettings**.
+1. In the left browser, drill down to **config** > **authsettings**.
-In the **authsettings** view, click **Edit**. Set `additionalLoginParams` to the following JSON string, using the client ID you copied.
+1. In the **authsettings** view, click **Edit**. Set `additionalLoginParams` to the following JSON string, using the client ID you copied.
-```json
-"additionalLoginParams": ["response_type=code id_token","resource=<back-end-client-id>"],
-```
+ ```json
+ "additionalLoginParams": ["response_type=code id_token","resource=<back-end-client-id>"],
+ ```
+ :::image type="content" source="./media/tutorial-auth-aad/additional-login-params-front-end.png" alt-text="Screenshot of a code example in the authsettings view showing the additionalLoginParams string with an example of a client ID.":::
-Save your settings by clicking **PUT**.
+1. Save your settings by clicking **PUT**.
-Your apps are now configured. The front end is now ready to access the back end with a proper access token.
+ Your apps are now configured. The front end is now ready to access the back end with a proper access token.
For information on how to configure the access token for other providers, see [Refresh identity provider tokens](configure-authentication-oauth-tokens.md#refresh-auth-tokens).
Your front-end app now has the required permission and also adds the back end's
> [!NOTE]
> These headers are injected for all supported languages. You access them using the standard pattern for each respective language.
-In the local repository, open _Controllers/TodoController.cs_ again. Under the `TodoController(TodoContext context)` constructor, add the following code:
-
-```cs
-public override void OnActionExecuting(ActionExecutingContext context)
-{
- base.OnActionExecuting(context);
+1. In the local repository, open _Controllers/TodoController.cs_ again. Under the `TodoController(TodoContext context)` constructor, add the following code:
- _client.DefaultRequestHeaders.Accept.Clear();
- _client.DefaultRequestHeaders.Authorization =
- new AuthenticationHeaderValue("Bearer", Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"]);
-}
-```
+ ```cs
+ public override void OnActionExecuting(ActionExecutingContext context)
+ {
+ base.OnActionExecuting(context);
+
+ _client.DefaultRequestHeaders.Accept.Clear();
+ _client.DefaultRequestHeaders.Authorization =
+ new AuthenticationHeaderValue("Bearer", Request.Headers["X-MS-TOKEN-AAD-ACCESS-TOKEN"]);
+ }
+ ```
-This code adds the standard HTTP header `Authorization: Bearer <access-token>` to all remote API calls. In the ASP.NET Core MVC request execution pipeline, `OnActionExecuting` executes just before the respective action does, so each of your outgoing API call now presents the access token.
+ This code adds the standard HTTP header `Authorization: Bearer <access-token>` to all remote API calls. In the ASP.NET Core MVC request execution pipeline, `OnActionExecuting` executes just before the respective action does, so each of your outgoing API calls now presents the access token.
-Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
-```bash
-git add .
-git commit -m "add authorization header for server code"
-git push frontend master
-```
+ ```bash
+ git add .
+ git commit -m "add authorization header for server code"
+ git push frontend main
+ ```
-Sign in to `https://<front-end-app-name>.azurewebsites.net` again. At the user data usage agreement page, click **Accept**.
+1. Sign in to `https://<front-end-app-name>.azurewebsites.net` again. At the user data usage agreement page, click **Accept**.
-You should now be able to create, read, update, and delete data from the back-end app as before. The only difference now is that both apps are now secured by App Service authentication and authorization, including the service-to-service calls.
+ You should now be able to create, read, update, and delete data from the back-end app as before. The only difference now is that both apps are now secured by App Service authentication and authorization, including the service-to-service calls.
Congratulations! Your server code is now accessing the back-end data on behalf of the authenticated user.
This step is not related to authentication and authorization. However, you need
### Point Angular.js app to back-end API
-In the local repository, open _wwwroot/index.html_.
+1. In the local repository, open _wwwroot/index.html_.
-In Line 51, set the `apiEndpoint` variable to the HTTPS URL of your back-end app (`https://<back-end-app-name>.azurewebsites.net`). Replace _\<back-end-app-name>_ with your app name in App Service.
+1. In Line 51, set the `apiEndpoint` variable to the HTTPS URL of your back-end app (`https://<back-end-app-name>.azurewebsites.net`). Replace _\<back-end-app-name>_ with your app name in App Service. A short sketch of this change follows this list.
-In the local repository, open _wwwroot/app/scripts/todoListSvc.js_ and see that `apiEndpoint` is prepended to all the API calls. Your Angular.js app is now calling the back-end APIs.
+1. In the local repository, open _wwwroot/app/scripts/todoListSvc.js_ and see that `apiEndpoint` is prepended to all the API calls. Your Angular.js app is now calling the back-end APIs.
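A short, hypothetical sketch of the two changes above. The variable name `apiEndpoint` comes from the step descriptions; the surrounding sample code isn't reproduced here, so treat this as an approximation rather than the sample's exact lines:

```javascript
// index.html (around line 51): point the client at the deployed back-end app.
// Replace <back-end-app-name> with your App Service app name.
var apiEndpoint = "https://<back-end-app-name>.azurewebsites.net";

// todoListSvc.js then prepends apiEndpoint to every API path, for example:
var itemsUrl = apiEndpoint + "/api/Todo";
console.log(itemsUrl); // https://<back-end-app-name>.azurewebsites.net/api/Todo
```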
### Add access token to API calls
-In _wwwroot/app/scripts/todoListSvc.js_, above the list of API calls (above the line `getItems : function(){`), add the following function to the list:
-
-```javascript
-setAuth: function (token) {
- $http.defaults.headers.common['Authorization'] = 'Bearer ' + token;
-},
-```
-
-This function is called to set the default `Authorization` header with the access token. You call it in the next step.
+1. In _wwwroot/app/scripts/todoListSvc.js_, above the list of API calls (above the line `getItems : function(){`), add the following function to the list:
-In the local repository, open _wwwroot/app/scripts/app.js_ and find the following code:
-
-```javascript
-$routeProvider.when("/Home", {
- controller: "todoListCtrl",
- templateUrl: "/App/Views/TodoList.html",
-}).otherwise({ redirectTo: "/Home" });
-```
-
-Replace the entire code block with the following code:
-
-```javascript
-$routeProvider.when("/Home", {
- controller: "todoListCtrl",
- templateUrl: "/App/Views/TodoList.html",
- resolve: {
- token: ['$http', 'todoListSvc', function ($http, todoListSvc) {
- return $http.get('/.auth/me').then(function (response) {
- todoListSvc.setAuth(response.data[0].access_token);
- return response.data[0].access_token;
- });
- }]
+ ```javascript
+ setAuth: function (token) {
+ $http.defaults.headers.common['Authorization'] = 'Bearer ' + token;
},
-}).otherwise({ redirectTo: "/Home" });
-```
-
-The new change adds the `resolve` mapping that calls `/.auth/me` and sets the access token. It makes sure you have the access token before instantiating the `todoListCtrl` controller. That way all API calls by the controller includes the token.
+ ```
+
+ This function is called to set the default `Authorization` header with the access token. You call it in the next step.
+
+1. In the local repository, open _wwwroot/app/scripts/app.js_ and find the following code:
+
+ ```javascript
+ $routeProvider.when("/Home", {
+ controller: "todoListCtrl",
+ templateUrl: "/App/Views/TodoList.html",
+ }).otherwise({ redirectTo: "/Home" });
+ ```
+
+1. Replace the entire code block with the following code:
+
+ ```javascript
+ $routeProvider.when("/Home", {
+ controller: "todoListCtrl",
+ templateUrl: "/App/Views/TodoList.html",
+ resolve: {
+ token: ['$http', 'todoListSvc', function ($http, todoListSvc) {
+ return $http.get('/.auth/me').then(function (response) {
+ todoListSvc.setAuth(response.data[0].access_token);
+ return response.data[0].access_token;
+ });
+ }]
+ },
+ }).otherwise({ redirectTo: "/Home" });
+ ```
+
+ The new change adds the `resolve` mapping that calls `/.auth/me` and sets the access token. It makes sure you have the access token before instantiating the `todoListCtrl` controller. That way all API calls by the controller include the token.
### Deploy updates and test
-Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
+1. Save all your changes. In the local terminal window, deploy your changes to the front-end app with the following Git commands:
-```bash
-git add .
-git commit -m "add authorization header for Angular"
-git push frontend master
-```
+ ```bash
+ git add .
+ git commit -m "add authorization header for Angular"
+ git push frontend main
+ ```
-Navigate to `https://<front-end-app-name>.azurewebsites.net` again. You should now be able to create, read, update, and delete data from the back-end app, directly in the Angular.js app.
+1. Navigate to `https://<front-end-app-name>.azurewebsites.net` again. You should now be able to create, read, update, and delete data from the back-end app, directly in the Angular.js app.
Congratulations! Your client code is now accessing the back-end data on behalf of the authenticated user.
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-custom-container.md
In this section, you make a change to the web app code, rebuild the image, and t
1. Update the version number in the image's tag to v1.0.1: ```bash
- docker tag appsvc-tutorial-custom-image <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
+ docker tag appsvc-tutorial-custom-image <registry-name>.azurecr.io/appsvc-tutorial-custom-image:v1.0.1
``` Replace `<registry-name>` with the name of your registry.
In this section, you make a change to the web app code, rebuild the image, and t
1. Push the image to the registry: ```bash
- docker push <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
+ docker push <registry-name>.azurecr.io/appsvc-tutorial-custom-image:v1.0.1
``` 1. Once the image push is complete, the webhook notifies App Service about the push, and App Service tries to pull in the updated image. Wait a few minutes, and then verify that the update has been deployed by browsing to `https://<app-name>.azurewebsites.net`.
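If the app doesn't pick up the new image, you can optionally confirm that the registry webhook fired by inspecting it with the Azure CLI. This check isn't part of the tutorial's steps, and `<webhook-name>` is a placeholder for whatever name the webhook was created with:

```azurecli-interactive
# List the webhooks configured on the registry, then show recent delivery events for one of them.
az acr webhook list --registry <registry-name> --output table
az acr webhook list-events --name <webhook-name> --registry <registry-name> --output table
```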
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-dotnetcore-sqldb-app.md
In this step, you set up the local .NET Core project.
### Clone the sample application
-In the terminal window, `cd` to a working directory.
+1. In the terminal window, `cd` to a working directory.
-Run the following commands to clone the sample repository and change to its root.
+1. Run the following commands to clone the sample repository and change to its root.
-```bash
-git clone https://github.com/azure-samples/dotnetcore-sqldb-tutorial
-cd dotnetcore-sqldb-tutorial
-```
+ ```bash
+ git clone https://github.com/azure-samples/dotnetcore-sqldb-tutorial
+ cd dotnetcore-sqldb-tutorial
+ ```
+
+ The sample project contains a basic CRUD (create-read-update-delete) app using [Entity Framework Core](/ef/core/).
+
+1. Make sure the default branch is `main`.
-The sample project contains a basic CRUD (create-read-update-delete) app using [Entity Framework Core](/ef/core/).
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)), this tutorial also shows you how to deploy a repository from `main`.
### Run the application
-Run the following commands to install the required packages, run database migrations, and start the application.
+1. Run the following commands to install the required packages, run database migrations, and start the application.
-```bash
-dotnet tool install -g dotnet-ef
-dotnet ef database update
-dotnet run
-```
+ ```bash
+ dotnet tool install -g dotnet-ef
+ dotnet ef database update
+ dotnet run
+ ```
-Navigate to `http://localhost:5000` in a browser. Select the **Create New** link and create a couple _to-do_ items.
+1. Navigate to `http://localhost:5000` in a browser. Select the **Create New** link and create a couple _to-do_ items.
-![connects successfully to SQL Database](./media/tutorial-dotnetcore-sqldb-app/local-app-in-browser.png)
+ ![connects successfully to SQL Database](./media/tutorial-dotnetcore-sqldb-app/local-app-in-browser.png)
-To stop .NET Core at any time, press `Ctrl+C` in the terminal.
+1. To stop .NET Core at any time, press `Ctrl+C` in the terminal.
## Create production SQL Database
When the SQL Database logical server is created, the Azure CLI shows information
### Configure a server firewall rule
-Create an [Azure SQL Database server-level firewall rule](../azure-sql/database/firewall-configure.md) using the [`az sql server firewall create`](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources.
+1. Create an [Azure SQL Database server-level firewall rule](../azure-sql/database/firewall-configure.md) using the [`az sql server firewall-rule create`](/cli/azure/sql/server/firewall-rule#az_sql_server_firewall_rule_create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources.
-```azurecli-interactive
-az sql server firewall-rule create --resource-group myResourceGroup --server <server-name> --name AllowAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
-
-> [!TIP]
-> You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips).
->
+ ```azurecli-interactive
+ az sql server firewall-rule create --resource-group myResourceGroup --server <server-name> --name AllowAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+ ```
+
+ > [!TIP]
+ > You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips).
+ >
-In the Cloud Shell, run the command again to allow access from your local computer by replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
+1. In the Cloud Shell, run the command again to allow access from your local computer by replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
-```azurecli-interactive
-az sql server firewall-rule create --name AllowLocalClient --server <server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
-```
+ ```azurecli-interactive
+ az sql server firewall-rule create --name AllowLocalClient --server <server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
+ ```
### Create a database
dotnet ef database update
### Run app with new configuration
-Now that database migrations is run on the production database, test your app by running:
+1. Now that the database migrations have been run against the production database, test your app by running:
-```
-dotnet run
-```
+ ```bash
+ dotnet run
+ ```
-Navigate to `http://localhost:5000` in a browser. Select the **Create New** link and create a couple _to-do_ items. Your app is now reading and writing data to the production database.
+1. Navigate to `http://localhost:5000` in a browser. Select the **Create New** link and create a couple _to-do_ items. Your app is now reading and writing data to the production database.
-Commit your local changes, then commit it into your Git repository.
+1. Commit your changes to your local Git repository.
-```bash
-git add .
-git commit -m "connect to SQLDB in Azure"
-```
+ ```bash
+ git add .
+ git commit -m "connect to SQLDB in Azure"
+ ```
You're now ready to deploy your code.
To see how the connection string is referenced in your code, see [Configure app
### Push to Azure from Git - [!INCLUDE [push-to-azure-no-h](../../includes/app-service-web-git-push-to-azure-no-h.md)]
-<pre>
-Enumerating objects: 268, done.
-Counting objects: 100% (268/268), done.
-Compressing objects: 100% (171/171), done.
-Writing objects: 100% (268/268), 1.18 MiB | 1.55 MiB/s, done.
-Total 268 (delta 95), reused 251 (delta 87), pack-reused 0
-remote: Resolving deltas: 100% (95/95), done.
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id '64821c3558'.
-remote: Generating deployment script.
-remote: Project file path: .\DotNetCoreSqlDb.csproj
-remote: Generating deployment script for ASP.NET MSBuild16 App
-remote: Generated deployment script files
-remote: Running deployment command...
-remote: Handling ASP.NET Core Web Application deployment with MSBuild16.
-remote: .
-remote: .
-remote: .
-remote: Finished successfully.
-remote: Running post deployment command(s)...
-remote: Triggering recycle (preview mode disabled).
-remote: App container will begin restart within 10 seconds.
-To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch] main -> main
-</pre>
+
+ <pre>
+ Enumerating objects: 268, done.
+ Counting objects: 100% (268/268), done.
+ Compressing objects: 100% (171/171), done.
+ Writing objects: 100% (268/268), 1.18 MiB | 1.55 MiB/s, done.
+ Total 268 (delta 95), reused 251 (delta 87), pack-reused 0
+ remote: Resolving deltas: 100% (95/95), done.
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id '64821c3558'.
+ remote: Generating deployment script.
+ remote: Project file path: .\DotNetCoreSqlDb.csproj
+ remote: Generating deployment script for ASP.NET MSBuild16 App
+ remote: Generated deployment script files
+ remote: Running deployment command...
+ remote: Handling ASP.NET Core Web Application deployment with MSBuild16.
+ remote: .
+ remote: .
+ remote: .
+ remote: Finished successfully.
+ remote: Running post deployment command(s)...
+ remote: Triggering recycle (preview mode disabled).
+ remote: App container will begin restart within 10 seconds.
+ To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
+ * [new branch] main -> main
+ </pre>
::: zone-end ::: zone pivot="platform-linux" -
-<pre>
-Enumerating objects: 273, done.
-Counting objects: 100% (273/273), done.
-Delta compression using up to 4 threads
-Compressing objects: 100% (175/175), done.
-Writing objects: 100% (273/273), 1.19 MiB | 1.85 MiB/s, done.
-Total 273 (delta 96), reused 259 (delta 88)
-remote: Resolving deltas: 100% (96/96), done.
-remote: Deploy Async
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'cccecf86c5'.
-remote: Repository path is /home/site/repository
-remote: Running oryx build...
-remote: Build orchestrated by Microsoft Oryx, https://github.com/Microsoft/Oryx
-remote: You can report issues at https://github.com/Microsoft/Oryx/issues
-remote: .
-remote: .
-remote: .
-remote: Done.
-remote: Running post deployment command(s)...
-remote: Triggering recycle (preview mode disabled).
-remote: Deployment successful.
-remote: Deployment Logs : 'https://&lt;app-name&gt;.scm.azurewebsites.net/newui/jsonviewer?view_url=/api/deployments/cccecf86c56493ffa594e76ea1deb3abb3702d89/log'
-To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
- * [new branch] main -> main
-</pre>
+ <pre>
+ Enumerating objects: 273, done.
+ Counting objects: 100% (273/273), done.
+ Delta compression using up to 4 threads
+ Compressing objects: 100% (175/175), done.
+ Writing objects: 100% (273/273), 1.19 MiB | 1.85 MiB/s, done.
+ Total 273 (delta 96), reused 259 (delta 88)
+ remote: Resolving deltas: 100% (96/96), done.
+ remote: Deploy Async
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id 'cccecf86c5'.
+ remote: Repository path is /home/site/repository
+ remote: Running oryx build...
+ remote: Build orchestrated by Microsoft Oryx, https://github.com/Microsoft/Oryx
+ remote: You can report issues at https://github.com/Microsoft/Oryx/issues
+ remote: .
+ remote: .
+ remote: .
+ remote: Done.
+ remote: Running post deployment command(s)...
+ remote: Triggering recycle (preview mode disabled).
+ remote: Deployment successful.
+ remote: Deployment Logs : 'https://&lt;app-name&gt;.scm.azurewebsites.net/newui/jsonviewer?view_url=/api/deployments/cccecf86c56493ffa594e76ea1deb3abb3702d89/log'
+ To https://&lt;app-name&gt;.scm.azurewebsites.net/&lt;app-name&gt;.git
+ * [new branch] main -> main
+ </pre>
::: zone-end

### Browse to the Azure app
-Browse to the deployed app using your web browser.
+1. Browse to the deployed app using your web browser.
-```bash
-http://<app-name>.azurewebsites.net
-```
+ ```bash
+ http://<app-name>.azurewebsites.net
+ ```
-Add a few to-do items.
+1. Add a few to-do items.
-![app running in App Service](./media/tutorial-dotnetcore-sqldb-app/azure-app-in-browser.png)
+ ![app running in App Service](./media/tutorial-dotnetcore-sqldb-app/azure-app-in-browser.png)
**Congratulations!** You're running a data-driven .NET Core app in App Service.
dotnet ef database update
Make some changes in your code to use the `Done` property. For simplicity in this tutorial, you're only going to change the `Index` and `Create` views to see the property in action.
-Open _Controllers/TodosController.cs_.
+1. Open _Controllers/TodosController.cs_.
-Find the `Create([Bind("ID,Description,CreatedDate")] Todo todo)` method and add `Done` to the list of properties in the `Bind` attribute. When you're done, your `Create()` method signature looks like the following code:
+1. Find the `Create([Bind("ID,Description,CreatedDate")] Todo todo)` method and add `Done` to the list of properties in the `Bind` attribute. When you're done, your `Create()` method signature looks like the following code:
-```csharp
-public async Task<IActionResult> Create([Bind("ID,Description,CreatedDate,Done")] Todo todo)
-```
+ ```csharp
+ public async Task<IActionResult> Create([Bind("ID,Description,CreatedDate,Done")] Todo todo)
+ ```
-Open _Views/Todos/Create.cshtml_.
+1. Open _Views/Todos/Create.cshtml_.
-In the Razor code, you should see a `<div class="form-group">` element for `Description`, and then another `<div class="form-group">` element for `CreatedDate`. Immediately following these two elements, add another `<div class="form-group">` element for `Done`:
+1. In the Razor code, you should see a `<div class="form-group">` element for `Description`, and then another `<div class="form-group">` element for `CreatedDate`. Immediately following these two elements, add another `<div class="form-group">` element for `Done`:
-```csharp
-<div class="form-group">
- <label asp-for="Done" class="col-md-2 control-label"></label>
- <div class="col-md-10">
- <input asp-for="Done" class="form-control" />
- <span asp-validation-for="Done" class="text-danger"></span>
+ ```csharp
+ <div class="form-group">
+ <label asp-for="Done" class="col-md-2 control-label"></label>
+ <div class="col-md-10">
+ <input asp-for="Done" class="form-control" />
+ <span asp-validation-for="Done" class="text-danger"></span>
+ </div>
</div>
-</div>
-```
+ ```
-Open _Views/Todos/Index.cshtml_.
+1. Open _Views/Todos/Index.cshtml_.
-Search for the empty `<th></th>` element. Just above this element, add the following Razor code:
+1. Search for the empty `<th></th>` element. Just above this element, add the following Razor code:
-```csharp
-<th>
- @Html.DisplayNameFor(model => model.Done)
-</th>
-```
+ ```csharp
+ <th>
+ @Html.DisplayNameFor(model => model.Done)
+ </th>
+ ```
-Find the `<td>` element that contains the `asp-action` tag helpers. Just above this element, add the following Razor code:
+1. Find the `<td>` element that contains the `asp-action` tag helpers. Just above this element, add the following Razor code:
-```csharp
-<td>
- @Html.DisplayFor(modelItem => item.Done)
-</td>
-```
+ ```csharp
+ <td>
+ @Html.DisplayFor(modelItem => item.Done)
+ </td>
+ ```
That's all you need to see the changes in the `Index` and `Create` views.

### Test your changes locally
-Run the app locally.
+1. Run the app locally.
-```bash
-dotnet run
-```
+ ```bash
+ dotnet run
+ ```
-> [!NOTE]
-> If you open a new terminal window, you need to set the connection string to the production database in the terminal, like you did in [Run database migrations to the production database](#run-database-migrations-to-the-production-database).
->
+ > [!NOTE]
+ > If you open a new terminal window, you need to set the connection string to the production database in the terminal, like you did in [Run database migrations to the production database](#run-database-migrations-to-the-production-database). A sketch of what that can look like follows this list.
+ >
-In your browser, navigate to `http://localhost:5000/`. You can now add a to-do item and check **Done**. Then it should show up in your homepage as a completed item. Remember that the `Edit` view doesn't show the `Done` field, because you didn't change the `Edit` view.
+1. In your browser, navigate to `http://localhost:5000/`. You can now add a to-do item and check **Done**. Then it should show up on your homepage as a completed item. Remember that the `Edit` view doesn't show the `Done` field, because you didn't change the `Edit` view.
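For reference, a hedged sketch of what setting that connection string in a new Bash terminal might look like. The setting name `ConnectionStrings__MyDbConnection` is an assumption about the sample's configuration key, and the placeholders are not real values; use the server, database, and credentials you created earlier:

```bash
# Assumed configuration key; adjust the name if your app reads a different connection string.
export ConnectionStrings__MyDbConnection="Server=tcp:<server-name>.database.windows.net,1433;Database=<database-name>;User ID=<db-admin>;Password=<password>;Encrypt=true;Connection Timeout=30;"
dotnet run
```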
### Publish changes to Azure
-```bash
-git add .
-git commit -m "added done field"
-git push azure main
-```
+1. Commit your changes to Git and push them to your App Service app.
-Once the `git push` is complete, navigate to your App Service app and try adding a to-do item and check **Done**.
+ ```bash
+ git add .
+ git commit -m "added done field"
+ git push azure main
+ ```
-![Azure app after Code First Migration](./media/tutorial-dotnetcore-sqldb-app/this-one-is-done.png)
+1. Once the `git push` is complete, navigate to your App Service app and try adding a to-do item and check **Done**.
+
+ ![Azure app after Code First Migration](./media/tutorial-dotnetcore-sqldb-app/this-one-is-done.png)
All your existing to-do items are still displayed. When you republish your ASP.NET Core app, existing data in your SQL Database isn't lost. Also, Entity Framework Core Migrations only changes the data schema and leaves your existing data intact.
The sample project already follows the guidance at [ASP.NET Core Logging in Azur
- Includes a reference to `Microsoft.Extensions.Logging.AzureAppServices` in *DotNetCoreSqlDb.csproj*.
- Calls `loggerFactory.AddAzureWebAppDiagnostics()` in *Program.cs*.
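For context, one common way to wire this up in a generic ASP.NET Core _Program.cs_ is sketched below. The sample's actual file may use the older `ILoggerFactory` pattern instead, and `Startup` refers to the app's existing startup class:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Route ASP.NET Core logs to App Service diagnostics (log stream and filesystem logs).
            .ConfigureLogging(logging => logging.AddAzureWebAppDiagnostics())
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}
```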
-To set the ASP.NET Core [log level](/aspnet/core/fundamentals/logging#log-level) in App Service to `Information` from the default level `Error`, use the [`az webapp log config`](/cli/azure/webapp/log#az_webapp_log_config) command in the Cloud Shell.
+1. To set the ASP.NET Core [log level](/aspnet/core/fundamentals/logging#log-level) in App Service to `Information` from the default level `Error`, use the [`az webapp log config`](/cli/azure/webapp/log#az_webapp_log_config) command in the Cloud Shell.
-```azurecli-interactive
-az webapp log config --name <app-name> --resource-group myResourceGroup --application-logging filesystem --level information
-```
+ ```azurecli-interactive
+ az webapp log config --name <app-name> --resource-group myResourceGroup --application-logging filesystem --level information
+ ```
-> [!NOTE]
-> The project's log level is already set to `Information` in *appsettings.json*.
+ > [!NOTE]
+ > The project's log level is already set to `Information` in *appsettings.json*.
-To start log streaming, use the [`az webapp log tail`](/cli/azure/webapp/log#az_webapp_log_tail) command in the Cloud Shell.
+1. To start log streaming, use the [`az webapp log tail`](/cli/azure/webapp/log#az_webapp_log_tail) command in the Cloud Shell.
-```azurecli-interactive
-az webapp log tail --name <app-name> --resource-group myResourceGroup
-```
+ ```azurecli-interactive
+ az webapp log tail --name <app-name> --resource-group myResourceGroup
+ ```
-Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds.
+1. Once log streaming has started, refresh the Azure app in the browser to get some web traffic. You can now see console logs piped to the terminal. If you don't see console logs immediately, check again in 30 seconds.
-To stop log streaming at any time, type `Ctrl`+`C`.
+1. To stop log streaming at any time, type `Ctrl`+`C`.
For more information on customizing the ASP.NET Core logs, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging).

## Manage your Azure app
-To see the app you created, in the [Azure portal](https://portal.azure.com), search for and select **App Services**.
+1. To see the app you created, in the [Azure portal](https://portal.azure.com), search for and select **App Services**.
-![Select App Services in Azure portal](./media/tutorial-dotnetcore-sqldb-app/app-services.png)
+ ![Select App Services in Azure portal](./media/tutorial-dotnetcore-sqldb-app/app-services.png)
-On the **App Services** page, select the name of your Azure app.
+1. On the **App Services** page, select the name of your Azure app.
-![Portal navigation to Azure app](./media/tutorial-dotnetcore-sqldb-app/access-portal.png)
+ ![Portal navigation to Azure app](./media/tutorial-dotnetcore-sqldb-app/access-portal.png)
-By default, the portal shows your app's **Overview** page. This page gives you a view of how your app is doing. Here, you can also perform basic management tasks like browse, stop, start, restart, and delete. The tabs on the left side of the page show the different configuration pages you can open.
+ By default, the portal shows your app's **Overview** page. This page gives you a view of how your app is doing. Here, you can also perform basic management tasks like browse, stop, start, restart, and delete. The tabs on the left side of the page show the different configuration pages you can open.
-![App Service page in Azure portal](./media/tutorial-dotnetcore-sqldb-app/web-app-blade.png)
+ ![App Service page in Azure portal](./media/tutorial-dotnetcore-sqldb-app/web-app-blade.png)
[!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)]
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-php-mysql-app.md
If your command runs successfully, then your MySQL server is running. If not, ma
### Create a database locally
-At the `mysql` prompt, create a database.
+1. At the `mysql` prompt, create a database.
-```sql
-CREATE DATABASE sampledb;
-```
+ ```sql
+ CREATE DATABASE sampledb;
+ ```
-Exit your server connection by typing `quit`.
+1. Exit your server connection by typing `quit`.
-```sql
-quit
-```
+ ```sql
+ quit
+ ```
<a name="step2"></a>
In this step, you get a Laravel sample application, configure its database conne
In the terminal window, `cd` to a working directory.
-Run the following command to clone the sample repository.
+1. Clone the sample repository and change to the repository root.
-```bash
-git clone https://github.com/Azure-Samples/laravel-tasks
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/laravel-tasks
+ cd laravel-tasks
+ ```
-`cd` to your cloned directory.
-Install the required packages.
+1. Make sure the default branch is `main`.
-```bash
-cd laravel-tasks
-composer install
-```
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
+
+1. Install the required packages.
+
+ ```bash
+ composer install
+ ```
### Configure MySQL connection
For information on how Laravel uses the _.env_ file, see [Laravel Environment Co
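This excerpt doesn't show the connection settings themselves, but a local Laravel _.env_ for this setup typically carries database values along these lines (illustrative placeholders only, not the sample's actual file):

```txt
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=sampledb
DB_USERNAME=root
DB_PASSWORD=<root-password>
```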
### Run the sample locally
-Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
+1. Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
-```bash
-php artisan migrate
-```
+ ```bash
+ php artisan migrate
+ ```
-Generate a new Laravel application key.
+1. Generate a new Laravel application key.
-```bash
-php artisan key:generate
-```
+ ```bash
+ php artisan key:generate
+ ```
-Run the application.
+1. Run the application.
-```bash
-php artisan serve
-```
+ ```bash
+ php artisan serve
+ ```
-Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
+1. Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
-![PHP connects successfully to MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
+ ![PHP connects successfully to MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, type `Ctrl + C` in the terminal.
## Create MySQL in Azure
When the MySQL server is created, the Azure CLI shows information similar to the
### Configure server firewall
-In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the [`az mysql server firewall-rule create`](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources.
-
-```azurecli-interactive
-az mysql server firewall-rule create --name allAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
-```
+1. In the Cloud Shell, create a firewall rule for your MySQL server to allow client connections by using the [`az mysql server firewall-rule create`](/cli/azure/mysql/server/firewall-rule#az_mysql_server_firewall_rule_create) command. When both starting IP and end IP are set to 0.0.0.0, the firewall is only opened for other Azure resources.
-> [!TIP]
-> You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips).
->
+ ```azurecli-interactive
+ az mysql server firewall-rule create --name allAzureIPs --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+ ```
-In the Cloud Shell, run the command again to allow access from your local computer by replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
+ > [!TIP]
+ > You can be even more restrictive in your firewall rule by [using only the outbound IP addresses your app uses](overview-inbound-outbound-ips.md#find-outbound-ips).
+ >
-```azurecli-interactive
-az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
-```
+1. In the Cloud Shell, run the command again to allow access from your local computer by replacing *\<your-ip-address>* with [your local IPv4 IP address](https://www.whatsmyip.org/).
-### Connect to production MySQL server locally
-
-In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for _&lt;admin-user>_ and _&lt;mysql-server-name>_. When prompted for a password, use the password you specified when you created the database in Azure.
-
-```bash
-mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
-```
+ ```azurecli-interactive
+ az mysql server firewall-rule create --name AllowLocalClient --server <mysql-server-name> --resource-group myResourceGroup --start-ip-address=<your-ip-address> --end-ip-address=<your-ip-address>
+ ```
### Create a production database
-At the `mysql` prompt, create a database.
+1. In the local terminal window, connect to the MySQL server in Azure. Use the value you specified previously for _&lt;admin-user>_ and _&lt;mysql-server-name>_. When prompted for a password, use the password you specified when you created the database in Azure.
-```sql
-CREATE DATABASE sampledb;
-```
+ ```bash
+ mysql -u <admin-user>@<mysql-server-name> -h <mysql-server-name>.mysql.database.azure.com -P 3306 -p
+ ```
-### Create a user with permissions
+1. At the `mysql` prompt, create a database.
-Create a database user called _phpappuser_ and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use _MySQLAzure2017_ as the password.
+ ```sql
+ CREATE DATABASE sampledb;
+ ```
-```sql
-CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2017';
-GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
-```
+1. Create a database user called _phpappuser_ and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use _MySQLAzure2017_ as the password.
-Exit the server connection by typing `quit`.
+ ```sql
+ CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2017';
+ GRANT ALL PRIVILEGES ON sampledb.* TO 'phpappuser';
+ ```
-```sql
-quit
-```
+1. Exit the server connection by typing `quit`.
+
+ ```sql
+ quit
+ ```
## Connect app to Azure MySQL
The certificate `BaltimoreCyberTrustRoot.crt.pem` is provided in the repository
### Test the application locally
-Run Laravel database migrations with _.env.production_ as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+1. Run Laravel database migrations with _.env.production_ as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
-```bash
-php artisan migrate --env=production --force
-```
+ ```bash
+ php artisan migrate --env=production --force
+ ```
-_.env.production_ doesn't have a valid application key yet. Generate a new one for it in the terminal.
+1. _.env.production_ doesn't have a valid application key yet. Generate a new one for it in the terminal.
-```bash
-php artisan key:generate --env=production --force
-```
+ ```bash
+ php artisan key:generate --env=production --force
+ ```
-Run the sample application with _.env.production_ as the environment file.
+1. Run the sample application with _.env.production_ as the environment file.
-```bash
-php artisan serve --env=production
-```
+ ```bash
+ php artisan serve --env=production
+ ```
-Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
+1. Navigate to `http://localhost:8000`. If the page loads without errors, the PHP application is connecting to the MySQL database in Azure.
-Add a few tasks in the page.
+1. Add a few tasks in the page.
-![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
+ ![PHP connects successfully to Azure Database for MySQL](./media/tutorial-php-mysql-app/mysql-connect-success.png)
-To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, type `Ctrl + C` in the terminal.
### Commit your changes
You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php)
Laravel needs an application key in App Service. You can configure it with app settings.
-In the local terminal window, use `php artisan` to generate a new application key without saving it to _.env_.
+1. In the local terminal window, use `php artisan` to generate a new application key without saving it to _.env_.
-```bash
-php artisan key:generate --show
-```
+ ```bash
+ php artisan key:generate --show
+ ```
-In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command. Replace the placeholders _&lt;app-name>_ and _&lt;outputofphpartisankey:generate>_.
+1. In the Cloud Shell, set the application key in the App Service app by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command. Replace the placeholders _&lt;app-name>_ and _&lt;output_of_php_artisan_key:generate>_.
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
-```
+ ```azurecli-interactive
+ az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_KEY="<output_of_php_artisan_key:generate>" APP_DEBUG="true"
+ ```
-`APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
+ `APP_DEBUG="true"` tells Laravel to return debugging information when the deployed app encounters errors. When running a production application, set it to `false`, which is more secure.
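For example, when you later move the app to production, you can flip that flag with the same command; only the setting value changes:

```azurecli-interactive
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings APP_DEBUG="false"
```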
### Set the virtual application path
For more information, see [Change site root](configure-language-php.md#change-si
[!INCLUDE [app-service-plan-no-h](../../includes/app-service-web-git-push-to-azure-no-h.md)]
-<pre>
-Counting objects: 3, done.
-Delta compression using up to 8 threads.
-Compressing objects: 100% (3/3), done.
-Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
-Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'a5e076db9c'.
-remote: Running custom deployment command...
-remote: Running deployment command...
-...
-&lt; Output has been truncated for readability &gt;
-</pre>
-
-> [!NOTE]
-> You may notice that the deployment process installs [Composer](https://getcomposer.org/) packages at the end. App Service does not run these automations during default deployment, so this sample repository has three additional files in its root directory to enable it:
->
-> - `.deployment` - This file tells App Service to run `bash deploy.sh` as the custom deployment script.
-> - `deploy.sh` - The custom deployment script. If you review the file, you will see that it runs `php composer.phar install` after `npm install`.
-> - `composer.phar` - The Composer package manager.
->
-> You can use this approach to add any step to your Git-based deployment to App Service. For more information, see [Custom Deployment Script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
->
+ <pre>
+ Counting objects: 3, done.
+ Delta compression using up to 8 threads.
+ Compressing objects: 100% (3/3), done.
+ Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
+ Total 3 (delta 2), reused 0 (delta 0)
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id 'a5e076db9c'.
+ remote: Running custom deployment command...
+ remote: Running deployment command...
+ ...
+ &lt; Output has been truncated for readability &gt;
+ </pre>
+
+ > [!NOTE]
+ > You may notice that the deployment process installs [Composer](https://getcomposer.org/) packages at the end. App Service does not run these automations during default deployment, so this sample repository has three additional files in its root directory to enable it:
+ >
+ > - `.deployment` - This file tells App Service to run `bash deploy.sh` as the custom deployment script.
+ > - `deploy.sh` - The custom deployment script. If you review the file, you will see that it runs `php composer.phar install` after `npm install`.
+ > - `composer.phar` - The Composer package manager.
+ >
+ > You can use this approach to add any step to your Git-based deployment to App Service. For more information, see [Custom Deployment Script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script).
+ >
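For reference, a Kudu `.deployment` file that delegates to a custom script is typically a two-line INI file like the sketch below; the sample's own file should be equivalent, though it isn't reproduced in this excerpt:

```txt
[config]
command = bash deploy.sh
```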
::: zone-end
remote: Running deployment command...
[!INCLUDE [app-service-plan-no-h](../../includes/app-service-web-git-push-to-azure-no-h.md)]
-<pre>
-Counting objects: 3, done.
-Delta compression using up to 8 threads.
-Compressing objects: 100% (3/3), done.
-Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
-Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'a5e076db9c'.
-remote: Running custom deployment command...
-remote: Running deployment command...
-...
-&lt; Output has been truncated for readability &gt;
-</pre>
-
+ <pre>
+ Counting objects: 3, done.
+ Delta compression using up to 8 threads.
+ Compressing objects: 100% (3/3), done.
+ Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
+ Total 3 (delta 2), reused 0 (delta 0)
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id 'a5e076db9c'.
+ remote: Running custom deployment command...
+ remote: Running deployment command...
+ ...
+ &lt; Output has been truncated for readability &gt;
+ </pre>
+
::: zone-end

### Browse to the Azure app
For the tasks scenario, you modify the application so that you can mark a task as complete.
### Add a column
-In the local terminal window, navigate to the root of the Git repository.
+1. In the local terminal window, navigate to the root of the Git repository.
-Generate a new database migration for the `tasks` table:
+1. Generate a new database migration for the `tasks` table:
-```bash
-php artisan make:migration add_complete_column --table=tasks
-```
+ ```bash
+ php artisan make:migration add_complete_column --table=tasks
+ ```
-This command shows you the name of the migration file that's generated. Find this file in _database/migrations_ and open it.
+1. This command shows you the name of the migration file that's generated. Find this file in _database/migrations_ and open it.
-Replace the `up` method with the following code:
+1. Replace the `up` method with the following code:
-```php
-public function up()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->boolean('complete')->default(False);
- });
-}
-```
+ ```php
+ public function up()
+ {
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->boolean('complete')->default(False);
+ });
+ }
+ ```
-The preceding code adds a boolean column in the `tasks` table called `complete`.
+ The preceding code adds a boolean column in the `tasks` table called `complete`.
-Replace the `down` method with the following code for the rollback action:
+1. Replace the `down` method with the following code for the rollback action:
-```php
-public function down()
-{
- Schema::table('tasks', function (Blueprint $table) {
- $table->dropColumn('complete');
- });
-}
-```
+ ```php
+ public function down()
+ {
+ Schema::table('tasks', function (Blueprint $table) {
+ $table->dropColumn('complete');
+ });
+ }
+ ```
-In the local terminal window, run Laravel database migrations to make the change in the local database.
+1. In the local terminal window, run Laravel database migrations to make the change in the local database.
-```bash
-php artisan migrate
-```
+ ```bash
+ php artisan migrate
+ ```
-Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see _app/Task.php_) maps to the `tasks` table by default.
+ Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see _app/Task.php_) maps to the `tasks` table by default.
### Update application logic
-Open the *routes/web.php* file. The application defines its routes and business logic here.
-
-At the end of the file, add a route with the following code:
-
-```php
-/**
- * Toggle Task completeness
- */
-Route::post('/task/{id}', function ($id) {
- error_log('INFO: post /task/'.$id);
- $task = Task::findOrFail($id);
-
- $task->complete = !$task->complete;
- $task->save();
-
- return redirect('/');
-});
-```
+1. Open the *routes/web.php* file. The application defines its routes and business logic here.
+
+1. At the end of the file, add a route with the following code:
+
+ ```php
+ /**
+ * Toggle Task completeness
+ */
+ Route::post('/task/{id}', function ($id) {
+ error_log('INFO: post /task/'.$id);
+ $task = Task::findOrFail($id);
+
+ $task->complete = !$task->complete;
+ $task->save();
+
+ return redirect('/');
+ });
+ ```
-The preceding code makes a simple update to the data model by toggling the value of `complete`.
+ The preceding code makes a simple update to the data model by toggling the value of `complete`.
### Update the view
-Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
+1. Open the *resources/views/tasks.blade.php* file. Find the `<tr>` opening tag and replace it with:
-```html
-<tr class="{{ $task->complete ? 'success' : 'active' }}" >
-```
+ ```html
+ <tr class="{{ $task->complete ? 'success' : 'active' }}" >
+ ```
-The preceding code changes the row color depending on whether the task is complete.
+ The preceding code changes the row color depending on whether the task is complete.
-In the next line, you have the following code:
+1. In the next line, you have the following code:
-```html
-<td class="table-text"><div>{{ $task->name }}</div></td>
-```
+ ```html
+ <td class="table-text"><div>{{ $task->name }}</div></td>
+ ```
-Replace the entire line with the following code:
+ Replace the entire line with the following code:
-```html
-<td>
- <form action="{{ url('task/'.$task->id) }}" method="POST">
- {{ csrf_field() }}
+ ```html
+ <td>
+ <form action="{{ url('task/'.$task->id) }}" method="POST">
+ {{ csrf_field() }}
+
+ <button type="submit" class="btn btn-xs">
+ <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
+ </button>
+ {{ $task->name }}
+ </form>
+ </td>
+ ```
- <button type="submit" class="btn btn-xs">
- <i class="fa {{$task->complete ? 'fa-check-square-o' : 'fa-square-o'}}"></i>
- </button>
- {{ $task->name }}
- </form>
-</td>
-```
-
-The preceding code adds the submit button that references the route that you defined earlier.
+ The preceding code adds the submit button that references the route that you defined earlier.
### Test the changes locally
-In the local terminal window, run the development server from the root directory of the Git repository.
+1. In the local terminal window, run the development server from the root directory of the Git repository.
-```bash
-php artisan serve
-```
+ ```bash
+ php artisan serve
+ ```
-To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
+1. To see the task status change, navigate to `http://localhost:8000` and select the checkbox.
-![Added check box to task](./media/tutorial-php-mysql-app/complete-checkbox.png)
+ ![Added check box to task](./media/tutorial-php-mysql-app/complete-checkbox.png)
-To stop PHP, type `Ctrl + C` in the terminal.
+1. To stop PHP, type `Ctrl + C` in the terminal.
### Publish changes to Azure
-In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
+1. In the local terminal window, run Laravel database migrations with the production connection string to make the change in the Azure database.
-```bash
-php artisan migrate --env=production --force
-```
+ ```bash
+ php artisan migrate --env=production --force
+ ```
-Commit all the changes in Git, and then push the code changes to Azure.
+1. Commit all the changes in Git, and then push the code changes to Azure.
-```bash
-git add .
-git commit -m "added complete checkbox"
-git push azure main
-```
+ ```bash
+ git add .
+ git commit -m "added complete checkbox"
+ git push azure main
+ ```
-Once the `git push` is complete, navigate to the Azure app and test the new functionality.
+1. Once the `git push` is complete, navigate to the Azure app and test the new functionality.
-![Model and database changes published to Azure](media/tutorial-php-mysql-app/complete-checkbox-published.png)
+ ![Model and database changes published to Azure](media/tutorial-php-mysql-app/complete-checkbox-published.png)
If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
To stop log streaming at any time, type `Ctrl`+`C`.
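
If you need to restart the stream later, the typical command looks like the following sketch (replace `<app-name>` with your app's name):

```azurecli
az webapp log tail --name <app-name> --resource-group myResourceGroup
```
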
## Manage the Azure app
-Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
+1. Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
-From the left menu, click **App Services**, and then click the name of your Azure app.
+1. From the left menu, click **App Services**, and then click the name of your Azure app.
-![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
+ ![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
-You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart, browse, and delete.
+ You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart, browse, and delete (CLI equivalents are sketched after these steps).
-The left menu provides pages for configuring your app.
+ The left menu provides pages for configuring your app.
-![App Service page in Azure portal](./media/tutorial-php-mysql-app/web-app-blade.png)
+ ![App Service page in Azure portal](./media/tutorial-php-mysql-app/web-app-blade.png)
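
The same basic operations are also available from the Azure CLI if you prefer to script them. A sketch, with placeholder names:

```azurecli
az webapp stop --name <app-name> --resource-group myResourceGroup
az webapp start --name <app-name> --resource-group myResourceGroup
az webapp restart --name <app-name> --resource-group myResourceGroup
az webapp browse --name <app-name> --resource-group myResourceGroup
```
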
[!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)]
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-ruby-postgres-app.md
In this step, you create a database in your local Postgres server for your use in this tutorial.
### Connect to local Postgres server
-Open the terminal window and run `psql` to connect to your local Postgres server.
+1. Open the terminal window and run `psql` to connect to your local Postgres server.
-```bash
-sudo -u postgres psql
-```
+ ```bash
+ sudo -u postgres psql
+ ```
-If your connection is successful, your Postgres database is running. If not, make sure that your local Postgres database is started by following the steps at [Downloads - PostgreSQL Core Distribution](https://www.postgresql.org/download/).
+ If your connection is successful, your Postgres database is running. If not, make sure that your local Postgres database is started by following the steps at [Downloads - PostgreSQL Core Distribution](https://www.postgresql.org/download/).
-Type `\q` to exit the Postgres client.
+1. Type `\q` to exit the Postgres client.
-Create a Postgres user that can create databases by running the following command, using your signed-in Linux username.
+1. Create a Postgres user that can create databases by running the following command, using your signed-in Linux username.
-```bash
-sudo -u postgres createuser -d <signed-in-user>
-```
+ ```bash
+ sudo -u postgres createuser -d <signed-in-user>
+ ```
<a name="step2"></a>
In this step, you get a Ruby on Rails sample application, configure its database
### Clone the sample
-In the terminal window, `cd` to a working directory.
+1. In the terminal window, `cd` to a working directory.
-Run the following command to clone the sample repository.
+1. Clone the sample repository and change to the repository root.
-```bash
-git clone https://github.com/Azure-Samples/rubyrails-tasks.git
-```
+ ```bash
+ git clone https://github.com/Azure-Samples/rubyrails-tasks.git
+ cd rubyrails-tasks
+ ```
-`cd` to your cloned directory. Install the required packages.
+1. Make sure the default branch is `main`.
-```bash
-cd rubyrails-tasks
-bundle install --path vendor/bundle
-```
+ ```bash
+ git branch -m main
+ ```
+
+ > [!TIP]
+ > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
+
+1. Install the required packages.
+
+ ```bash
+ bundle install --path vendor/bundle
+ ```
### Run the sample locally
-Run [the Rails migrations](https://guides.rubyonrails.org/active_record_migrations.html#running-migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _db/migrate_ directory in the Git repository.
+1. Run [the Rails migrations](https://guides.rubyonrails.org/active_record_migrations.html#running-migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _db/migrate_ directory in the Git repository.
-```bash
-rake db:create
-rake db:migrate
-```
+ ```bash
+ rake db:create
+ rake db:migrate
+ ```
-Run the application.
+1. Run the application.
-```bash
-rails server
-```
+ ```bash
+ rails server
+ ```
-Navigate to `http://localhost:3000` in a browser. Add a few tasks in the page.
+1. Navigate to `http://localhost:3000` in a browser. Add a few tasks in the page.
-![Ruby on Rails connects successfully to Postgres](./media/tutorial-ruby-postgres-app/postgres-connect-success.png)
+ ![Ruby on Rails connects successfully to Postgres](./media/tutorial-ruby-postgres-app/postgres-connect-success.png)
-To stop the Rails server, type `Ctrl + C` in the terminal.
+1. To stop the Rails server, type `Ctrl + C` in the terminal.
## Create Postgres in Azure
In this step, you create a Postgres database in [Azure Database for PostgreSQL](
<!-- > [!NOTE] > Before you create an Azure Database for PostgreSQL server, check which [compute generation](../postgresql/concepts-pricing-tiers.md#compute-generations-and-vcores) is available in your region. If your region doesn't support Gen4 hardware, change *--sku-name* in the following command line to a value that's supported in your region, such as B_Gen4_1. -->
-In this section, you create an Azure Database for PostgreSQL server and database. To start, install the `db-up` extension with the following command:
-
-```azurecli
-az extension add --name db-up
-```
-
-Create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az_postgres_up) command, as shown in the following example. Replace *\<postgresql-name>* with a *unique* name (the server endpoint is *https://\<postgresql-name>.postgres.database.azure.com*). For *\<admin-username>* and *\<admin-password>*, specify credentials to create an administrator user for this Postgres server.
-
-<!-- Issue: without --location -->
-```azurecli
-az postgres up --resource-group myResourceGroup --location westeurope --server-name <postgresql-name> --database-name sampledb --admin-user <admin-username> --admin-password <admin-password> --ssl-enforcement Enabled
-```
-
-This command may take a while because it's doing the following:
-- Creates a [resource group](../azure-resource-manager/management/overview.md#terminology) called `myResourceGroup`, if it doesn't exist. Every Azure resource needs to be in one of these. `--resource-group` is optional.
-- Creates a Postgres server with the administrative user.
-- Creates a `sampledb` database.
-- Allows access from your local IP address.
-- Allows access from Azure services.
-- Create a database user with access to the `sampledb` database.
-
-You can do all the steps separately with other `az postgres` commands and `psql`, but `az postgres up` does all of them in one step for you.
-
-When the command finishes, find the output lines that being with `Ran Database Query:`. They show the database user that's created for you, with the username `root` and password `Sampledb1`. You'll use them later to connect your app to the database.
-
-<!-- not all locations support az postgres up -->
-> [!TIP]
-> `--location <location-name>`, can be set to any one of the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az_account_list_locations) command. For production apps, put your database and your app in the same location.
-
+1. Install the `db-up` extension with the following command:
+
+ ```azurecli
+ az extension add --name db-up
+ ```
+
+1. Create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az_postgres_up) command, as shown in the following example. Replace *\<postgresql-name>* with a *unique* name (the server endpoint is *https://\<postgresql-name>.postgres.database.azure.com*). For *\<admin-username>* and *\<admin-password>*, specify credentials to create an administrator user for this Postgres server.
+
+ <!-- Issue: without --location -->
+ ```azurecli
+ az postgres up --resource-group myResourceGroup --location westeurope --server-name <postgresql-name> --database-name sampledb --admin-user <admin-username> --admin-password <admin-password> --ssl-enforcement Enabled
+ ```
+
+ This command may take a while because it's doing the following:
+
+ - Creates a [resource group](../azure-resource-manager/management/overview.md#terminology) called `myResourceGroup`, if it doesn't exist. Every Azure resource needs to be in one of these. `--resource-group` is optional.
+ - Creates a Postgres server with the administrative user.
+ - Creates a `sampledb` database.
+ - Allows access from your local IP address.
+ - Allows access from Azure services.
+ - Creates a database user with access to the `sampledb` database.
+
+ You can do all the steps separately with other `az postgres` commands and `psql`, but `az postgres up` does all of them in one step for you (a rough equivalent is sketched after this procedure).
+
+ When the command finishes, find the output lines that begin with `Ran Database Query:`. They show the database user that's created for you, with the username `root` and password `Sampledb1`. You'll use them later to connect your app to the database.
+
+ <!-- not all locations support az postgres up -->
+ > [!TIP]
+ > `--location <location-name>` can be set to any one of the [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az_account_list_locations) command. For production apps, put your database and your app in the same location.
+
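
For reference, a rough equivalent of the individual steps that `az postgres up` automates is sketched below. The parameter values are placeholders, and the application-level database user would still be created separately with `psql`:

```azurecli
# Create the server with an administrative user (sketch; values are placeholders).
az postgres server create --resource-group myResourceGroup --name <postgresql-name> --location westeurope --admin-user <admin-username> --admin-password <admin-password> --sku-name B_Gen5_1

# Create the sampledb database on that server.
az postgres db create --resource-group myResourceGroup --server-name <postgresql-name> --name sampledb

# Allow access from Azure services (0.0.0.0 is the sentinel address for this rule).
az postgres server firewall-rule create --resource-group myResourceGroup --server-name <postgresql-name> --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
```
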
## Connect app to Azure Postgres

In this step, you connect the Ruby on Rails application to the Postgres database you created in Azure Database for PostgreSQL.
Save the changes.
### Test the application locally
-Back in the local terminal, set the following environment variables:
+1. Back in the local terminal, set the following environment variables:
-```bash
-export DB_HOST=<postgres-server-name>.postgres.database.azure.com
-export DB_DATABASE=sampledb
-export DB_USERNAME=root@<postgres-server-name>
-export DB_PASSWORD=Sampledb1
-```
+ ```bash
+ export DB_HOST=<postgres-server-name>.postgres.database.azure.com
+ export DB_DATABASE=sampledb
+ export DB_USERNAME=root@<postgres-server-name>
+ export DB_PASSWORD=Sampledb1
+ ```
-Run Rails database migrations with the production values you just configured to create the tables in your Postgres database in Azure Database for PostgreSQL.
+1. Run Rails database migrations with the production values you just configured to create the tables in your Postgres database in Azure Database for PostgreSQL.
-```bash
-rake db:migrate RAILS_ENV=production
-```
+ ```bash
+ rake db:migrate RAILS_ENV=production
+ ```
-When running in the production environment, the Rails application needs precompiled assets. Generate the required assets with the following command:
+1. When running in the production environment, the Rails application needs precompiled assets. Generate the required assets with the following command:
-```bash
-rake assets:precompile
-```
+ ```bash
+ rake assets:precompile
+ ```
-The Rails production environment also uses secrets to manage security. Generate a secret key.
+1. The Rails production environment also uses secrets to manage security. Generate a secret key.
-```bash
-rails secret
-```
+ ```bash
+ rails secret
+ ```
-Save the secret key to the respective variables used by the Rails production environment. For convenience, you use the same key for both variables.
+1. Save the secret key to the respective variables used by the Rails production environment. For convenience, you use the same key for both variables.
-```bash
-export RAILS_MASTER_KEY=<output-of-rails-secret>
-export SECRET_KEY_BASE=<output-of-rails-secret>
-```
+ ```bash
+ export RAILS_MASTER_KEY=<output-of-rails-secret>
+ export SECRET_KEY_BASE=<output-of-rails-secret>
+ ```
-Enable the Rails production environment to serve JavaScript and CSS files.
+1. Enable the Rails production environment to serve JavaScript and CSS files.
-```bash
-export RAILS_SERVE_STATIC_FILES=true
-```
+ ```bash
+ export RAILS_SERVE_STATIC_FILES=true
+ ```
-Run the sample application in the production environment.
+1. Run the sample application in the production environment.
-```bash
-rails server -e production
-```
+ ```bash
+ rails server -e production
+ ```
-Navigate to `http://localhost:3000`. If the page loads without errors, the Ruby on Rails application is connecting to the Postgres database in Azure.
+1. Navigate to `http://localhost:3000`. If the page loads without errors, the Ruby on Rails application is connecting to the Postgres database in Azure.
-Add a few tasks in the page.
+1. Add a few tasks in the page.
-![Ruby on Rails connects successfully to Azure Database for PostgreSQL](./media/tutorial-ruby-postgres-app/azure-postgres-connect-success.png)
+ ![Ruby on Rails connects successfully to Azure Database for PostgreSQL](./media/tutorial-ruby-postgres-app/azure-postgres-connect-success.png)
-To stop the Rails server, type `Ctrl + C` in the terminal.
+1. To stop the Rails server, type `Ctrl + C` in the terminal.
### Commit your changes
-Run the following Git commands to commit your changes:
+1. Run the following Git commands to commit your changes:
-```bash
-git add .
-git commit -m "database.yml updates"
-```
+ ```bash
+ git add .
+ git commit -m "database.yml updates"
+ ```
Your app is ready to be deployed.
az webapp config appsettings set --name <app-name> --resource-group myResourceGr
### Configure Rails environment variables
-In the local terminal, [generate a new secret](configure-language-ruby.md#set-secret_key_base-manually) for the Rails production environment in Azure.
+1. In the local terminal, [generate a new secret](configure-language-ruby.md#set-secret_key_base-manually) for the Rails production environment in Azure.
-```bash
-rails secret
-```
+ ```bash
+ rails secret
+ ```
-Configure the variables required by Rails production environment.
+1. In the following Cloud Shell command, replace the two _&lt;output-of-rails-secret>_ placeholders with the new secret key you generated in the local terminal.
-In the following Cloud Shell command, replace the two _&lt;output-of-rails-secret>_ placeholders with the new secret key you generated in the local terminal.
+ ```azurecli-interactive
+ az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings RAILS_MASTER_KEY="<output-of-rails-secret>" SECRET_KEY_BASE="<output-of-rails-secret>" RAILS_SERVE_STATIC_FILES="true" ASSETS_PRECOMPILE="true"
+ ```
-```azurecli-interactive
-az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings RAILS_MASTER_KEY="<output-of-rails-secret>" SECRET_KEY_BASE="<output-of-rails-secret>" RAILS_SERVE_STATIC_FILES="true" ASSETS_PRECOMPILE="true"
-```
-
-`ASSETS_PRECOMPILE="true"` tells the default Ruby container to precompile assets at each Git deployment. For more information, see [Precompile assets](configure-language-ruby.md#precompile-assets) and [Serve static assets](configure-language-ruby.md#serve-static-assets).
+ `ASSETS_PRECOMPILE="true"` tells the default Ruby container to precompile assets at each Git deployment. For more information, see [Precompile assets](configure-language-ruby.md#precompile-assets) and [Serve static assets](configure-language-ruby.md#serve-static-assets).
### Push to Azure from Git
-In the local terminal, add an Azure remote to your local Git repository.
-
-```bash
-git remote add azure <paste-copied-url-here>
-```
-
-Push to the Azure remote to deploy the Ruby on Rails application. You are prompted for the password you supplied earlier as part of the creation of the deployment user.
-
-```bash
-git push azure main
-```
-
-During deployment, Azure App Service communicates its progress with Git.
-
-<pre>
-Counting objects: 3, done.
-Delta compression using up to 8 threads.
-Compressing objects: 100% (3/3), done.
-Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
-Total 3 (delta 2), reused 0 (delta 0)
-remote: Updating branch 'main'.
-remote: Updating submodules.
-remote: Preparing deployment for commit id 'a5e076db9c'.
-remote: Running custom deployment command...
-remote: Running deployment command...
-...
-&lt; Output has been truncated for readability &gt;
-</pre>
-
+1. Since you're deploying the `main` branch, you need to set the default deployment branch for your App Service app to `main` (see [Change deployment branch](deploy-local-git.md#change-deployment-branch)). In the Cloud Shell, set the `DEPLOYMENT_BRANCH` app setting with the [`az webapp config appsettings set`](/cli/azure/webapp/appsettings#az_webapp_config_appsettings_set) command.
+
+ ```azurecli-interactive
+ az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DEPLOYMENT_BRANCH='main'
+ ```
+
+1. In the local terminal, add an Azure remote to your local Git repository.
+
+ ```bash
+ git remote add azure <paste-copied-url-here>
+ ```
+
+1. Push to the Azure remote to deploy the Ruby on Rails application. You are prompted for the password you supplied earlier as part of the creation of the deployment user.
+
+ ```bash
+ git push azure main
+ ```
+
+ During deployment, Azure App Service communicates its progress with Git.
+
+ <pre>
+ Counting objects: 3, done.
+ Delta compression using up to 8 threads.
+ Compressing objects: 100% (3/3), done.
+ Writing objects: 100% (3/3), 291 bytes | 0 bytes/s, done.
+ Total 3 (delta 2), reused 0 (delta 0)
+ remote: Updating branch 'main'.
+ remote: Updating submodules.
+ remote: Preparing deployment for commit id 'a5e076db9c'.
+ remote: Running custom deployment command...
+ remote: Running deployment command...
+ ...
+ &lt; Output has been truncated for readability &gt;
+ </pre>
+
### Browse to the Azure app

Browse to `http://<app-name>.azurewebsites.net` and add a few tasks to the list.
For the tasks scenario, you modify the application so that you can mark a task as complete.
### Add a column
-In the terminal, navigate to the root of the Git repository.
+1. In the terminal, navigate to the root of the Git repository.
-Generate a new migration that adds a boolean column called `Done` to the `Tasks` table:
+1. Generate a new migration that adds a boolean column called `Done` to the `Tasks` table:
-```bash
-rails generate migration AddDoneToTasks Done:boolean
-```
-
-This command generates a new migration file in the _db/migrate_ directory.
+ ```bash
+ rails generate migration AddDoneToTasks Done:boolean
+ ```
+ This command generates a new migration file in the _db/migrate_ directory.
-In the terminal, run Rails database migrations to make the change in the local database.
+1. In the terminal, run Rails database migrations to make the change in the local database.
-```bash
-rake db:migrate
-```
+ ```bash
+ rake db:migrate
+ ```
### Update application logic
-Open the *app/controllers/tasks_controller.rb* file. At the end of the file, find the following line:
+1. Open the *app/controllers/tasks_controller.rb* file. At the end of the file, find the following line:
-```rb
-params.require(:task).permit(:Description)
-```
+ ```rb
+ params.require(:task).permit(:Description)
+ ```
-Modify this line to include the new `Done` parameter.
+1. Modify this line to include the new `Done` parameter.
-```rb
-params.require(:task).permit(:Description, :Done)
-```
+ ```rb
+ params.require(:task).permit(:Description, :Done)
+ ```
### Update the views
-Open the *app/views/tasks/_form.html.erb* file, which is the Edit form.
+1. Open the *app/views/tasks/_form.html.erb* file, which is the Edit form.
-Find the line `<%=f.error_span(:Description) %>` and insert the following code directly below it:
+1. Find the line `<%=f.error_span(:Description) %>` and insert the following code directly below it:
-```erb
-<%= f.label :Done, :class => 'control-label col-lg-2' %>
-<div class="col-lg-10">
- <%= f.check_box :Done, :class => 'form-control' %>
-</div>
-```
+ ```erb
+ <%= f.label :Done, :class => 'control-label col-lg-2' %>
+ <div class="col-lg-10">
+ <%= f.check_box :Done, :class => 'form-control' %>
+ </div>
+ ```
-Open the *app/views/tasks/show.html.erb* file, which is the single-record View page.
+1. Open the *app/views/tasks/show.html.erb* file, which is the single-record View page.
-Find the line `<dd><%= @task.Description %></dd>` and insert the following code directly below it:
+ Find the line `<dd><%= @task.Description %></dd>` and insert the following code directly below it:
-```erb
- <dt><strong><%= model_class.human_attribute_name(:Done) %>:</strong></dt>
- <dd><%= check_box "task", "Done", {:checked => @task.Done, :disabled => true}%></dd>
-```
+ ```erb
+ <dt><strong><%= model_class.human_attribute_name(:Done) %>:</strong></dt>
+ <dd><%= check_box "task", "Done", {:checked => @task.Done, :disabled => true}%></dd>
+ ```
-Open the *app/views/tasks/index.html.erb* file, which is the Index page for all records.
+ Open the *app/views/tasks/index.html.erb* file, which is the Index page for all records.
-Find the line `<th><%= model_class.human_attribute_name(:Description) %></th>` and insert the following code directly below it:
+ Find the line `<th><%= model_class.human_attribute_name(:Description) %></th>` and insert the following code directly below it:
-```erb
-<th><%= model_class.human_attribute_name(:Done) %></th>
-```
+ ```erb
+ <th><%= model_class.human_attribute_name(:Done) %></th>
+ ```
-In the same file, find the line `<td><%= task.Description %></td>` and insert the following code directly below it:
+1. In the same file, find the line `<td><%= task.Description %></td>` and insert the following code directly below it:
-```erb
-<td><%= check_box "task", "Done", {:checked => task.Done, :disabled => true} %></td>
-```
+ ```erb
+ <td><%= check_box "task", "Done", {:checked => task.Done, :disabled => true} %></td>
+ ```
### Test the changes locally
-In the local terminal, run the Rails server.
+1. In the local terminal, run the Rails server.
-```bash
-rails server
-```
+ ```bash
+ rails server
+ ```
-To see the task status change, navigate to `http://localhost:3000` and add or edit items.
+1. To see the task status change, navigate to `http://localhost:3000` and add or edit items.
-![Added check box to task](./media/tutorial-ruby-postgres-app/complete-checkbox.png)
+ ![Added check box to task](./media/tutorial-ruby-postgres-app/complete-checkbox.png)
-To stop the Rails server, type `Ctrl + C` in the terminal.
+1. To stop the Rails server, type `Ctrl + C` in the terminal.
### Publish changes to Azure
-In the terminal, run Rails database migrations for the production environment to make the change in the Azure database.
+1. In the terminal, run Rails database migrations for the production environment to make the change in the Azure database.
-```bash
-rake db:migrate RAILS_ENV=production
-```
+ ```bash
+ rake db:migrate RAILS_ENV=production
+ ```
-Commit all the changes in Git, and then push the code changes to Azure.
+1. Commit all the changes in Git, and then push the code changes to Azure.
-```bash
-git add .
-git commit -m "added complete checkbox"
-git push azure main
-```
+ ```bash
+ git add .
+ git commit -m "added complete checkbox"
+ git push azure main
+ ```
-Once the `git push` is complete, navigate to the Azure app and test the new functionality.
+1. Once the `git push` is complete, navigate to the Azure app and test the new functionality.
-![Model and database changes published to Azure](media/tutorial-ruby-postgres-app/complete-checkbox-published.png)
+ ![Model and database changes published to Azure](media/tutorial-ruby-postgres-app/complete-checkbox-published.png)
-If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
+ If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.
## Stream diagnostic logs
If you added any tasks, they are retained in the database. Updates to the data s
## Manage the Azure app
-Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
+1. Go to the [Azure portal](https://portal.azure.com) to manage the app you created.
-From the left menu, click **App Services**, and then click the name of your Azure app.
+1. From the left menu, click **App Services**, and then click the name of your Azure app.
-![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
+ ![Portal navigation to Azure app](./media/tutorial-php-mysql-app/access-portal.png)
-You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart, browse, and delete.
+ You see your app's Overview page. Here, you can perform basic management tasks like stop, start, restart, browse, and delete.
-The left menu provides pages for configuring your app.
+ The left menu provides pages for configuring your app.
-![App Service page in Azure portal](./media/tutorial-php-mysql-app/web-app-blade.png)
+ ![App Service page in Azure portal](./media/tutorial-php-mysql-app/web-app-blade.png)
[!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)]
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-troubleshoot-monitor.md
To complete this tutorial, you'll need:
## Create Azure resources
-First, you run several commands locally to setup a sample app to use with this tutorial. The commands clone a sample app, create Azure resources, create a deployment user and deploy the app to Azure. You'll be prompted for the password supplied as a part of the creation of the deployment user.
+First, you run several commands locally to set up a sample app to use with this tutorial. The commands create Azure resources, create a deployment user, and deploy the sample app to Azure. You'll be prompted for the password supplied as part of the creation of the deployment user.
```bash
-git clone https://github.com/Azure-Samples/App-Service-Troubleshoot-Azure-Monitor
az group create --name myResourceGroup --location "South Central US"
az webapp deployment user set --user-name <username> --password <password>
az appservice plan create --name myAppServicePlan --resource-group myResourceGroup --sku B1 --is-linux
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
-git remote add azure <url_from_previous_step>
+az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DEPLOYMENT_BRANCH='main'
+git clone https://github.com/Azure-Samples/App-Service-Troubleshoot-Azure-Monitor
+cd App-Service-Troubleshoot-Azure-Monitor
+git branch -m main
+git remote add azure <url-from-app-webapp-create>
git push azure main
```
application-gateway Rewrite Url Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/rewrite-url-portal.md
In the below example whenever the request URL contains */article*, the URL path
e. In the **URL query string value**, enter the new value of the URL query string. In this example, we will use **id={var_uri_path_1}&title={var_uri_path_2}**
- `{var_uri_path_1}` and `{var_uri_path_1}` are used to fetch the substrings captured while evaluating the condition in this expression `.*article/(.*)/(.*)`
+ `{var_uri_path_1}` and `{var_uri_path_2}` are used to fetch the substrings captured while evaluating the condition in this expression `.*article/(.*)/(.*)`. For example, a request path that ends in */article/123/fabrikam* produces the query string **id=123&title=fabrikam**.
f. Select **OK**.
For more information on all the fields in the access logs, see [here](applicatio
## Next steps
-To learn more about how to set up rewrites for some common use cases, see [common rewrite scenarios](./rewrite-http-headers-url.md).
+To learn more about how to set up rewrites for some common use cases, see [common rewrite scenarios](./rewrite-http-headers-url.md).
applied-ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-cache-token.md
+
+ Title: "Cache the authentication token"
+
+description: This article will show you how to cache the authentication token.
++++++ Last updated : 01/14/2020++++
+# How to cache the authentication token
+
+This article demonstrates how to cache the authentication token in order to improve performance of your application.
+
+## Using ASP.NET
+
+Import the **Microsoft.IdentityModel.Clients.ActiveDirectory** NuGet package, which is used to acquire a token. Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+
+```csharp
+private async Task<AuthenticationResult> GetTokenAsync()
+{
+ AuthenticationContext authContext = new AuthenticationContext($"https://login.windows.net/{TENANT_ID}");
+ ClientCredential clientCredential = new ClientCredential(CLIENT_ID, CLIENT_SECRET);
+ AuthenticationResult authResult = await authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", clientCredential);
+ return authResult;
+}
+```
+
+The `AuthenticationResult` object has an `AccessToken` property which is the actual token you will use when launching the Immersive Reader using the SDK. It also has an `ExpiresOn` property which denotes when the token will expire. Before launching the Immersive Reader, you can check whether the token has expired, and acquire a new token only if it has expired.
+
+## Using Node.JS
+
+Add the [**request**](https://www.npmjs.com/package/request) npm package to your project. Use the following code to acquire a token, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+
+```javascript
+router.get('/token', function(req, res) {
+ request.post(
+ {
+ headers: { 'content-type': 'application/x-www-form-urlencoded' },
+ url: `https://login.windows.net/${TENANT_ID}/oauth2/token`,
+ form: {
+ grant_type: 'client_credentials',
+ client_id: CLIENT_ID,
+ client_secret: CLIENT_SECRET,
+ resource: 'https://cognitiveservices.azure.com/'
+ }
+ },
+ function(err, resp, json) {
+ const result = JSON.parse(json);
+ return res.send({
+ access_token: result.access_token,
+ expires_on: result.expires_on
+ });
+ }
+ );
+});
+```
+
+The `expires_on` property is the date and time at which the token expires, expressed as the number of seconds since January 1, 1970 UTC. Use this value to determine whether your token has expired before attempting to acquire a new one.
+
+```javascript
+async function getToken() {
+ if (Date.now() / 1000 > CREDENTIALS.expires_on) {
+ CREDENTIALS = await refreshCredentials();
+ }
+ return CREDENTIALS.access_token;
+}
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Configure Read Aloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-configure-read-aloud.md
+
+ Title: "Configure Read Aloud"
+
+description: This article will show you how to configure the various options for Read Aloud.
++++++ Last updated : 06/29/2020+++
+# How to configure Read Aloud
+
+This article demonstrates how to configure the various options for Read Aloud in the Immersive Reader.
+
+## Automatically start Read Aloud
+
+The `options` parameter contains all of the flags that can be used to configure Read Aloud. Set `autoplay` to `true` to enable automatically starting Read Aloud after launching the Immersive Reader.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ autoplay: true
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+> [!NOTE]
+> Due to browser limitations, automatic playback is not supported in Safari.
+
+## Configure the voice
+
+Set `voice` to either `male` or `female`. Not all languages support both voices. For more information, see the [Language Support](./language-support.md) page.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ voice: 'female'
+ }
+};
+```
+
+## Configure playback speed
+
+Set `speed` to a number between `0.5` (50%) and `2.5` (250%) inclusive. Values outside this range will get clamped to either 0.5 or 2.5.
+
+```typescript
+const options = {
+ readAloudOptions: {
+ speed: 1.5
+ }
+};
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Configure Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-configure-translation.md
+
+ Title: "Configure translation"
+
+description: This article will show you how to configure the various options for translation.
++++++ Last updated : 06/29/2020+++
+# How to configure translation
+
+This article demonstrates how to configure the various options for translation in the Immersive Reader.
+
+## Configure translation language
+
+The `options` parameter contains all of the flags that can be used to configure translation. Set the `language` parameter to the language you wish to translate to. See the [Language Support](./language-support.md) page for the full list of supported languages.
+
+```typescript
+const options = {
+ translationOptions: {
+ language: 'fr-FR'
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Automatically translate the document on load
+
+Set `autoEnableDocumentTranslation` to `true` to enable automatically translating the entire document when the Immersive Reader loads.
+
+```typescript
+const options = {
+ translationOptions: {
+ autoEnableDocumentTranslation: true
+ }
+};
+```
+
+## Automatically enable word translation
+
+Set `autoEnableWordTranslation` to `true` to enable single word translation.
+
+```typescript
+const options = {
+ translationOptions: {
+ autoEnableWordTranslation: true
+ }
+};
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Create Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-create-immersive-reader.md
+
+ Title: "Create an Immersive Reader Resource"
+
+description: This article will show you how to create a new Immersive Reader resource with a custom subdomain and then configure Azure AD in your Azure tenant.
+++++++ Last updated : 07/22/2019+++
+# Create an Immersive Reader resource and configure Azure Active Directory authentication
+
+In this article, we provide a script that will create an Immersive Reader resource and configure Azure Active Directory (Azure AD) authentication. Each time an Immersive Reader resource is created, whether with this script or in the portal, it must also be configured with Azure AD permissions. This script will help you with that.
+
+The script is designed to create and configure all the necessary Immersive Reader and Azure AD resources for you all in one step. However, you can also just configure Azure AD authentication for an existing Immersive Reader resource, if for instance, you happen to have already created one in the Azure portal.
+
+For some customers, it may be necessary to create multiple Immersive Reader resources, for development vs. production, or perhaps for multiple different regions your service is deployed in. For those cases, you can come back and use the script multiple times to create different Immersive Reader resources and get them configured with the Azure AD permissions.
+
+The script is designed to be flexible. It will first look for existing Immersive Reader and Azure AD resources in your subscription, and create them only as necessary if they don't already exist. If it's your first time creating an Immersive Reader resource, the script will do everything you need. If you want to use it just to configure Azure AD for an existing Immersive Reader resource that was created in the portal, it will do that too. It can also be used to create and configure multiple Immersive Reader resources.
+
+## Set up PowerShell environment
+
+1. Start by opening the [Azure Cloud Shell](../../cloud-shell/overview.md). Ensure that Cloud Shell is set to PowerShell in the upper-left hand dropdown or by typing `pwsh`.
+
+1. Copy and paste the following code snippet into the shell.
+
+ ```azurepowershell-interactive
+ function Create-ImmersiveReaderResource(
+ [Parameter(Mandatory=$true, Position=0)] [String] $SubscriptionName,
+ [Parameter(Mandatory=$true)] [String] $ResourceName,
+ [Parameter(Mandatory=$true)] [String] $ResourceSubdomain,
+ [Parameter(Mandatory=$true)] [String] $ResourceSKU,
+ [Parameter(Mandatory=$true)] [String] $ResourceLocation,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupName,
+ [Parameter(Mandatory=$true)] [String] $ResourceGroupLocation,
+ [Parameter(Mandatory=$true)] [String] $AADAppDisplayName="ImmersiveReaderAAD",
+ [Parameter(Mandatory=$true)] [String] $AADAppIdentifierUri,
+ [Parameter(Mandatory=$true)] [String] $AADAppClientSecret,
+ [Parameter(Mandatory=$true)] [String] $AADAppClientSecretExpiration
+ )
+ {
+ $unused = ''
+ if (-not [System.Uri]::TryCreate($AADAppIdentifierUri, [System.UriKind]::Absolute, [ref] $unused)) {
+ throw "Error: AADAppIdentifierUri must be a valid URI"
+ }
+
+ Write-Host "Setting the active subscription to '$SubscriptionName'"
+ $subscriptionExists = Get-AzSubscription -SubscriptionName $SubscriptionName
+ if (-not $subscriptionExists) {
+ throw "Error: Subscription does not exist"
+ }
+ az account set --subscription $SubscriptionName
+
+ $resourceGroupExists = az group exists --name $ResourceGroupName
+ if ($resourceGroupExists -eq "false") {
+ Write-Host "Resource group does not exist. Creating resource group"
+ $groupResult = az group create --name $ResourceGroupName --location $ResourceGroupLocation
+ if (-not $groupResult) {
+ throw "Error: Failed to create resource group"
+ }
+ Write-Host "Resource group created successfully"
+ }
+
+ # Create an Immersive Reader resource if it doesn't already exist
+ $resourceId = az cognitiveservices account show --resource-group $ResourceGroupName --name $ResourceName --query "id" -o tsv
+ if (-not $resourceId) {
+ Write-Host "Creating the new Immersive Reader resource '$ResourceName' (SKU '$ResourceSKU') in '$ResourceLocation' with subdomain '$ResourceSubdomain'"
+ $resourceId = az cognitiveservices account create `
+ --name $ResourceName `
+ --resource-group $ResourceGroupName `
+ --kind ImmersiveReader `
+ --sku $ResourceSKU `
+ --location $ResourceLocation `
+ --custom-domain $ResourceSubdomain `
+ --query "id" `
+ -o tsv
+
+ if (-not $resourceId) {
+ throw "Error: Failed to create Immersive Reader resource"
+ }
+ Write-Host "Immersive Reader resource created successfully"
+ }
+
+ # Create an Azure Active Directory app if it doesn't already exist
+ $clientId = az ad app show --id $AADAppIdentifierUri --query "appId" -o tsv
+ if (-not $clientId) {
+ Write-Host "Creating new Azure Active Directory app"
+ $clientId = az ad app create --password $AADAppClientSecret --end-date "$AADAppClientSecretExpiration" --display-name $AADAppDisplayName --identifier-uris $AADAppIdentifierUri --query "appId" -o tsv
+
+ if (-not $clientId) {
+ throw "Error: Failed to create Azure Active Directory app"
+ }
+ Write-Host "Azure Active Directory app created successfully."
+ Write-Host "NOTE: To manage your Active Directory app client secrets after this Immersive Reader Resource has been created please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> $AADAppDisplayName -> Certificates and Secrets blade -> Client Secrets section" -ForegroundColor Yellow
+ }
+
+ # Create a service principal if it doesn't already exist
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+ if (-not $principalId) {
+ Write-Host "Creating new service principal"
+ az ad sp create --id $clientId | Out-Null
+ $principalId = az ad sp show --id $AADAppIdentifierUri --query "objectId" -o tsv
+
+ if (-not $principalId) {
+ throw "Error: Failed to create new service principal"
+ }
+ Write-Host "New service principal created successfully"
+ }
+
+ # Sleep for 5 seconds to allow the new service principal to propagate
+ Write-Host "Sleeping for 5 seconds"
+ Start-Sleep -Seconds 5
+
+ Write-Host "Granting service principal access to the newly created Immersive Reader resource"
+ $accessResult = az role assignment create --assignee $principalId --scope $resourceId --role "Cognitive Services User"
+ if (-not $accessResult) {
+ throw "Error: Failed to grant service principal access"
+ }
+ Write-Host "Service principal access granted successfully"
+
+ # Grab the tenant ID, which is needed when obtaining an Azure AD token
+ $tenantId = az account show --query "tenantId" -o tsv
+
+ # Collect the information needed to obtain an Azure AD token into one object
+ $result = @{}
+ $result.TenantId = $tenantId
+ $result.ClientId = $clientId
+ $result.ClientSecret = $AADAppClientSecret
+ $result.Subdomain = $ResourceSubdomain
+
+ Write-Host "Success! " -ForegroundColor Green -NoNewline
+ Write-Host "Save the following JSON object to a text file for future reference:"
+ Write-Output (ConvertTo-Json $result)
+ }
+ ```
+
+1. Run the function `Create-ImmersiveReaderResource`, replacing the '<PARAMETER_VALUES>' placeholders below with your own values as appropriate.
+
+ ```azurepowershell-interactive
+ Create-ImmersiveReaderResource -SubscriptionName '<SUBSCRIPTION_NAME>' -ResourceName '<RESOURCE_NAME>' -ResourceSubdomain '<RESOURCE_SUBDOMAIN>' -ResourceSKU '<RESOURCE_SKU>' -ResourceLocation '<RESOURCE_LOCATION>' -ResourceGroupName '<RESOURCE_GROUP_NAME>' -ResourceGroupLocation '<RESOURCE_GROUP_LOCATION>' -AADAppDisplayName '<AAD_APP_DISPLAY_NAME>' -AADAppIdentifierUri '<AAD_APP_IDENTIFIER_URI>' -AADAppClientSecret '<AAD_APP_CLIENT_SECRET>' -AADAppClientSecretExpiration '<AAD_APP_CLIENT_SECRET_EXPIRATION>'
+ ```
+
+ The full command will look something like the following. Here we have put each parameter on its own line for clarity, so you can see the whole command. Do not copy or use this command as-is. Copy and use the command above with your own values. This example has dummy values for the '<PARAMETER_VALUES>' above. Yours will be different, as you will come up with your own names for these values.
+
+ ```
+ Create-ImmersiveReaderResource
+ -SubscriptionName 'MyOrganizationSubscriptionName'
+ -ResourceName 'MyOrganizationImmersiveReader'
+ -ResourceSubdomain 'MyOrganizationImmersiveReader'
+ -ResourceSKU 'S0'
+ -ResourceLocation 'westus2'
+ -ResourceGroupName 'MyResourceGroupName'
+ -ResourceGroupLocation 'westus2'
+ -AADAppDisplayName 'MyOrganizationImmersiveReaderAADApp'
+ -AADAppIdentifierUri 'https://MyOrganizationImmersiveReaderAADApp'
+ -AADAppClientSecret 'SomeStrongPassword'
+ -AADAppClientSecretExpiration '2021-12-31'
+ ```
+
+ | Parameter | Comments |
+ | | |
+ | SubscriptionName |Name of the Azure subscription to use for your Immersive Reader resource. You must have a subscription in order to create a resource. |
+ | ResourceName | Must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters.|
+ | ResourceSubdomain |A custom subdomain is needed for your Immersive Reader resource. The subdomain is used by the SDK when calling the Immersive Reader service to launch the Reader. The subdomain must be globally unique. The subdomain must be alphanumeric, and may contain '-', as long as the '-' is not the first or last character. Length may not exceed 63 characters. This parameter is optional if the resource already exists. |
+ | ResourceSKU |Options: `S0` (Standard tier) or `S1` (Education/Nonprofit organizations). Visit our [Cognitive Services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/immersive-reader/) to learn more about each available SKU. This parameter is optional if the resource already exists. |
+ | ResourceLocation |Options: `eastus`, `eastus2`, `southcentralus`, `westus`, `westus2`, `australiaeast`, `southeastasia`, `centralindia`, `japaneast`, `northeurope`, `uksouth`, `westeurope`. This parameter is optional if the resource already exists. |
+ | ResourceGroupName |Resources are created in resource groups within subscriptions. Supply the name of an existing resource group. If the resource group does not already exist, a new one with this name will be created. |
+ | ResourceGroupLocation |If your resource group doesn't exist, you need to supply a location in which to create the group. To find a list of locations, run `az account list-locations`. Use the *name* property (without spaces) of the returned result. This parameter is optional if your resource group already exists. |
+ | AADAppDisplayName |The Azure Active Directory application display name. If an existing Azure AD application is not found, a new one with this name will be created. This parameter is optional if the Azure AD application already exists. |
+ | AADAppIdentifierUri |The URI for the Azure AD app. If an existing Azure AD app is not found, a new one with this URI will be created. For example, `https://immersivereaderaad-mycompany`. |
+ | AADAppClientSecret |A password you create that will be used later to authenticate when acquiring a token to launch the Immersive Reader. The password must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character. To manage Azure AD application client secrets after you've created this resource please visit https://portal.azure.com and go to Home -> Azure Active Directory -> App Registrations -> `[AADAppDisplayName]` -> Certificates and Secrets blade -> Client Secrets section (as shown in the "Manage your Azure AD application secrets" screenshot below). |
+ | AADAppClientSecretExpiration |The date or datetime after which your `[AADAppClientSecret]` will expire (e.g. '2020-12-31T11:59:59+00:00' or '2020-12-31'). |
+
+ Manage your Azure AD application secrets
+
+ ![Azure Portal Certificates and Secrets blade](./media/client-secrets-blade.png)
+
+1. Copy the JSON output into a text file for later use. The output should look like the following.
+
+ ```json
+ {
+ "TenantId": "...",
+ "ClientId": "...",
+ "ClientSecret": "...",
+ "Subdomain": "..."
+ }
+ ```
+
+## Next steps
+
+* View the [Node.js quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](./how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Customize Launch Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-customize-launch-button.md
+
+ Title: "Edit the Immersive Reader launch button"
+
+description: This article will show you how to customize the button that launches the Immersive Reader.
+++++++ Last updated : 03/08/2021+++
+# How to customize the Immersive Reader button
+
+This article demonstrates how to customize the button that launches the Immersive Reader to fit the needs of your application.
+
+## Add the Immersive Reader button
+
+The Immersive Reader SDK provides default styling for the button that launches the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling.
+
+```html
+<div class='immersive-reader-button'></div>
+```
+
+## Customize the button style
+
+Use the `data-button-style` attribute to set the style of the button. The allowed values are `icon`, `text`, and `iconAndText`. The default value is `icon`.
+
+### Icon button
+
+```html
+<div class='immersive-reader-button' data-button-style='icon'></div>
+```
+
+This renders the following:
+
+![This is the rendered icon button](./media/button-icon.png)
+
+### Text button
+
+```html
+<div class='immersive-reader-button' data-button-style='text'></div>
+```
+
+This renders the following:
+
+![This is the rendered text button.](./media/button-text.png)
+
+### Icon and text button
+
+```html
+<div class='immersive-reader-button' data-button-style='iconAndText'></div>
+```
+
+This renders the following:
+
+![This is the rendered icon and text button.](./media/button-icon-and-text.png)
+
+## Customize the button text
+
+Configure the language and the alt text for the button using the `data-locale` attribute. The default language is English.
+
+```html
+<div class='immersive-reader-button' data-locale='fr-FR'></div>
+```
+
+## Customize the size of the icon
+
+The size of the Immersive Reader icon can be configured using the `data-icon-px-size` attribute. This sets the size of the icon in pixels. The default size is 20px.
+
+```html
+<div class='immersive-reader-button' data-icon-px-size='50'></div>
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services How To Launch Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-launch-immersive-reader.md
+
+ Title: "How to launch the Immersive Reader"
+
+description: Learn how to launch the Immersive Reader using JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
+++++ Last updated : 03/04/2021++
+zone_pivot_groups: immersive-reader-how-to-guides
++
+# How to launch the Immersive Reader
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader using JavaScript, Python, Android, or iOS.
++++++++++++
applied-ai-services How To Multiple Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-multiple-resources.md
+
+ Title: "Integrate multiple Immersive Reader resources"
+
+description: In this tutorial, you'll create a Node.js application that launches the Immersive Reader using multiple Immersive Reader resources.
++++++ Last updated : 01/14/2020++
+#Customer intent: As a developer, I want to learn more about the Immersive Reader SDK so that I can fully utilize all that the SDK has to offer.
++
+# Integrate multiple Immersive Reader resources
+
+In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. In the [quickstart](./quickstarts/client-libraries.md), you learned how to use Immersive Reader with a single resource. This tutorial covers how to integrate multiple Immersive Reader resources in the same application. In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create multiple Immersive Reader resources under an existing resource group
+> * Launch the Immersive Reader using multiple resources
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+* Follow the [quickstart](./quickstarts/client-libraries.md?pivots=programming-language-nodejs) to create a web app that launches the Immersive Reader with Node.js. In that quickstart, you configure a single Immersive Reader resource. We will build on top of that in this tutorial.
+
+## Create the Immersive Reader resources
+
+Follow [these instructions](./how-to-create-immersive-reader.md) to create each Immersive Reader resource. The **Create-ImmersiveReaderResource** script has `ResourceName`, `ResourceSubdomain`, and `ResourceLocation` as parameters. These should be unique for each resource being created. The remaining parameters should be the same as what you used when setting up your first Immersive Reader resource. This way, each resource can be linked to the same Azure resource group and Azure AD application.
+
+The example below shows how to create two resources, one in WestUS, and another in EastUS. Notice the unique values for `ResourceName`, `ResourceSubdomain`, and `ResourceLocation`.
+
+```azurepowershell-interactive
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_wus `
+ -ResourceSubdomain resource-subdomain-wus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation westus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+
+Create-ImmersiveReaderResource `
+ -SubscriptionName <SUBSCRIPTION_NAME> `
+ -ResourceName Resource_name_eus `
+ -ResourceSubdomain resource-subdomain-eus `
+ -ResourceSKU <RESOURCE_SKU> `
+ -ResourceLocation eastus `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -ResourceGroupLocation <RESOURCE_GROUP_LOCATION> `
+ -AADAppDisplayName <AAD_APP_DISPLAY_NAME> `
+ -AADAppIdentifierUri <AAD_APP_IDENTIFIER_URI> `
+ -AADAppClientSecret <AAD_APP_CLIENT_SECRET>
+```
+
+## Add resources to environment configuration
+
+In the quickstart, you created an environment configuration file that contains the `TenantId`, `ClientId`, `ClientSecret`, and `Subdomain` parameters. Since all of your resources use the same Azure AD application, we can use the same values for the `TenantId`, `ClientId`, and `ClientSecret`. The only change that needs to be made is to list each subdomain for each resource.
+
+Your new __.env__ file should now look something like the following:
+
+```text
+TENANT_ID={YOUR_TENANT_ID}
+CLIENT_ID={YOUR_CLIENT_ID}
+CLIENT_SECRET={YOUR_CLIENT_SECRET}
+SUBDOMAIN_WUS={YOUR_WESTUS_SUBDOMAIN}
+SUBDOMAIN_EUS={YOUR_EASTUS_SUBDOMAIN}
+```
+
+Be sure not to commit this file into source control, as it contains secrets that should not be made public.
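+
+The route code reads these values through `process.env`. The following is a minimal sketch, assuming the `.env` file is loaded with the [dotenv](https://www.npmjs.com/package/dotenv) package near the top of the app's entry point:
+
+```javascript
+// A minimal sketch, assuming the dotenv package loads the .env file shown above.
+// After this call, the subdomain for each resource is available to the route code.
+require('dotenv').config();
+
+console.log(process.env.SUBDOMAIN_WUS); // {YOUR_WESTUS_SUBDOMAIN}
+console.log(process.env.SUBDOMAIN_EUS); // {YOUR_EASTUS_SUBDOMAIN}
+```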
+
+Next, we're going to modify the _routes\index.js_ file that we created to support our multiple resources. Replace its content with the following code.
+
+As before, this code creates an API endpoint that acquires an Azure AD authentication token using your service principal password. This time, it allows the user to specify a resource location and pass it in as a query parameter. It then returns an object containing the token and the corresponding subdomain.
+
+```javascript
+var express = require('express');
+var router = express.Router();
+var request = require('request');
+
+/* GET home page. */
+router.get('/', function(req, res, next) {
+ res.render('index', { Title: 'Express' });
+});
+
+router.get('/GetTokenAndSubdomain', function(req, res) {
+ try {
+ request.post({
+ headers: {
+ 'content-type': 'application/x-www-form-urlencoded'
+ },
+ url: `https://login.windows.net/${process.env.TENANT_ID}/oauth2/token`,
+ form: {
+ grant_type: 'client_credentials',
+ client_id: process.env.CLIENT_ID,
+ client_secret: process.env.CLIENT_SECRET,
+ resource: 'https://cognitiveservices.azure.com/'
+ }
+ },
+ function(err, resp, tokenResult) {
+ if (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+
+ var tokenResultParsed = JSON.parse(tokenResult);
+
+ if (tokenResultParsed.error) {
+ console.log(tokenResult);
+ return res.send({error : "Unable to acquire Azure AD token. Check the debugger for more information."})
+ }
+
+ var token = tokenResultParsed.access_token;
+
+ var subdomain = "";
+ var region = req.query && req.query.region;
+ switch (region) {
+ case "eus":
+ subdomain = process.env.SUBDOMAIN_EUS
+ break;
+ case "wus":
+ default:
+ subdomain = process.env.SUBDOMAIN_WUS
+ }
+
+ return res.send({token, subdomain});
+ });
+ } catch (err) {
+ console.log(err);
+ return res.status(500).send('CogSvcs IssueToken error');
+ }
+});
+
+module.exports = router;
+```
+
+The **GetTokenAndSubdomain** API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+
+## Launch the Immersive Reader with sample content
+
+1. Open _views\index.pug_, and replace its content with the following code. This code populates the page with some sample content, and adds two buttons that launch the Immersive Reader: one for the EastUS resource, and another for the WestUS resource.
+
+ ```pug
+ doctype html
+ html
+ head
+ title Immersive Reader Quickstart Node.js
+
+ link(rel='stylesheet', href='https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css')
+
+ // A polyfill for Promise is needed for IE11 support.
+ script(src='https://cdn.jsdelivr.net/npm/promise-polyfill@8/dist/polyfill.min.js')
+
+ script(src='https://contentstorage.onenote.office.net/onenoteltir/immersivereadersdk/immersive-reader-sdk.1.0.0.js')
+ script(src='https://code.jquery.com/jquery-3.3.1.min.js')
+
+ style(type="text/css").
+ .immersive-reader-button {
+ background-color: white;
+ margin-top: 5px;
+ border: 1px solid black;
+ float: right;
+ }
+ body
+ div(class="container")
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("wus")') WestUS Immersive Reader
+ button(class="immersive-reader-button" data-button-style="icon" data-locale="en" onclick='handleLaunchImmersiveReader("eus")') EastUS Immersive Reader
+
+ h1(id="ir-title") About Immersive Reader
+ div(id="ir-content" lang="en-us")
+ p Immersive Reader is a tool that implements proven techniques to improve reading comprehension for emerging readers, language learners, and people with learning differences. The Immersive Reader is designed to make reading more accessible for everyone. The Immersive Reader
+
+ ul
+ li Shows content in a minimal reading view
+ li Displays pictures of commonly used words
+ li Highlights nouns, verbs, adjectives, and adverbs
+ li Reads your content out loud to you
+ li Translates your content into another language
+ li Breaks down words into syllables
+
+ h3 The Immersive Reader is available in many languages.
+
+ p(lang="es-es") El Lector inmersivo está disponible en varios idiomas.
+ p(lang="zh-cn") 沉浸式阅读器支持许多语言
+            p(lang="de-de") Der plastische Reader ist in vielen Sprachen verfügbar.
+ p(lang="ar-eg" dir="rtl" style="text-align:right") يتوفر \"القارئ الشامل\" في العديد من اللغات.
+
+ script(type="text/javascript").
+ function getTokenAndSubdomainAsync(region) {
+ return new Promise(function (resolve, reject) {
+ $.ajax({
+ url: "/GetTokenAndSubdomain",
+ type: "GET",
+ data: {
+ region: region
+ },
+ success: function (data) {
+ if (data.error) {
+ reject(data.error);
+ } else {
+ resolve(data);
+ }
+ },
+ error: function (err) {
+ reject(err);
+ }
+ });
+ });
+ }
+
+ function handleLaunchImmersiveReader(region) {
+ getTokenAndSubdomainAsync(region)
+ .then(function (response) {
+ const token = response["token"];
+ const subdomain = response["subdomain"];
+ // Learn more about chunk usage and supported MIME types https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#chunk
+ const data = {
+ Title: $("#ir-title").text(),
+ chunks: [{
+ content: $("#ir-content").html(),
+ mimeType: "text/html"
+ }]
+ };
+ // Learn more about options https://docs.microsoft.com/azure/cognitive-services/immersive-reader/reference#options
+ const options = {
+ "onExit": exitCallback,
+ "uiZIndex": 2000
+ };
+ ImmersiveReader.launchAsync(token, subdomain, data, options)
+ .catch(function (error) {
+ alert("Error in launching the Immersive Reader. Check the console.");
+ console.log(error);
+ });
+ })
+ .catch(function (error) {
+ alert("Error in getting the Immersive Reader token and subdomain. Check the console.");
+ console.log(error);
+ });
+ }
+
+ function exitCallback() {
+ console.log("This is the callback function. It is executed when the Immersive Reader closes.");
+ }
+ ```
+
+3. Our web app is now ready. Start the app by running:
+
+ ```bash
+ npm start
+ ```
+
+4. Open your browser and navigate to `http://localhost:3000`. You should see the above content on the page. Click on either the **EastUS Immersive Reader** button or the **WestUS Immersive Reader** button to launch the Immersive Reader using those respective resources.
+
+## Next steps
+
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](./reference.md)
+* View code samples on [GitHub](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/advanced-csharp)
applied-ai-services How To Prepare Html https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-prepare-html.md
+
+ Title: "How to prepare HTML content for Immersive Reader"
+
+description: Learn how to launch the Immersive Reader using HTML, JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
+++++ Last updated : 03/04/2021+++
+# How to prepare HTML content for Immersive Reader
+
+This article shows you how to structure your HTML and retrieve the content, so that it can be used by Immersive Reader.
+
+## Prepare the HTML content
+
+Place the content that you want to render in the Immersive Reader inside of a container element. Be sure that the container element has a unique `id`. The Immersive Reader supports basic HTML elements; see the [reference](reference.md#html-support) for more information.
+
+```html
+<div id='immersive-reader-content'>
+ <b>Bold</b>
+ <i>Italic</i>
+ <u>Underline</u>
+ <strike>Strikethrough</strike>
+ <code>Code</code>
+ <sup>Superscript</sup>
+ <sub>Subscript</sub>
+ <ul><li>Unordered lists</li></ul>
+ <ol><li>Ordered lists</li></ol>
+</div>
+```
+
+## Get the HTML content in JavaScript
+
+Use the `id` of the container element to get the HTML content in your JavaScript code.
+
+```javascript
+const htmlContent = document.getElementById('immersive-reader-content').innerHTML;
+```
+
+## Launch the Immersive Reader with your HTML content
+
+When calling `ImmersiveReader.launchAsync`, set the chunk's `mimeType` property to `text/html` to enable rendering HTML.
+
+```javascript
+const data = {
+ chunks: [{
+ content: htmlContent,
+ mimeType: 'text/html'
+ }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
+```
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](reference.md)
applied-ai-services How To Store User Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to-store-user-preferences.md
+
+ Title: "Store user preferences"
+
+description: This article will show you how to store the user's preferences.
++++++ Last updated : 06/29/2020+++
+# How to store user preferences
+
+This article demonstrates how to store the user's UI settings, formally known as **user preferences**, via the [-preferences](./reference.md#options) and [-onPreferencesChanged](./reference.md#options) Immersive Reader SDK options.
+
+When the [CookiePolicy](./reference.md#cookiepolicy-options) SDK option is set to *Enabled*, the Immersive Reader application stores the **user preferences** (text size, theme color, font, and so on) in cookies, which are local to a specific browser and device. Each time the user launches the Immersive Reader on the same browser and device, it will open with the user's preferences from their last session on that device. However, if the user opens the Immersive Reader on a different browser or device, the settings will initially be configured with the Immersive Reader's default settings, and the user will have to set their preferences again, and so on for each device they use. The `-preferences` and `-onPreferencesChanged` Immersive Reader SDK options provide a way for applications to roam a user's preferences across various browsers and devices, so that the user has a consistent experience wherever they use the application.
+
+First, by supplying the `-onPreferencesChanged` callback SDK option when launching the Immersive Reader application, the Immersive Reader will send a `-preferences` string back to the host application each time the user changes their preferences during the Immersive Reader session. The host application is then responsible for storing the user preferences in their own system. Then, when that same user launches the Immersive Reader again, the host application can retrieve that user's preferences from storage, and supply them as the `-preferences` string SDK option when launching the Immersive Reader application, so that the user's preferences are restored.
+
+This functionality may be used as an alternate means of storing **user preferences** in cases where using cookies is not desirable or feasible.
+
+> [!CAUTION]
+> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
+
+## How to enable storing user preferences
+
+The Immersive Reader SDK [launchAsync](./reference.md#launchasync) `options` parameter contains the `-onPreferencesChanged` callback. This function is called any time the user changes their preferences. The `value` parameter contains a string, which represents the user's current preferences. This string is then stored, for that user, by the host application.
+
+```typescript
+const options = {
+ onPreferencesChanged: (value: string) => {
+ // Store user preferences here
+ }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## How to load user preferences into the Immersive Reader
+
+Pass in the user's preferences to the Immersive Reader using the `-preferences` option. A trivial example to store and load the user's preferences is as follows:
+
+```typescript
+const storedUserPreferences = localStorage.getItem("USER_PREFERENCES");
+let userPreferences = storedUserPreferences === null ? null : storedUserPreferences;
+const options = {
+ preferences: userPreferences,
+ onPreferencesChanged: (value: string) => {
+ userPreferences = value;
+ localStorage.setItem("USER_PREFERENCES", userPreferences);
+ }
+};
+```
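+
+Then, pass this `options` object when launching the Immersive Reader so that the stored preferences are restored. A minimal sketch, reusing the placeholders from the earlier example:
+
+```javascript
+// Launch with the options object defined above; the user's stored preferences are applied.
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```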
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services Display Math https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to/display-math.md
+
+ Title: "Display math in the Immersive Reader"
+
+description: This article will show you how to display math in the Immersive Reader.
++++++ Last updated : 01/14/2020++++
+# How to display math in the Immersive Reader
+
+The Immersive Reader can display math when provided in the form of Mathematical Markup Language ([MathML](https://developer.mozilla.org/docs/Web/MathML)).
+The MIME type can be set through the Immersive Reader [chunk](../reference.md#chunk). See [supported MIME types](../reference.md#supported-mime-types) for more information.
+
+## Send Math to the Immersive Reader
+To send math to the Immersive Reader, supply a chunk containing MathML, and set the MIME type to ```application/mathml+xml```.
+
+For example, if your content were the following:
+
+```html
+<div id='ir-content'>
+ <math xmlns='http://www.w3.org/1998/Math/MathML'>
+ <mfrac>
+ <mrow>
+ <msup>
+ <mi>x</mi>
+ <mn>2</mn>
+ </msup>
+ <mo>+</mo>
+ <mn>3</mn>
+ <mi>x</mi>
+ <mo>+</mo>
+ <mn>2</mn>
+ </mrow>
+ <mrow>
+ <mi>x</mi>
+ <mo>−</mo>
+ <mn>3</mn>
+ </mrow>
+ </mfrac>
+ <mo>=</mo>
+ <mn>4</mn>
+ </math>
+</div>
+```
+
+Then you could display your content by using the following JavaScript.
+
+```javascript
+const data = {
+ Title: 'My Math',
+ chunks: [{
+ content: document.getElementById('ir-content').innerHTML.trim(),
+ mimeType: 'application/mathml+xml'
+ }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, data, YOUR_OPTIONS);
+```
+
+When you launch the Immersive Reader, you should see:
+
+![Math in Immersive Reader](../media/how-tos/1-math.png)
+
+## Next steps
+
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
applied-ai-services Set Cookie Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/how-to/set-cookie-policy.md
+
+ Title: "Set Immersive Reader Cookie Policy"
+
+description: This article will show you how to set the cookie policy for the Immersive Reader.
+++++++ Last updated : 01/06/2020++++
+# How to set the cookie policy for the Immersive Reader
+
+The Immersive Reader will disable cookie usage by default. If you enable cookie usage, then the Immersive Reader may use cookies to maintain user preferences and track feature usage. If you enable cookie usage in the Immersive Reader, please consider the requirements of EU Cookie Compliance Policy. It is the responsibility of the host application to obtain any necessary user consent in accordance with EU Cookie Compliance Policy.
+
+The cookie policy can be set through the Immersive Reader [options](../reference.md#options).
+
+## Enable Cookie Usage
+
+```javascript
+var options = {
+ 'cookiePolicy': ImmersiveReader.CookiePolicy.Enable
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Disable Cookie Usage
+
+```javascript
+var options = {
+ 'cookiePolicy': ImmersiveReader.CookiePolicy.Disable
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+
+## Next steps
+
+* View the [Node.js quickstart](../quickstarts/client-libraries.md?pivots=programming-language-nodejs) to see what else you can do with the Immersive Reader SDK using Node.js
+* View the [Android tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Java or Kotlin for Android
+* View the [iOS tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Swift for iOS
+* View the [Python tutorial](../how-to-launch-immersive-reader.md) to see what else you can do with the Immersive Reader SDK using Python
+* Explore the [Immersive Reader SDK](https://github.com/microsoft/immersive-reader-sdk) and the [Immersive Reader SDK Reference](../reference.md)
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/language-support.md
+
+ Title: Language support - Immersive Reader
+
+description: Learn more about the human languages that are available with Immersive Reader.
++++++ Last updated : 04/13/2020+++
+# Language support for Immersive Reader
+
+This article lists supported human languages for Immersive Reader features.
++
+## Text to speech
+
+| Language | Tag |
+|-|--|
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese | zh |
+| Chinese (China) | zh-CN |
+| Chinese (Hong Kong SAR) | zh-HK |
+| Chinese (Macao SAR) | zh-MO |
+| Chinese (Singapore) | zh-SG |
+| Chinese (Taiwan) | zh-TW |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant |
+| Chinese Traditional (China) | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Belgium) | nl-BE |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (Philippines) | en-PH |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et-EE |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Belgium) | fr-BE |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Irish | ga-IE |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Latvian | lv-LV |
+| Lithuanian | lt-LT |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Maltese | mt-MT |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Turkish | tr |
+| Turkish (Turkey) | tr-TR |
+| Ukrainian | ur-PK |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy-GB |
+
+## Translation
+
+| Language | Tag |
+|-|--|
+| Afrikaans | af |
+| Albanian | sq |
+| Amharic | am |
+| Arabic | ar |
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Armenian | hy |
+| Azerbaijani | az |
+| Bangla | bn |
+| Bosnian | bs |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Burmese | my |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese | zh |
+| Chinese (China) | zh-CN |
+| Chinese (Hong Kong SAR) | zh-HK |
+| Chinese (Macao SAR) | zh-MO |
+| Chinese (Singapore) | zh-SG |
+| Chinese (Taiwan) | zh-TW |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant |
+| Chinese Traditional (China) | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dari (Afghanistan) | prs |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Fijian | fj |
+| Filipino | fil |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Gujarati | gu |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Haitian (Creole) | ht |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hmong Daw | mww |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Irish | ga |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Kannada | kn |
+| Kazakh | kk |
+| Khmer | km |
+| Kiswahili | sw |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Kurdish (Central) | ku |
+| Kurdish (Northern) | kmr |
+| Lao | lo |
+| Latvian | lv |
+| Lithuanian | lt |
+| Malagasy | mg |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Malayalam | ml |
+| Maltese | mt |
+| Maori | mi |
+| Marathi | mr |
+| Nepali | ne |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Odia | or |
+| Pashto (Afghanistan) | ps |
+| Persian | fa |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Punjabi | pa |
+| Querétaro Otomi | otq |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Samoan | sm |
+| Serbian | sr |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tahitian | ty |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Tigrinya | ti |
+| Tongan | to |
+| Turkish | tr |
+| Turkish (Turkey) | tr-TR |
+| Ukrainian | uk |
+| Urdu | ur |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy |
+| Yucatec Maya | yua |
+| Yue Chinese | yue |
++
+## Language detection
+
+| Language | Tag |
+|-|--|
+| Arabic | ar |
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Basque | eu |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Chinese Simplified | zh-Hans |
+| Chinese Simplified (China) | zh-Hans-CN |
+| Chinese Simplified (Singapore) | zh-Hans-SG |
+| Chinese Traditional | zh-Hant-CN |
+| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK |
+| Chinese Traditional (Macao SAR) | zh-Hant-MO |
+| Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Galician | gl |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Hebrew | he |
+| Hebrew (Israel) | he-IL |
+| Hindi | hi |
+| Hindi (India) | hi-IN |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Indonesian | id |
+| Indonesian (Indonesia) | id-ID |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Kazakh | kk |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Latvian | lv |
+| Lithuanian | lt |
+| Malay | ms |
+| Malay (Malaysia) | ms-MY |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Tamil | ta |
+| Tamil (India) | ta-IN |
+| Telugu | te |
+| Telugu (India) | te-IN |
+| Thai | th |
+| Thai (Thailand) | th-TH |
+| Turkish | tr |
+| Turkish (Turkey) | tr-TR |
+| Ukrainian | uk |
+| Vietnamese | vi |
+| Vietnamese (Vietnam) | vi-VN |
+| Welsh | cy |
+
+## Syllabification
+
+| Language | Tag |
+|-|--|
+| Basque | eu |
+| Bulgarian | bg |
+| Bulgarian (Bulgaria) | bg-BG |
+| Catalan | ca |
+| Catalan (Catalan) | ca-ES |
+| Croatian | hr |
+| Croatian (Croatia) | hr-HR |
+| Czech | cs |
+| Czech (Czech Republic) | cs-CZ |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Estonian | et |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| Galician | gl |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Greek | el |
+| Greek (Greece) | el-GR |
+| Hungarian | hu |
+| Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Kazakh | kk |
+| Latvian | lv |
+| Lithuanian | lt |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Portuguese (Portugal) | pt-PT |
+| Romanian | ro |
+| Romanian (Romania) | ro-RO |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Serbian | sr |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Latin) | sr-Latn |
+| Slovak | sk |
+| Slovak (Slovakia) | sk-SK |
+| Slovenian | sl |
+| Slovenian (Slovenia) | sl-SI |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+| Turkish | tr |
+| Turkish (Turkey) | tr-TR |
+| Ukrainian | uk |
+| Welsh | cy |
+
+## Picture dictionary
+
+| Language | Tag |
+|-|--|
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
+
+## Parts of speech
+
+| Language | Tag |
+|-|--|
+| Arabic (Egyptian) | ar-EG |
+| Arabic (Saudi Arabia) | ar-SA |
+| Danish | da |
+| Danish (Denmark) | da-DK |
+| Dutch | nl |
+| Dutch (Netherlands) | nl-NL |
+| English | en |
+| English (Australia) | en-AU |
+| English (Canada) | en-CA |
+| English (Hong Kong SAR) | en-HK |
+| English (India) | en-IN |
+| English (Ireland) | en-IE |
+| English (New Zealand) | en-NZ |
+| English (United Kingdom) | en-GB |
+| English (United States) | en-US |
+| Finnish | fi |
+| Finnish (Finland) | fi-FI |
+| French | fr |
+| French (Canada) | fr-CA |
+| French (France) | fr-FR |
+| French (Switzerland) | fr-CH |
+| German | de |
+| German (Austria) | de-AT |
+| German (Germany) | de-DE |
+| German (Switzerland)| de-CH |
+| Italian | it |
+| Italian (Italy) | it-IT |
+| Japanese | ja |
+| Japanese (Japan) | ja-JP |
+| Korean | ko |
+| Korean (Korea) | ko-KR |
+| Norwegian Bokmal| nb |
+| Norwegian Bokmal (Norway) | nb-NO |
+| Norwegian Nynorsk | nn |
+| Polish | pl |
+| Polish (Poland) | pl-PL |
+| Portuguese | pt |
+| Portuguese (Brazil) | pt-BR |
+| Russian | ru |
+| Russian (Russia) | ru-RU |
+| Spanish | es |
+| Spanish (Latin America) | es-419 |
+| Spanish (Mexico) | es-MX |
+| Spanish (Spain) | es-ES |
+| Swedish | sv |
+| Swedish (Sweden) | sv-SE |
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/overview.md
+
+ Title: What is Azure Immersive Reader?
+
+description: Immersive Reader is a tool that is designed to help people with learning differences or help new readers and language learners with reading comprehension.
+++++++ Last updated : 01/4/2020++
+keywords: readers, language learners, display pictures, improve reading, read content, translate
+#Customer intent: As a developer, I want to learn more about the Immersive Reader, which is a new offering in Cognitive Services, so that I can embed this package of content into a document to accommodate users with reading differences.
++
+# What is Azure Immersive Reader?
+
+[Immersive Reader](https://www.onenote.com/learningtools) is part of [Azure Applied AI Services](../../applied-ai-services/what-are-applied-ai-services.md), and is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications.
+
+This documentation contains the following types of articles:
+
+* **[Quickstarts](quickstarts/client-libraries.md)** are getting-started instructions to guide you through making requests to the service.
+* **[How-to guides](how-to-create-immersive-reader.md)** contain instructions for using the service in more specific or customized ways.
+
+## Use Immersive Reader to improve reading accessibility
+
+Immersive Reader is designed to make reading easier and more accessible for everyone. Let's take a look at a few of Immersive Reader's core features.
+
+### Isolate content for improved readability
+
+Immersive Reader isolates content to improve readability.
+
+ ![Isolate content for improved readability with Immersive Reader](./media/immersive-reader.png)
+
+### Display pictures for common words
+
+For commonly used terms, the Immersive Reader will display a picture.
+
+ ![Picture Dictionary with Immersive Reader](./media/picture-dictionary.png)
+
+### Highlight parts of speech
+
+Immersive Reader can be used to help learners understand parts of speech and grammar by highlighting verbs, nouns, pronouns, and more.
+
+ ![Show parts of speech with Immersive Reader](./media/parts-of-speech.png)
+
+### Read content aloud
+
+Speech synthesis (or text-to-speech) is baked into the Immersive Reader service, which lets your readers select text to be read aloud.
+
+ ![Read text aloud with Immersive Reader](./media/read-aloud.png)
+
+### Translate content in real-time
+
+Immersive Reader can translate text into many languages in real-time. This is helpful to improve comprehension for readers learning a new language.
+
+ ![Translate text with Immersive Reader](./media/translation.png)
+
+### Split words into syllables
+
+With Immersive Reader you can break words into syllables to improve readability or to sound out new words.
+
+ ![Break words into syllables with Immersive Reader](./media/syllabification.png)
+
+## How does Immersive Reader work?
+
+Immersive Reader is a standalone web application. When invoked using the Immersive Reader client library, it is displayed on top of your existing web application in an `iframe`. When your web application calls the Immersive Reader service, you specify the content to show the reader. The Immersive Reader client library handles the creation and styling of the `iframe` and communication with the Immersive Reader backend service. The Immersive Reader service processes the content for parts of speech, text to speech, translation, and more.
+
+## Get started with Immersive Reader
+
+The Immersive Reader client library is available in C#, JavaScript, Java (Android), Kotlin (Android), and Swift (iOS). Get started with:
+
+* [Quickstart: Use the Immersive Reader client library](quickstarts/client-libraries.md)
applied-ai-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/quickstarts/client-libraries.md
+
+ Title: "Quickstart: Immersive Reader client library"
+
+description: "The Immersive Reader client library makes it easy to integrate the Immersive Reader service into your web applications to improve reading comprehension. In this quickstart, you'll learn how to use Immersive Reader for text selection, recognizing parts of speech, reading selected text out loud, translation, and more."
+++
+zone_pivot_groups: programming-languages-set-twenty
+++ Last updated : 03/08/2021++
+keywords: display pictures, parts of speech, read selected text, translate words, reading comprehension
++
+# Quickstart: Get started with Immersive Reader
+++++++++++++++
applied-ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/reference.md
+
+ Title: "Immersive Reader SDK Reference"
+
+description: The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
+++++++ Last updated : 06/20/2019+++
+# Immersive Reader JavaScript SDK Reference (v1.1)
+
+The Immersive Reader SDK contains a JavaScript library that allows you to integrate the Immersive Reader into your application.
+
+## Functions
+
+The SDK exposes the following functions:
+
+- [`ImmersiveReader.launchAsync(token, subdomain, content, options)`](#launchasync)
+
+- [`ImmersiveReader.close()`](#close)
+
+- [`ImmersiveReader.renderButtons(options)`](#renderbuttons)
+
+<br>
+
+## launchAsync
+
+Launches the Immersive Reader within an `iframe` in your web application. Note that the size of your content is limited to a maximum of 50 MB.
+
+```typescript
+launchAsync(token: string, subdomain: string, content: Content, options?: Options): Promise<LaunchResponse>;
+```
+
+#### launchAsync Parameters
+
+| Name | Type | Description |
+| - | - | |
+| `token` | string | The Azure AD authentication token. See [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md) for more details. |
+| `subdomain` | string | The custom subdomain of your Immersive Reader resource in Azure. See [How-To Create an Immersive Reader Resource](./how-to-create-immersive-reader.md) for more details. |
+| `content` | [Content](#content) | An object containing the content to be shown in the Immersive Reader. |
+| `options` | [Options](#options) | Options for configuring certain behaviors of the Immersive Reader. Optional. |
+
+#### Returns
+
+Returns a `Promise<LaunchResponse>`, which resolves when the Immersive Reader is loaded. The `Promise` resolves to a [`LaunchResponse`](#launchresponse) object.
+
+#### Exceptions
+
+The returned `Promise` will be rejected with an [`Error`](#error) object if the Immersive Reader fails to load. For more information, see the [error codes](#error-codes).
+
+<br>
+
+## close
+
+Closes the Immersive Reader.
+
+An example use case for this function is if the exit button is hidden by setting ```hideExitButton: true``` in [options](#options). Then, a different button (for example a mobile header's back arrow) can call this ```close``` function when it is clicked.
+
+```typescript
+close(): void;
+```
+
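+For example, a minimal sketch that closes the Immersive Reader from a custom control instead of the built-in exit button (the `close-reader-button` element is hypothetical, and the reader is assumed to have been launched with `hideExitButton: true`):
+
+```javascript
+// Hypothetical custom control that closes the Immersive Reader.
+// Assumes the reader was launched with hideExitButton: true in its options.
+document.getElementById('close-reader-button').addEventListener('click', function () {
+    ImmersiveReader.close();
+});
+```
+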
+<br>
+
+## Immersive Reader Launch Button
+
+The SDK provides default styling for the button for launching the Immersive Reader. Use the `immersive-reader-button` class attribute to enable this styling. See [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) for more details.
+
+```html
+<div class='immersive-reader-button'></div>
+```
+
+#### Optional attributes
+
+Use the following attributes to configure the look and feel of the button.
+
+| Attribute | Description |
+| | -- |
+| `data-button-style` | Sets the style of the button. Can be `icon`, `text`, or `iconAndText`. Defaults to `icon`. |
+| `data-locale` | Sets the locale. For example, `en-US` or `fr-FR`. Defaults to English `en`. |
+| `data-icon-px-size` | Sets the size of the icon in pixels. Defaults to 20px. |
+
+<br>
+
+## renderButtons
+
+The ```renderButtons``` function isn't necessary if you are using the [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md) guidance.
+
+This function styles and updates the document's Immersive Reader button elements. If ```options.elements``` is provided, then the buttons will be rendered within each element provided in ```options.elements```. Using the ```options.elements``` parameter is useful when you have multiple sections in your document on which to launch the Immersive Reader, and want a simplified way to render multiple buttons with the same styling, or want to render the buttons with a simple and consistent design pattern. To use this function with the [renderButtons options](#renderbuttons-options) parameter, call ```ImmersiveReader.renderButtons(options: RenderButtonsOptions);``` on page load as demonstrated in the below code snippet. Otherwise, the buttons will be rendered within the document's elements which have the class ```immersive-reader-button``` as shown in [How-To Customize the Immersive Reader button](./how-to-customize-launch-button.md).
+
+```typescript
+// This snippet assumes there are two empty div elements in
+// the page HTML, button1 and button2.
+const btn1: HTMLDivElement = document.getElementById('button1');
+const btn2: HTMLDivElement = document.getElementById('button2');
+const btns: HTMLDivElement[] = [btn1, btn2];
+ImmersiveReader.renderButtons({elements: btns});
+```
+
+See the above [Optional Attributes](#optional-attributes) for more rendering options. To use these options, add any of the option attributes to each ```HTMLDivElement``` in your page HTML.
+
+```typescript
+renderButtons(options?: RenderButtonsOptions): void;
+```
+
+#### renderButtons Parameters
+
+| Name | Type | Description |
+| - | - | |
+| `options` | [renderButtons options](#renderbuttons-options) | Options for configuring certain behaviors of the renderButtons function. Optional. |
+
+### renderButtons Options
+
+Options for rendering the Immersive Reader buttons.
+
+```typescript
+{
+ elements: HTMLDivElement[];
+}
+```
+
+#### renderButtons Options Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| elements | HTMLDivElement[] | Elements to render the Immersive Reader buttons in. |
+
+##### `elements`
+```Parameters
+Type: HTMLDivElement[]
+Required: false
+```
+
+<br>
+
+## LaunchResponse
+
+Contains the response from the call to `ImmersiveReader.launchAsync`. Note that a reference to the `iframe` that contains the Immersive Reader can be accessed via `container.firstChild`.
+
+```typescript
+{
+ container: HTMLDivElement;
+ sessionId: string;
+}
+```
+
+#### LaunchResponse Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| container | HTMLDivElement | HTML element which contains the Immersive Reader iframe. |
+| sessionId | String | Globally unique identifier for this session, used for debugging. |
+
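+For example, a minimal sketch of reading the `LaunchResponse` after a successful launch:
+
+```javascript
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, YOUR_OPTIONS)
+    .then(function (launchResponse) {
+        // The iframe hosting the Immersive Reader is the first child of the container element.
+        const iframe = launchResponse.container.firstChild;
+        console.log(iframe);
+        console.log('Immersive Reader session: ' + launchResponse.sessionId);
+    });
+```
+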
+## Error
+
+Contains information about an error.
+
+```typescript
+{
+ code: string;
+ message: string;
+}
+```
+
+#### Error Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| code | String | One of a set of error codes. See [Error codes](#error-codes). |
+| message | String | Human-readable representation of the error. |
+
+#### Error codes
+
+| Code | Description |
+| - | -- |
+| BadArgument | Supplied argument is invalid; see the `message` parameter of the [Error](#error). |
+| Timeout | The Immersive Reader failed to load within the specified timeout. |
+| TokenExpired | The supplied token is expired. |
+| Throttled | The call rate limit has been exceeded. |
+
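+For example, a minimal sketch of handling a rejected launch:
+
+```javascript
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, YOUR_OPTIONS)
+    .catch(function (error) {
+        // error has the { code, message } shape described above.
+        if (error.code === 'TokenExpired') {
+            // For example, acquire a fresh token here and try again.
+        }
+        console.log(error.code + ': ' + error.message);
+    });
+```
+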
+<br>
+
+## Types
+
+### Content
+
+Contains the content to be shown in the Immersive Reader.
+
+```typescript
+{
+ title?: string;
+ chunks: Chunk[];
+}
+```
+
+#### Content Parameters
+
+| Name | Type | Description |
+| - | - | |
+| title | String | Title text shown at the top of the Immersive Reader (optional) |
+| chunks | [Chunk[]](#chunk) | Array of chunks |
+
+##### `title`
+```Parameters
+Type: String
+Required: false
+Default value: "Immersive Reader"
+```
+
+##### `chunks`
+```Parameters
+Type: Chunk[]
+Required: true
+Default value: null
+```
+
+<br>
+
+### Chunk
+
+A single chunk of data, which will be passed into the Content of the Immersive Reader.
+
+```typescript
+{
+ content: string;
+ lang?: string;
+ mimeType?: string;
+}
+```
+
+#### Chunk Parameters
+
+| Name | Type | Description |
+| - | - | |
+| content | String | The string that contains the content sent to the Immersive Reader. |
+| lang | String | Language of the text, the value is in IETF BCP 47 language tag format, e.g. en, es-ES. Language will be detected automatically if not specified. See [Supported Languages](#supported-languages). |
+| mimeType | string | Plain text, MathML, HTML & Microsoft Word DOCX formats are supported. See [Supported MIME types](#supported-mime-types) for more details. |
+
+##### `content`
+```Parameters
+Type: String
+Required: true
+Default value: null
+```
+
+##### `lang`
+```Parameters
+Type: String
+Required: false
+Default value: Automatically detected
+```
+
+##### `mimeType`
+```Parameters
+Type: String
+Required: false
+Default value: "text/plain"
+```
+
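+For example, a minimal sketch of a `Content` object with a single Spanish plain-text chunk (the sample text is illustrative only):
+
+```javascript
+// One chunk of plain text, with its language specified explicitly.
+const content = {
+    title: 'Ejemplo',
+    chunks: [{
+        content: 'La Tierra es el tercer planeta del sistema solar.',
+        lang: 'es-ES',
+        mimeType: 'text/plain'
+    }]
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, content, YOUR_OPTIONS);
+```
+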
+#### Supported MIME types
+
+| MIME Type | Description |
+| | -- |
+| text/plain | Plain text. |
+| text/html | HTML content. [Learn more](#html-support)|
+| application/mathml+xml | Mathematical Markup Language (MathML). [Learn more](./how-to/display-math.md).
+| application/vnd.openxmlformats-officedocument.wordprocessingml.document | Microsoft Word .docx format document.
++
+<br>
+
+## Options
+
+Contains properties that configure certain behaviors of the Immersive Reader.
+
+```typescript
+{
+ uiLang?: string;
+ timeout?: number;
+ uiZIndex?: number;
+ useWebview?: boolean;
+ onExit?: () => any;
+ allowFullscreen?: boolean;
+ hideExitButton?: boolean;
+ cookiePolicy?: CookiePolicy;
+ disableFirstRun?: boolean;
+ readAloudOptions?: ReadAloudOptions;
+ translationOptions?: TranslationOptions;
+ displayOptions?: DisplayOptions;
+ preferences?: string;
+ onPreferencesChanged?: (value: string) => any;
+ customDomain?: string;
+}
+```
+
+#### Options Parameters
+
+| Name | Type | Description |
+| - | - | |
+| uiLang | String | Language of the UI, the value is in IETF BCP 47 language tag format, e.g. en, es-ES. Defaults to browser language if not specified. |
+| timeout | Number | Duration (in milliseconds) before [launchAsync](#launchasync) fails with a timeout error (default is 15000 ms). This timeout only applies to the initial launch of the Reader page, where success is observed when the Reader page opens and the spinner starts. Adjustment of the timeout should not be necessary. |
+| uiZIndex | Number | Z-index of the iframe that will be created (default is 1000). |
+| useWebview | Boolean| Use a webview tag instead of an iframe, for compatibility with Chrome Apps (default is false). |
+| onExit | Function | Executes when the Immersive Reader exits. |
+| allowFullscreen | Boolean | The ability to toggle fullscreen (default is true). |
+| hideExitButton | Boolean | Whether or not to hide the Immersive Reader's exit button arrow (default is false). This should only be true if there is an alternative mechanism provided to exit the Immersive Reader (e.g. a mobile toolbar's back arrow). |
+| cookiePolicy | [CookiePolicy](#cookiepolicy-options) | Setting for the Immersive Reader's cookie usage (default is *CookiePolicy.Disable*). It's the responsibility of the host application to obtain any necessary user consent in accordance with EU Cookie Compliance Policy. See [Cookie Policy Options](#cookiepolicy-options). |
+| disableFirstRun | Boolean | Disable the first run experience. |
+| readAloudOptions | [ReadAloudOptions](#readaloudoptions) | Options to configure Read Aloud. |
+| translationOptions | [TranslationOptions](#translationoptions) | Options to configure translation. |
+| displayOptions | [DisplayOptions](#displayoptions) | Options to configure text size, font, etc. |
+| preferences | String | String returned from onPreferencesChanged representing the user's preferences in the Immersive Reader. See [Settings Parameters](#settings-parameters) and [How-To Store User Preferences](./how-to-store-user-preferences.md) for more information. |
+| onPreferencesChanged | Function | Executes when the user's preferences have changed. See [How-To Store User Preferences](./how-to-store-user-preferences.md) for more information. |
+| customDomain | String | Reserved for internal use. Custom domain where the Immersive Reader webapp is hosted (default is null). |
+
+##### `uiLang`
+```Parameters
+Type: String
+Required: false
+Default value: User's browser language
+```
+
+##### `timeout`
+```Parameters
+Type: Number
+Required: false
+Default value: 15000
+```
+
+##### `uiZIndex`
+```Parameters
+Type: Number
+Required: false
+Default value: 1000
+```
+
+##### `onExit`
+```Parameters
+Type: Function
+Required: false
+Default value: null
+```
+
+##### `preferences`
+
+> [!CAUTION]
+> **IMPORTANT** Do not attempt to programmatically change the values of the `-preferences` string sent to and from the Immersive Reader application as this may cause unexpected behavior resulting in a degraded user experience for your customers. Host applications should never assign a custom value to or manipulate the `-preferences` string. When using the `-preferences` string option, use only the exact value that was returned from the `-onPreferencesChanged` callback option.
+
+```Parameters
+Type: String
+Required: false
+Default value: null
+```
+
+##### `onPreferencesChanged`
+```Parameters
+Type: Function
+Required: false
+Default value: null
+```
+
+##### `customDomain`
+```Parameters
+Type: String
+Required: false
+Default value: null
+```
+
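+For example, a minimal sketch of an `Options` object that combines a few of the fields documented above:
+
+```javascript
+// Raise the iframe z-index, enable cookies, and log when the reader exits.
+const options = {
+    uiZIndex: 2000,
+    cookiePolicy: ImmersiveReader.CookiePolicy.Enable,
+    onExit: function () {
+        console.log('The Immersive Reader has closed.');
+    }
+};
+
+ImmersiveReader.launchAsync(YOUR_TOKEN, YOUR_SUBDOMAIN, YOUR_DATA, options);
+```
+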
+<br>
+
+## ReadAloudOptions
+
+```typescript
+type ReadAloudOptions = {
+ voice?: string;
+ speed?: number;
+ autoplay?: boolean;
+};
+```
+
+#### ReadAloudOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| voice | String | Voice, either "Female" or "Male". Note that not all languages support both genders. |
+| speed | Number | Playback speed, must be between 0.5 and 2.5, inclusive. |
+| autoplay | Boolean | Automatically start Read Aloud when the Immersive Reader loads. |
+
+##### `voice`
+```Parameters
+Type: String
+Required: false
+Default value: "Female" or "Male" (determined by language)
+Values available: "Female", "Male"
+```
+
+##### `speed`
+```Parameters
+Type: Number
+Required: false
+Default value: 1
+Values available: 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5
+```
+
+> [!NOTE]
+> Due to browser limitations, autoplay is not supported in Safari.
+
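+For example, a minimal sketch that starts Read Aloud automatically with a male voice at 1.5x speed:
+
+```javascript
+// Values taken from the voice and speed options documented above.
+const options = {
+    readAloudOptions: {
+        voice: 'Male',
+        speed: 1.5,
+        autoplay: true
+    }
+};
+```
+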
+<br>
+
+## TranslationOptions
+
+```typescript
+type TranslationOptions = {
+ language: string;
+ autoEnableDocumentTranslation?: boolean;
+ autoEnableWordTranslation?: boolean;
+};
+```
+
+#### TranslationOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| language | String | Sets the translation language, the value is in IETF BCP 47 language tag format, e.g. fr-FR, es-MX, zh-Hans-CN. Required to automatically enable word or document translation. |
+| autoEnableDocumentTranslation | Boolean | Automatically translate the entire document. |
+| autoEnableWordTranslation | Boolean | Automatically enable word translation. |
+
+##### `language`
+```Parameters
+Type: String
+Required: true
+Default value: null
+Values available: See the Supported Languages section
+```
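+
+As a sketch under the same assumptions as the earlier examples, translation might be enabled at launch as shown below; the `language` value must be an IETF BCP 47 tag.
+
+```typescript
+import { launchAsync } from '@microsoft/immersive-reader-sdk';
+
+// token and subdomain are assumed to come from your own backend service.
+async function launchWithTranslation(token: string, subdomain: string): Promise<void> {
+  const translationOptions = {
+    language: 'fr-FR',                   // IETF BCP 47 tag; required for auto-enabling translation
+    autoEnableDocumentTranslation: true, // translate the whole document on launch
+    autoEnableWordTranslation: true      // enable single-word translation
+  };
+
+  await launchAsync(token, subdomain,
+    { title: 'Translation sample', chunks: [{ content: 'Text to translate.', lang: 'en' }] },
+    { translationOptions });
+}
+```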
+
+<br>
+
+## DisplayOptions
+
+```typescript
+type DisplayOptions = {
+ textSize?: number;
+ increaseSpacing?: boolean;
+ fontFamily?: string;
+};
+```
+
+#### DisplayOptions Parameters
+
+| Name | Type | Description |
+| - | - | |
+| textSize | Number | Sets the chosen text size. |
+| increaseSpacing | Boolean | Sets whether text spacing is toggled on or off. |
+| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
+
+##### `textSize`
+```Parameters
+Type: Number
+Required: false
+Default value: 20, 36 or 42 (Determined by screen size)
+Values available: 14, 20, 28, 36, 42, 48, 56, 64, 72, 84, 96
+```
+
+##### `fontFamily`
+```Parameters
+Type: String
+Required: false
+Default value: "Calibri"
+Values available: "Calibri", "Sitka", "ComicSans"
+```
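+
+A sketch of how display options might be supplied at launch, under the same assumptions as the earlier examples:
+
+```typescript
+import { launchAsync } from '@microsoft/immersive-reader-sdk';
+
+// token and subdomain are assumed to come from your own backend service.
+async function launchWithDisplayOptions(token: string, subdomain: string): Promise<void> {
+  const displayOptions = {
+    textSize: 36,            // one of the supported sizes listed above
+    increaseSpacing: true,   // toggle text spacing on
+    fontFamily: 'ComicSans'  // 'Calibri', 'Sitka', or 'ComicSans'
+  };
+
+  await launchAsync(token, subdomain,
+    { title: 'Display sample', chunks: [{ content: 'Text with custom display settings.', lang: 'en' }] },
+    { displayOptions });
+}
+```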
+
+<br>
+
+## CookiePolicy Options
+
+```typescript
+enum CookiePolicy { Disable, Enable }
+```
+
+**The settings listed below are for informational purposes only**. The Immersive Reader stores its settings, or user preferences, in cookies. This *cookiePolicy* option **disables** the use of cookies by default in order to comply with EU Cookie Compliance laws. Should you want to re-enable cookies and restore the default functionality for Immersive Reader user preferences, you will need to ensure that your website or application obtains the proper consent from the user to enable cookies. Then, to re-enable cookies in the Immersive Reader, you must explicitly set the *cookiePolicy* option to *CookiePolicy.Enable* when launching the Immersive Reader. The table below describes what settings the Immersive Reader stores in its cookie when the *cookiePolicy* option is enabled.
+
+#### Settings Parameters
+
+| Setting | Type | Description |
+| - | - | -- |
+| textSize | Number | Sets the chosen text size. |
+| fontFamily | String | Sets the chosen font ("Calibri", "ComicSans", or "Sitka"). |
+| textSpacing | Number | Sets whether text spacing is toggled on or off. |
+| formattingEnabled | Boolean | Sets whether HTML formatting is toggled on or off. |
+| theme | String | Sets the chosen theme (for example, "Light" or "Dark"). |
+| syllabificationEnabled | Boolean | Sets whether syllabification is toggled on or off. |
+| nounHighlightingEnabled | Boolean | Sets whether noun highlighting is toggled on or off. |
+| nounHighlightingColor | String | Sets the chosen noun highlighting color. |
+| verbHighlightingEnabled | Boolean | Sets whether verb highlighting is toggled on or off. |
+| verbHighlightingColor | String | Sets the chosen verb highlighting color. |
+| adjectiveHighlightingEnabled | Boolean | Sets whether adjective highlighting is toggled on or off. |
+| adjectiveHighlightingColor | String | Sets the chosen adjective highlighting color. |
+| adverbHighlightingEnabled | Boolean | Sets whether adverb highlighting is toggled on or off. |
+| adverbHighlightingColor | String | Sets the chosen adverb highlighting color. |
+| pictureDictionaryEnabled | Boolean | Sets whether Picture Dictionary is toggled on or off. |
+| posLabelsEnabled | Boolean | Sets whether the superscript text label of each highlighted Part of Speech is toggled on or off. |
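+
+Assuming your site has obtained the required cookie consent, the sketch below shows one way to enable cookies and also persist the `preferences` string yourself. It assumes the SDK exports the `CookiePolicy` enum shown above, and it stores only the exact value handed to `onPreferencesChanged`, per the caution earlier in this reference.
+
+```typescript
+import { launchAsync, CookiePolicy } from '@microsoft/immersive-reader-sdk';
+
+// token and subdomain are assumed to come from your own backend service.
+async function launchWithStoredSettings(token: string, subdomain: string): Promise<void> {
+  // Load a previously stored preferences string, if any; pass it through unmodified.
+  const storedPreferences = window.localStorage.getItem('ir-preferences') ?? undefined;
+
+  await launchAsync(token, subdomain,
+    { title: 'Preferences sample', chunks: [{ content: 'Sample text.', lang: 'en' }] },
+    {
+      cookiePolicy: CookiePolicy.Enable, // only after obtaining user consent for cookies
+      preferences: storedPreferences,
+      onPreferencesChanged: (value: string) => {
+        // Store the exact value from the callback; never construct or edit it yourself.
+        window.localStorage.setItem('ir-preferences', value);
+      }
+    });
+}
+```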
+
+<br>
+
+## Supported Languages
+
+The translation feature of Immersive Reader supports many languages. See [Language Support](./language-support.md) for more details.
+
+<br>
+
+## HTML support
+
+When formatting is enabled, the following content will be rendered as HTML in the Immersive Reader.
+
+| HTML | Supported Content |
+| | -- |
+| Font Styles | Bold, Italic, Underline, Code, Strikethrough, Superscript, Subscript |
+| Unordered Lists | Disc, Circle, Square |
+| Ordered Lists | Decimal, Upper-Alpha, Lower-Alpha, Upper-Roman, Lower-Roman |
+
+Unsupported tags will be rendered comparably. Images and tables are currently not supported.
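+
+As a sketch under the same assumptions as the earlier examples, HTML content can be supplied as a chunk whose `mimeType` is `text/html`, using the chunk shape (`content`, `lang`, `mimeType`) described earlier in this reference:
+
+```typescript
+import { launchAsync } from '@microsoft/immersive-reader-sdk';
+
+// token and subdomain are assumed to come from your own backend service.
+async function launchWithHtml(token: string, subdomain: string): Promise<void> {
+  const content = {
+    title: 'HTML sample',
+    chunks: [{
+      content: '<p>This is <b>bold</b> and <i>italic</i> text.</p><ul><li>First item</li><li>Second item</li></ul>',
+      lang: 'en',
+      mimeType: 'text/html' // tells the Immersive Reader to render this chunk as HTML when formatting is enabled
+    }]
+  };
+
+  await launchAsync(token, subdomain, content, {});
+}
+```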
+
+<br>
+
+## Browser support
+
+Use the most recent versions of the following browsers for the best experience with the Immersive Reader.
+
+* Microsoft Edge
+* Internet Explorer 11
+* Google Chrome
+* Mozilla Firefox
+* Apple Safari
+
+<br>
+
+## Next steps
+
+* Explore the [Immersive Reader SDK on GitHub](https://github.com/microsoft/immersive-reader-sdk)
+* [Quickstart: Create a web app that launches the Immersive Reader (C#)](./quickstarts/client-libraries.md?pivots=programming-language-csharp)
applied-ai-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/release-notes.md
+
+ Title: "Immersive Reader SDK Release Notes"
+
+description: Learn more about what's new in the Immersive Reader JavaScript SDK.
+ Last updated : 10/12/2020
+# Immersive Reader JavaScript SDK Release Notes
+
+## Version 1.1.0
+
+This release contains new features, security vulnerability fixes, bug fixes, updates to code samples, and configuration options.
+
+#### New Features
+
+* Enable saving and loading user preferences across different browsers and devices
+* Enable configuring default display options
+* Add option to set the translation language, enable word translation, and enable document translation when launching Immersive Reader
+* Add support for configuring Read Aloud via options
+* Add ability to disable first run experience
+* Add ImmersiveReaderView for UWP
+
+#### Improvements
+
+* Update the Android code sample HTML to work with the latest SDK
+* Update launch response to return the number of characters processed
+* Update code samples to use v1.1.0
+* Do not allow launchAsync to be called when already loading
+* Check for invalid content by ignoring messages where the data is not a string
+* Wrap call to window in an if clause to check browser support of Promise
+
+#### Fixes
+
+* Fix dependabot by removing yarn.lock from gitignore
+* Fix security vulnerability by upgrading pug to v3.0.0 in quickstart-nodejs code sample
+* Fix multiple security vulnerabilities by upgrading Jest and TypeScript packages
+* Fix a security vulnerability by upgrading Microsoft.IdentityModel.Clients.ActiveDirectory to v5.2.0
+
+<br>
+
+## Version 1.0.0
+
+This release contains breaking changes, new features, code sample improvements, and bug fixes.
+
+#### Breaking Changes
+
+* Require an Azure AD token and subdomain, and deprecate tokens used in previous versions.
+* Set CookiePolicy to disabled. Retention of user preferences is disabled by default. The Reader launches with default settings every time, unless the CookiePolicy is set to enabled.
+
+#### New Features
+
+* Add support to enable or disable cookies
+* Add Android Kotlin quick start code sample
+* Add Android Java quick start code sample
+* Add Node.js quick start code sample
+
+#### Improvements
+
+* Update Node.js advanced README.md
+* Change Python code sample from advanced to quick start
+* Move iOS Swift code sample into js/samples
+* Update code samples to use v1.0.0
+
+#### Fixes
+
+* Fix for Node.js advanced code sample
+* Add missing files for advanced-csharp-multiple-resources
+* Remove en-us from hyperlinks
+
+<br>
+
+## Version 0.0.3
+
+This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
+
+#### New Features
+
+* Add iOS Swift code sample
+* Add C# advanced code sample demonstrating use of multiple resources
+* Add support to disable the full screen toggle feature
+* Add support to hide the Immersive Reader application exit button
+* Add a callback function that may be used by the host application upon exiting the Immersive Reader
+* Update code samples to use Azure Active Directory Authentication
+
+#### Improvements
+
+* Update C# advanced code sample to include Word document
+* Update code samples to use v0.0.3
+
+#### Fixes
+
+* Upgrade lodash to version 4.17.14 to fix security vulnerability
+* Update C# MSAL library to fix security vulnerability
+* Upgrade mixin-deep to version 1.3.2 to fix security vulnerability
+* Upgrade jest, webpack and webpack-cli which were using vulnerable versions of set-value and mixin-deep to fix security vulnerability
+
+<br>
+
+## Version 0.0.2
+
+This release contains new features, improvements to code samples, security vulnerability fixes, and bug fixes.
+
+#### New Features
+
+* Add Python advanced code sample
+* Add Java quick start code sample
+* Add simple code sample
+
+#### Improvements
+
+* Rename resourceName to cogSvcsSubdomain
+* Move secrets out of code and use environment variables
+* Update code samples to use v0.0.2
+
+#### Fixes
+
+* Fix Immersive Reader button accessibility bugs
+* Fix broken scrolling
+* Upgrade handlebars package to version 4.1.2 to fix security vulnerability
+* Fix bugs in SDK unit tests
+* Fix JavaScript Internet Explorer 11 compatibility bugs
+* Update SDK URLs
+
+<br>
+
+## Version 0.0.1
+
+The initial release of the Immersive Reader JavaScript SDK.
+
+* Add Immersive Reader JavaScript SDK
+* Add support to specify the UI language
+* Add a timeout to determine when the launchAsync function should fail with a timeout error
+* Add support to specify the z-index of the Immersive Reader iframe
+* Add support to use a webview tag instead of an iframe, for compatibility with Chrome Apps
+* Add SDK unit tests
+* Add Node.js advanced code sample
+* Add C# advanced code sample
+* Add C# quick start code sample
+* Add package configuration, Yarn and other build files
+* Add git configuration files
+* Add README.md files to code samples and SDK
+* Add MIT License
+* Add Contributor instructions
+* Add static icon button SVG assets
+
+## Next steps
+
+Get started with Immersive Reader:
+
+* Read the [Immersive Reader client library Reference](./reference.md)
+* Explore the [Immersive Reader client library on GitHub](https://github.com/microsoft/immersive-reader-sdk)
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
+
+ Title: "Tutorial: Create an iOS app that takes a photo and launches it in the Immersive Reader (Swift)"
+
+description: In this tutorial, you will build an iOS app from scratch and add the Picture to Immersive Reader functionality.
+ Last updated : 01/14/2020
+#Customer intent: As a developer, I want to integrate two Cognitive Services, the Immersive Reader and the Read API into my iOS application so that I can view any text from a photo in the Immersive Reader.
++
+# Tutorial: Create an iOS app that launches the Immersive Reader with content from a photo (Swift)
+
+The [Immersive Reader](https://www.onenote.com/learningtools) is an inclusively designed tool that implements proven techniques to improve reading comprehension.
+
+The [Computer Vision Cognitive Services Read API](../../cognitive-services/computer-vision/overview-ocr.md) detects text content in an image using Microsoft's latest recognition models and converts the identified text into a machine-readable character stream.
+
+In this tutorial, you will build an iOS app from scratch and integrate the Read API, and the Immersive Reader by using the Immersive Reader SDK. A full working sample of this tutorial is available [here](https://github.com/microsoft/immersive-reader-sdk/tree/master/js/samples/ios).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
+
+## Prerequisites
+
+* [Xcode](https://apps.apple.com/us/app/xcode/id497799835?mt=12)
+* An Immersive Reader resource configured for Azure Active Directory authentication. Follow [these instructions](./how-to-create-immersive-reader.md) to get set up. You will need some of the values created here when configuring the sample project properties. Save the output of your session into a text file for future reference.
+* Usage of this sample requires an Azure subscription to the Computer Vision Cognitive Service. [Create a Computer Vision Cognitive Service resource in the Azure portal](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision).
+
+## Create an Xcode project
+
+Create a new project in Xcode.
+
+![New Project](./media/ios/xcode-create-project.png)
+
+Choose **Single View App**.
+
+![New Single View App](./media/ios/xcode-single-view-app.png)
+
+## Get the SDK CocoaPod
+The easiest way to use the Immersive Reader SDK is via CocoaPods. To install via CocoaPods:
+1. [Install CocoaPods](http://guides.cocoapods.org/using/getting-started.html) - Follow the getting started guide to install CocoaPods.
+2. Create a Podfile by running `pod init` in your Xcode project's root directory.
+3. Add the CocoaPod to your Podfile by adding `pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'`. Your Podfile should look like the following, with your target's name replacing picture-to-immersive-reader-swift:
+ ```ruby
+ platform :ios, '9.0'
+
+ target 'picture-to-immersive-reader-swift' do
+ use_frameworks!
+ # Pods for picture-to-immersive-reader-swift
+ pod 'immersive-reader-sdk', :git => 'https://github.com/microsoft/immersive-reader-sdk.git'
+ end
+ ```
+4. In the terminal, in the directory of your Xcode project, run the command `pod install` to install the Immersive Reader SDK pod.
+5. Add `import immersive_reader_sdk` to all files that need to reference the SDK.
+6. Be sure to open the project using the `.xcworkspace` file, not the `.xcodeproj` file.
+
+## Acquire an Azure AD authentication token
+
+You need some values from the Azure AD authentication configuration prerequisite step above for this part. Refer back to the text file you saved of that session.
+
+````text
+TenantId => Azure subscription TenantId
+ClientId => Azure AD ApplicationId
+ClientSecret => Azure AD Application Service Principal password
+Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with the Azure CLI or PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
+````
+
+In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding your values where applicable. Keep this file local to your machine and don't commit it to source control, because it contains secrets that shouldn't be made public. We recommend that you don't keep secrets in your app. Instead, use a backend service to obtain the token, so the secrets stay outside the app and off the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
+
+## Set up the app to run without a storyboard
+
+Open AppDelegate.swift and replace the file with the following code.
+
+## Add functionality for taking and uploading photos
+
+Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code.
+
+## Build and run the app
+
+Set the archive scheme in Xcode by selecting a simulator or device target.
+![Archive scheme](./media/ios/xcode-archive-scheme.png)<br/>
+![Select Target](./media/ios/xcode-select-target.png)
+
+In Xcode, press Cmd + R or select the play button to run the project. The app should launch on the specified simulator or device.
+
+In your app, you should see:
+
+![Sample app](./media/ios/picture-to-immersive-reader-ipad-app.png)
+
+Inside the app, take or upload a photo of text by pressing the 'Take Photo' button or the 'Choose Photo from Library' button. The Immersive Reader then launches and displays the text from the photo.
+
+![Immersive Reader](./media/ios/picture-to-immersive-reader-ipad.png)
+
+## Next steps
+
+* Explore the [Immersive Reader SDK Reference](./reference.md)
applied-ai-services What Are Applied Ai Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/what-are-applied-ai-services.md
Unlock valuable information lying latent in all your content in order to perform
Enhance reading comprehension and achievement with AI. Azure Immersive Reader is an inclusively designed tool that implements proven techniques to improve reading comprehension for new readers, language learners, and people with learning differences such as dyslexia. With the Immersive Reader client library, you can leverage the same technology used in Microsoft Word and Microsoft OneNote to improve your web applications. Azure Immersive Reader is built using Translation and Text to Speech from Azure Cognitive Services.
-[Learn more about Azure Immersive Reader](../cognitive-services/immersive-reader/index.yml)
+[Learn more about Azure Immersive Reader](./immersive-reader/index.yml)
## Azure Bot Service
azure-arc Connect Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/connect-managed-instance.md
This article explains how you can connect to your Azure Arc-enabled SQL Managed
To view the Azure Arc-enabled SQL Managed Instance and the external endpoints use the following command: ```azurecli
-az sql mi-arc list
+az sql mi-arc list --k8s-namespace <namespace> --use-k8s -o table
``` Output should look like this: ```console
-Name Replicas ExternalEndpoint State
- - - -
-sqldemo 1/1 10.240.0.4:32023 Ready
+Name PrimaryEndpoint Replicas State
+ - - -
+sqldemo 10.240.0.107,1433 1/1 Ready
``` If you are using AKS, kubeadm, or OpenShift, you can copy the external IP and port number from here and connect to it using your favorite tool for connecting to a SQL Server/Azure SQL instance, such as Azure Data Studio or SQL Server Management Studio. However, if you are using the quick start VM, see below for special information about how to connect to that VM from outside of Azure.
azure-cache-for-redis Cache Administration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-administration.md
This article describes how to do administration tasks such as [rebooting](#reboo
On the left, **Reboot** allows you to reboot one or more nodes of your cache. This reboot capability enables you to test your application for resiliency if there's a failure of a cache node.
-![Screenshot that highlights the Reboot menu option.](./media/cache-administration/redis-cache-administration-reboot.png)
Select the nodes to reboot and select **Reboot**.
-![Screenshot that shows which nodes you can reboot.](./media/cache-administration/redis-cache-reboot.png)
If you have a premium cache with clustering enabled, you can select which shards of the cache to reboot.
-![Reboot](./media/cache-administration/redis-cache-reboot-cluster.png)
To reboot one or more nodes of your cache, select the nodes and select **Reboot**. If you have a premium cache with clustering enabled, select the shards to reboot and then select **Reboot**. After a few minutes, the selected nodes reboot, and are back online a few minutes later.
On the left, **Schedule updates** allows you to choose a maintenance window for
> Currently, no option is available to configure a reboot or scheduled updates for an Enterprise tier cache. >
-![Schedule updates](./media/cache-administration/redis-schedule-updates.png)
To specify a maintenance window, check the days you want and specify the maintenance window start hour for each day. Then, select **OK**. The maintenance window time is in UTC.
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-private-link.md
PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/
} ```
+### How can I migrate my VNet injected cache to a Private Link cache?
+
+Please refer to our [migration guide](cache-vnet-migration.md) for different approaches on how to migrate your VNet injected caches to Private Link caches.
+ ### How can I have multiple endpoints in different virtual networks? To have multiple private endpoints in different virtual networks, the private DNS zone must be manually configured to the multiple virtual networks _before_ creating the private endpoint. For more information, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md).
Control the traffic by using NSG rules for outbound traffic on source clients. D
It's only linked to your VNet. Because it's not in your VNet, NSG rules don't need to be modified for dependent endpoints.
-### How can I migrate my VNet injected cache to a private endpoint cache?
-
-Delete your VNet injected cache and create a new cache instance with a private endpoint. For more information, see [migrate to Azure Cache for Redis](cache-migration-guide.md)
## Next steps
azure-cache-for-redis Cache Vnet Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-vnet-migration.md
This article describes a number of approaches to migrate an Azure Cache for Redi
[Azure Private Link](../private-link/private-link-overview.md) simplifies the network architecture and secures the connection between endpoints in Azure. You can connect to an Azure Cache instance from your virtual network via a private endpoint, which is assigned a private IP address in a subnet within the virtual network. Advantages of using Azure Private Link for Azure Cache for Redis include:
-* **Tier flexibility** – Azure Private Link is supported on all our tiers; Basic, Standard, Premium, Enterprise, and Enterprise Flash. Compared to Virtual Network injection, which is only offered on our premium tier.
+* **Tier flexibility** – Azure Private Link is supported on all our tiers; Basic, Standard, Premium, Enterprise, and Enterprise Flash. Compared to Virtual Network injection, which is only offered on our premium tier.
-* **Azure Policy Support** – Ensure all caches in your organization are created with Private Link and audit your organization's existing caches to verify they all utilize Private Link.
+* **Simplified Network Security Group (NSG) Rule Management** - NSG rules do not need to be configured to adhere to requirements from Azure Cache for Redis.
-* **Simplified Network Security Group (NSG) Rule Management** - NSG rules do not need to be configured such that the client's network traffic is allowed to reach the Azure Cache for Redis instance.
+* **Azure Policy Support** – Ensure all caches in your organization are created with Private Link and audit your organization's existing caches to verify they all utilize Private Link.
## Migration options
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-timer.md
Here's the JavaScript code:
module.exports = function (context, myTimer) { var timeStamp = new Date().toISOString();
- if (myTimer.IsPastDue)
+ if (myTimer.isPastDue)
{ context.log('Node is running late!'); }
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-core-tools-reference.md
Core Tools commands are organized into the following contexts, each providing a
| [`func extensions`](#func-extensions-install) | Commands for installing and managing extensions. | | [`func kubernetes`](#func-kubernetes-deploy) | Commands for working with Kubernetes and Azure Functions. | | [`func settings`](#func-settings-decrypt) | Commands for managing environment settings for the local Functions host. |
-| `func templates` | Commands for listing available function templates. |
+| [`func templates`](#func-templates-list) | Commands for listing available function templates. |
Before using the commands in this article, you must [install the Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
azure-functions Functions Twitter Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-twitter-email.md
Create a connection to Twitter so your app can poll for new tweets.
| Setting | Value | | - | -- | | Search text | **#my-twitter-tutorial** |
- | How oven do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the Twitter connector. |
+ | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the Twitter connector. |
1. Select the **Save** button on the toolbar to save your progress.
Optionally, you may want to return to your Twitter account and delete any test t
## Next steps > [!div class="nextstepaction"]
-> [Create a serverless API using Azure Functions](functions-create-serverless-api.md)
+> [Create a serverless API using Azure Functions](functions-create-serverless-api.md)
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/deploy.md
Perform the steps in this topic in sequence to install the Start/Stop VMs v2 (pr
> [!NOTE] > If you run into problems during deployment, you encounter an issue when using Start/Stop VMs v2 (preview), or if you have a related question, you can submit an issue on [GitHub](https://github.com/microsoft/startstopv2-deployments/issues). Filing an Azure support incident from the [Azure support site](https://azure.microsoft.com/support/options/) is not available for this preview version.
+## Permissions considerations
+Please keep the following in mind before and during deployment:
++ The solution allows those with appropriate role-based access control (RBAC) permissions on the Start/Stop v2 deployment to add, remove, and manage schedules for virtual machines under the scope of the Start/Stop v2. This behavior is by design. In practice, this means a user who doesn't have direct RBAC permission on a virtual machine could still create start, stop, and autostop operations on that virtual machine when they have the RBAC permission to modify the Start/Stop v2 solution managing it.
++ Any users with access to the Start/Stop v2 solution could uncover cost, savings, operation history, and other data that is stored in the Application Insights instance used by the Start/Stop v2 application.
++ When managing a Start/Stop v2 solution, you should consider the permissions of users to the Start/Stop v2 solution, particularly when they don't have permission to directly modify the target virtual machines.

## Deploy feature

The deployment is initiated from the Start/Stop VMs v2 GitHub organization [here](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md). While this feature is intended to manage all of your VMs in your subscription across all resource groups from a single deployment within the subscription, you can install another instance of it based on the operations model or requirements of your organization. It also can be configured to centrally manage VMs across multiple subscriptions.
To simplify management and removal, we recommend you deploy Start/Stop VMs v2 (p
:::image type="content" source="media/deploy/deployment-results-resource-list.png" alt-text="Start/Stop VMs template deployment resource list"::: > [!NOTE]
-> The naming format for the function app and storage account has changed. To guarantee global uniqueness, a random and unique string is now appended to the names of these resource.
+> The naming format for the function app and storage account has changed. To guarantee global uniqueness, a random and unique string is now appended to the names of these resources.
+
+> [!NOTE]
+> We are collecting operation and heartbeat telemetry to better assist you if you reach the support team for any troubleshooting. We are also collecting virtual machine event history to verify when the service acted on a virtual machine and how long a virtual machine was snoozed in order to determine the efficacy of the service.
## Enable multiple subscriptions
azure-functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/start-stop-vms/overview.md
An HTTP trigger endpoint function is created to support the schedule and sequenc
|Name |Trigger |Description | |--|--||
-|AlertAvailabilityTest |Timer |This function performs the availability test to make sure the primary function **AutoStopVM** is always available.|
+|Scheduled |HTTP |This function is for both scheduled and sequenced scenario (differentiated by the payload schema). It is the entry point function called from the Logic App and takes the payload to process the VM start or stop operation. |
|AutoStop |HTTP |This function supports the **AutoStop** scenario, which is the entry point function that is called from Logic App.|
-|AutoStopAvailabilityTest |Timer |This function performs the availability test to make sure the primary function **AutoStop** is always available.|
|AutoStopVM |HTTP |This function is triggered automatically by the VM alert when the alert condition is true.|
-|CreateAutoStopAlertExecutor |Queue |This function gets the payload information from the **AutoStop** function to create the alert on the VM.|
-|Scheduled |HTTP |This function is for both scheduled and sequenced scenario (differentiated by the payload schema). It is the entry point function called from the Logic App and takes the payload to process the VM start or stop operation. |
-|ScheduledAvailabilityTest |Timer |This function performs the availability test to make sure the primary function **Scheduled** is always available.|
-|VirtualMachineRequestExecutor |Queue |This function performs the actual start and stop operation on the VM.|
|VirtualMachineRequestOrchestrator |Queue |This function gets the payload information from the **Scheduled** function and orchestrates the VM start and stop requests.|
+|VirtualMachineRequestExecutor |Queue |This function performs the actual start and stop operation on the VM.|
+|CreateAutoStopAlertExecutor |Queue |This function gets the payload information from the **AutoStop** function to create the alert on the VM.|
+|HeartBeatAvailabilityTest |Timer |This function monitors the availability of the primary HTTP functions.|
+|CostAnalyticsFunction |Timer |This function calculates the cost to run the Start/Stop V2 solution on a monthly basis.|
+|SavingsAnalyticsFunction |Timer |This function calculates the total savings achieved by the Start/Stop V2 solution on a monthly basis.|
+|VirtualMachineSavingsFunction |Queue |This function performs the actual savings calculation on a VM achieved by the Start/Stop V2 solution.|
For example, **Scheduled** HTTP trigger function is used to handle schedule and sequence scenarios. Similarly, **AutoStop** HTTP trigger function handles the auto stop scenario.
Specifying a list of VMs can be used when you need to perform the start and stop
## Next steps
-To deploy this feature, see [Deploy Start/Stop VMs](deploy.md) (preview).
+To deploy this feature, see [Deploy Start/Stop VMs](deploy.md) (preview).
azure-monitor Source Map Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/source-map-support.md
Title: Source map support for JavaScript applications - Azure Monitor Application Insights description: Learn how to upload source maps to your own storage account Blob container using Application Insights. -- Last updated 06/23/2020
If you are using Azure Pipelines to continuously build and deploy your applicati
From the end-to-end transaction details tab, you can click on *Unminify* and it will display a prompt to configure if your resource is unconfigured. 1. In the Portal, view the details of an exception that is minified.
-2. Click on *Unminify*
+2. Select *Unminify*.
3. If your resource has not been configured, a message will appear, prompting you to configure. ### From the properties page
From the end-to-end transaction details tab, you can click on *Unminify* and it
If you would like to configure or change the storage account or Blob container that is linked to your Application Insights Resource, you can do it by viewing the Application Insights resource's *Properties* tab. 1. Navigate to the *Properties* tab of your Application Insights resource.
-2. Click on *Change source map blob container*.
+2. Select *Change source map blob container*.
3. Select a different Blob container as your source maps container.
-4. Click `Apply`.
+4. Select `Apply`.
> [!div class="mx-imgBorder"] > ![Reconfigure your selected Azure Blob Container by navigating to the Properties Blade](./media/source-map-support/reconfigure.png)
azure-monitor Sql Insights Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/sql-insights-enable.md
To learn how to enable SQL Insights, you can also refer to this Data Exposed epi
> [!VIDEO https://channel9.msdn.com/Shows/Data-Exposed/How-to-Set-up-Azure-Monitor-for-SQL-Insights/player?format=ny] ## Create Log Analytics workspace
-SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
+SQL insights stores its data in one or more [Log Analytics workspaces](../logs/data-platform-logs.md#log-analytics-and-workspaces). Before you can enable SQL Insights, you need to either [create a workspace](../logs/quick-create-workspace.md) or select an existing one. A single workspace can be used with multiple monitoring profiles, but the workspace and profiles must be located in the same Azure region. To enable and access the features in SQL insights, you must have the [Log Analytics contributor role](../logs/manage-access.md) in the workspace.
## Create monitoring user You need a user on the SQL deployments that you want to monitor. Follow the procedures below for different types of SQL deployments.
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/data-platform-logs.md
Title: Azure Monitor Logs
-description: Describes Azure Monitor Logs which are used for advanced analysis of monitoring data.
+description: Learn the basics of Azure Monitor Logs, which is used for advanced analysis of monitoring data.
documentationcenter: '' na
# Azure Monitor Logs overview
-Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from [monitored resources](../monitor-reference.md). Data from different sources such as [platform logs](../essentials/platform-logs-overview.md) from Azure services, log and performance data from [virtual machines agents](../agents/agents-overview.md), and usage and performance data from [applications](../app/app-insights-overview.md) can be consolidated into a single workspace so they can be analyzed together using a sophisticated query language that's capable of quickly analyzing millions of records. You may perform a simple query that just retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively using Log Analytics, use them in an alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
+Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from [monitored resources](../monitor-reference.md). Data from multiple sources can be consolidated into a single workspace. These sources include:
-> [!NOTE]
-> Azure Monitor Logs is one half of the data platform supporting Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md) which stores numeric data in a time-series database. This makes this data more lightweight than data in Azure Monitor Logs and capable of supporting near real-time scenarios making them particularly useful for alerting and fast detection of issues. Metrics though can only store numeric data in a particular structure, while Logs can store a variety of different data types each with their own structure. You can also perform complex analysis on Logs data using log queries which cannot be used for analysis of Metrics data.
+- [Platform logs](../essentials/platform-logs-overview.md) from Azure services.
+- Log and performance data from [virtual machine agents](../agents/agents-overview.md).
+- Usage and performance data from [applications](../app/app-insights-overview.md).
+
+You can then analyze the data by using a sophisticated query language that's capable of quickly analyzing millions of records.
+
+You might perform a simple query that retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively by using Log Analytics, use them in alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
+> [!NOTE]
+> Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md), which stores numeric data in a time-series database. Numeric data is more lightweight than data in Azure Monitor Logs. Azure Monitor Metrics can support near real-time scenarios, so it's useful for alerting and fast detection of issues.
+>
+> Azure Monitor Metrics can only store numeric data in a particular structure, whereas Azure Monitor Logs can store a variety of data types that have their own structures. You can also perform complex analysis on Azure Monitor Logs data by using log queries, which can't be used for analysis of Azure Monitor Metrics data.
## What can you do with Azure Monitor Logs?
-The following table describes some of the different ways that you can use Logs in Azure Monitor:
+The following table describes some of the ways that you can use Azure Monitor Logs:
| | Description | |:|:|
-| **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data using a powerful analysis engine |
+| **Analyze** | Use [Log Analytics](./log-analytics-tutorial.md) in the Azure portal to write [log queries](./log-query-overview.md) and interactively analyze log data by using a powerful analysis engine. |
| **Alert** | Configure a [log alert rule](../alerts/alerts-log.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the results of the query match a particular result. |
-| **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](../visualize/powerbi.md) to use different visualizations and share with users outside of Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to leverage its dashboarding and combine with other data sources.|
-| **Insights** | Support [insights](../monitor-reference.md#insights-and-core-solutions) that provide a customized monitoring experience for particular applications and services. |
-| **Retrieve** | Access log query results from a command line using [Azure CLI](/cli/azure/monitor/log-analytics).<br>Access log query results from a command line using [PowerShell cmdlets](/powershell/module/az.operationalinsights).<br>Access log query results from a custom application using [REST API](https://dev.loganalytics.io/). |
-| **Export** | Configure [automated export of log data](./logs-data-export.md) to Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location using [Logic Apps](./logicapp-flow-connector.md). |
-
-![Logs overview](media/data-platform-logs/logs-overview.png)
+| **Visualize** | Pin query results rendered as tables or charts to an [Azure dashboard](../../azure-portal/azure-portal-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report. <br>Export the results of a query to [Power BI](../visualize/powerbi.md) to use different visualizations and share with users outside Azure.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to use its dashboarding and combine with other data sources.|
+| **Get insights** | Support [insights](../monitor-reference.md#insights-and-core-solutions) that provide a customized monitoring experience for particular applications and services. |
+| **Retrieve** | Access log query results from a command line by using the [Azure CLI](/cli/azure/monitor/log-analytics).<br>Access log query results from a command line by using [PowerShell cmdlets](/powershell/module/az.operationalinsights).<br>Access log query results from a custom application by using the [REST API](https://dev.loganalytics.io/). |
+| **Export** | Configure [automated export of log data](./logs-data-export.md) to an Azure storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](./logicapp-flow-connector.md). |
+![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
## Data collection
-Once you create a Log Analytics workspace, you must configure different sources to send their data. No data is collected automatically. This configuration will be different depending on the data source. For example, [create diagnostic settings](../essentials/diagnostic-settings.md) to send resource logs from Azure resources to the workspace. [Enable VM insights](../vm/vminsights-enable-overview.md) to collect data from virtual machines. Configure [data sources on the workspace](../agents/data-sources.md) to collect additional events and performance data.
+After you create a Log Analytics workspace, you must configure sources to send their data. No data is collected automatically.
-- See [What is monitored by Azure Monitor?](../monitor-reference.md) for a complete list of data sources that you can configure to send data to Azure Monitor Logs.
+This configuration will be different depending on the data source. For example:
+- [Create diagnostic settings](../essentials/diagnostic-settings.md) to send resource logs from Azure resources to the workspace.
+- [Enable VM insights](../vm/vminsights-enable-overview.md) to collect data from virtual machines.
+- [Configure data sources on the workspace](../agents/data-sources.md) to collect more events and performance data.
-## Log Analytics workspaces
-Data collected by Azure Monitor Logs is stored in one or more [Log Analytics workspaces](./design-logs-deployment.md). The workspace defines the geographic location of the data, access rights defining which users can access data, and configuration settings such as the pricing tier and data retention.
+For a complete list of data sources that you can configure to send data to Azure Monitor Logs, see [What is monitored by Azure Monitor?](../monitor-reference.md).
-You must create at least one workspace to use Azure Monitor Logs. A single workspace may be sufficient for all of your monitoring data, or may choose to create multiple workspaces depending on your requirements. For example, you might have one workspace for your production data and another for testing.
+## Log Analytics and workspaces
+Log Analytics is a tool in the Azure portal. Use it to edit and run log queries and interactively analyze their results. You can then use those queries to support other features in Azure Monitor, such as log query alerts and workbooks. Access Log Analytics from the **Logs** option on the Azure Monitor menu or from most other services in the Azure portal.
-- See [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md) to create a new workspace.-- See [Designing your Azure Monitor Logs deployment](design-logs-deployment.md) on considerations for creating multiple workspaces.
+For a description of Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md). To walk through using Log Analytics features to create a simple log query and analyze its results, see [Log Analytics tutorial](./log-analytics-tutorial.md).
-## Data structure
-Log queries retrieve their data from a Log Analytics workspace. Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns that are shared by the rows of data provided by the data source.
+Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./design-logs-deployment.md). A workspace defines:
-[![Azure Monitor Logs structure](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
+- The geographic location of the data.
+- Access rights that define which users can access data.
+- Configuration settings such as the pricing tier and data retention.
+You must create at least one workspace to use Azure Monitor Logs. A single workspace might be sufficient for all of your monitoring data, or you might choose to create multiple workspaces depending on your requirements. For example, you might have one workspace for your production data and another for testing.
-Log data from Application Insights is also stored in Azure Monitor Logs, but it's stored different depending on how your application is configured. For a workspace-based application, data is stored in a Log Analytics workspace in a standard set of tables to hold data such as application requests, exceptions, and page views. Multiple applications can use the same workspace. For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries using the same Log Analytics tool in the Azure portal. Data for classic applications though is stored separately from each other. Its general structure is the same as workspace-based applications although the table and column names are different. See [Workspace-based resource changes](../app/apm-tables.md) for a detailed comparison of the schema for workspace-based and classic applications.
+To create a new workspace, see [Create a Log Analytics workspace in the Azure portal](./quick-create-workspace.md). For considerations on creating multiple workspaces, see [Designing your Azure Monitor Logs deployment](design-logs-deployment.md).
+## Log queries
+Data is retrieved from a Log Analytics workspace through a log query, which is a read-only request to process data and return results. Log queries are written in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/). KQL is the same query language that Azure Data Explorer uses.
-> [!NOTE]
-> We still provide full backwards compatibility for your Application Insights classic resource queries, workbooks, and log-based alerts within the Application Insights experience. To query/view against the [new workspace-based table structure/schema](../app/apm-tables.md) you must first navigate to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. See [Query scope](./scope.md) for more details.
+You can write log queries in Log Analytics to interactively analyze their results, use them in alert rules to be proactively notified of issues, or include their results in workbooks or dashboards. Insights include prebuilt queries to support their views and workbooks.
+For a list of where log queries are used and references to tutorials and other documentation to get you started, see [Log queries in Azure Monitor](./log-query-overview.md).
-[![Azure Monitor Logs structure for Application Insights](media/data-platform-logs/logs-structure-ai.png)](media/data-platform-logs/logs-structure-ai.png#lightbox)
+![Screenshot that shows queries in Log Analytics.](media/data-platform-logs/log-analytics.png)
+## Data structure
+Log queries retrieve their data from a Log Analytics workspace. Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns.
-## Log queries
-Data is retrieved from a Log Analytics workspace using a log query which is a read-only request to process data and return results. Log queries are written in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), which is the same query language used by Azure Data Explorer. You can write log queries in Log Analytics to interactively analyze their results, use them in alert rules to be proactively notified of issues, or include their results in workbooks or dashboards. Insights include prebuilt queries to support their views and workbooks.
+[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
-- See [Log queries in Azure Monitor](./log-query-overview.md) for a list of where log queries are used and references to tutorials and other documentation to get you started.
+Log data from Application Insights is also stored in Azure Monitor Logs, but it's stored differently depending on how your application is configured:
-![Log Analytics](media/data-platform-logs/log-analytics.png)
+- For a workspace-based application, data is stored in a Log Analytics workspace in a standard set of tables. The types of data include application requests, exceptions, and page views. Multiple applications can use the same workspace.
-## Log Analytics
-Use Log Analytics, which is a tool in the Azure portal, to edit and run log queries and interactively analyze their results. You can then use the queries that you create to support other features in Azure Monitor such as log query alerts and workbooks. Access Log Analytics from the **Logs** option in the Azure Monitor menu or from most other services in the Azure portal.
+- For a classic application, the data is not stored in a Log Analytics workspace. It uses the same query language, and you create and run queries by using the same Log Analytics tool in the Azure portal. Data items for classic applications are stored separately from each other. The general structure is the same as for workspace-based applications, although the table and column names are different.
-- See [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md) for a description of Log Analytics. -- See [Log Analytics tutorial](./log-analytics-tutorial.md) to walk through using Log Analytics features to create a simple log query and analyze its results.
+For a detailed comparison of the schema for workspace-based and classic applications, see [Workspace-based resource changes](../app/apm-tables.md).
+> [!NOTE]
+> The classic Application Insights experience includes backward compatibility for your resource queries, workbooks, and log-based alerts. To query or view against the [new workspace-based table structure or schema](../app/apm-tables.md), you must first go to your Log Analytics workspace. During the preview, selecting **Logs** from within the Application Insights panes will give you access to the classic Application Insights query experience. For more information, see [Query scope](./scope.md).
+[![Diagram that shows the Azure Monitor Logs structure for Application Insights.](media/data-platform-logs/logs-structure-ai.png)](media/data-platform-logs/logs-structure-ai.png#lightbox)
## Relationship to Azure Data Explorer
-Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer, tables are structured the same, and both use the same Kusto Query Language (KQL). The experience of using Log Analytics to work with Azure Monitor queries in the Azure portal is similar to the experience using the Azure Data Explorer Web UI. You can even [include data from a Log Analytics workspace in an Azure Data Explorer query](/azure/data-explorer/query-monitor-data).
+Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL.
+The experience of using Log Analytics to work with Azure Monitor queries in the Azure portal is similar to the experience of using the Azure Data Explorer Web UI. You can even [include data from a Log Analytics workspace in an Azure Data Explorer query](/azure/data-explorer/query-monitor-data).
## Next steps - Learn about [log queries](./log-query-overview.md) to retrieve and analyze data from a Log Analytics workspace. - Learn about [metrics in Azure Monitor](../essentials/data-platform-metrics.md).-- Learn about the [monitoring data available](../agents/data-sources.md) for different resources in Azure.
+- Learn about the [monitoring data available](../agents/data-sources.md) for various resources in Azure.
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/log-analytics-tutorial.md
Title: "Log Analytics tutorial"
-description: Learn from this tutorial how to use features of Log Analytics in Azure Monitor to build a run a log query and analyze its results in the Azure portal.
+description: Learn from this tutorial how to use features of Log Analytics in Azure Monitor to build and run a log query and analyze its results in the Azure portal.
Last updated 06/28/2021
# Log Analytics tutorial
-Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records matching particular criteria, identify trends, analyze patterns, and provide a variety of insights into your data.
+Log Analytics is a tool in the Azure portal to edit and run log queries from data collected by Azure Monitor Logs and interactively analyze their results. You can use Log Analytics queries to retrieve records that match particular criteria, identify trends, analyze patterns, and provide a variety of insights into your data.
-This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You will learn the following:
+This tutorial walks you through the Log Analytics interface, gets you started with some basic queries, and shows you how you can work with the results. You'll learn the following:
> [!div class="checklist"] > * Understand the log data schema
This tutorial walks you through the Log Analytics interface, gets you started wi
> * Load, export, and copy queries and results > [!IMPORTANT]
-> This tutorial uses features of Log Analytics to build and run a query instead of working with the query itself. You'll leverage Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, go through the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks through several example queries that you can edit and run in Log Analytics, leveraging several of the features that you'll learn in this tutorial.
+> In this tutorial, you'll use Log Analytics features to build one query and use another example query. When you're ready to learn the syntax of queries and start directly editing the query itself, read the [Kusto Query Language tutorial](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor). That tutorial walks through example queries that you can edit and run in Log Analytics. It uses several of the features that you'll learn in this tutorial.
## Prerequisites
-This tutorial uses the [Log Analytics demo environment](https://ms.portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), which includes plenty of sample data supporting the sample queries. You can also use your own Azure subscription, but you may not have data in the same tables.
+This tutorial uses the [Log Analytics demo environment](https://ms.portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), which includes plenty of sample data that supports the sample queries. You can also use your own Azure subscription, but you might not have data in the same tables.
## Open Log Analytics
-Open the [Log Analytics demo environment](https://ms.portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade) or select **Logs** from the Azure Monitor menu in your subscription. This will set the initial scope to a Log Analytics workspace meaning that your query will select from all data in that workspace. If you select **Logs** from an Azure resource's menu, the scope is set to only records from that resource. See [Log query scope](./scope.md) for details about the scope.
+Open the [Log Analytics demo environment](https://ms.portal.azure.com/#blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade), or select **Logs** from the Azure Monitor menu in your subscription. This step will set the initial scope to a Log Analytics workspace, so that your query will select from all data in that workspace. If you select **Logs** from an Azure resource's menu, the scope is set to only records from that resource. For details about the scope, see [Log query scope](./scope.md).
-You can view the scope in the top left corner of the screen. If you're using your own environment, you'll see an option to select a different scope, but this option isn't available in the demo environment.
+You can view the scope in the upper-left corner of the screen. If you're using your own environment, you'll see an option to select a different scope. This option isn't available in the demo environment.
-## Table schema
-The left side of the screen includes the **Tables** tab which allows you to inspect the tables that are available in the current scope. These are grouped by **Solution** by default, but you can change their grouping or filter them.
+## View table information
+The left side of the screen includes the **Tables** tab, where you can inspect the tables that are available in the current scope. These tables are grouped by **Solution** by default, but you can change their grouping or filter them.
-Expand the **Log Management** solution and locate the **AppRequests** table. You can expand the table to view its schema, or hover over its name to show additional information about it.
+Expand the **Log Management** solution and locate the **AppRequests** table. You can expand the table to view its schema, or hover over its name to show more information about it.
-Click the link below **Useful links** to go to the table reference that documents each table and its columns. Click **Preview data** to have a quick look at a few recent records in the table. This can be useful to ensure that this is the data that you're expecting before you actually run a query with it.
+Select the link below **Useful links** to go to the table reference that documents each table and its columns. Select **Preview data** to have a quick look at a few recent records in the table. This preview can be useful to ensure that this is the data that you're expecting before you run a query with it.
## Write a query
-Let's go ahead and write a query using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window and even get intellisense that will help complete the names of tables in the current scope and KQL commands.
+Let's write a query by using the **AppRequests** table. Double-click its name to add it to the query window. You can also type directly in the window. You can even get IntelliSense that will help complete the names of tables in the current scope and Kusto Query Language (KQL) commands.
-This is the simplest query that we can write. It just returns all the records in a table. Run it by clicking the **Run** button or by pressing Shift+Enter with the cursor positioned anywhere in the query text.
+This is the simplest query that we can write. It just returns all the records in a table. Run it by selecting the **Run** button or by selecting Shift+Enter with the cursor positioned anywhere in the query text.
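At this stage, the query is just the table name. A minimal sketch of what the query window contains (assuming you added the **AppRequests** table as described):

```kusto
// Return all records in the AppRequests table (subject to the selected time range).
AppRequests
```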
-You can see that we do have results. The number of records returned by the query is displayed in the bottom right corner.
+You can see that we do have results. The number of records that the query has returned appears in the lower-right corner.
-## Filter
+## Filter query results
-Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab in the left pane. This shows different columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records with that value. Click on **200** under **ResultCode** and then **Apply & Run**.
+Let's add a filter to the query to reduce the number of records that are returned. Select the **Filter** tab on the left pane. This tab shows columns in the query results that you can use to filter the results. The top values in those columns are displayed with the number of records that have that value. Select **200** under **ResultCode**, and then select **Apply & Run**.
-A **where** statement is added to the query with the value you selected. The results now include only those records with that value so you can see that the record count is reduced.
+A **where** statement is added to the query with the value that you selected. The results now include only records with that value, so you can see that the record count is reduced.
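The generated **where** statement should look roughly like the following sketch. The exact text that Log Analytics produces can differ slightly; comparing `ResultCode` as a string is an assumption about the column type in the demo data.

```kusto
// Keep only requests whose result code is 200.
AppRequests
| where ResultCode == "200"
```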
-## Time range
-All tables in a Log Analytics workspace have a column called **TimeGenerated** which is the time that the record was created. All queries have a time range that limits the results to records with a **TimeGenerated** value within that range. The time range can either be set in the query or with the selector at the top of the screen.
+### Time range
+All tables in a Log Analytics workspace have a column called **TimeGenerated**, which is the time that the record was created. All queries have a time range that limits the results to records that have a **TimeGenerated** value within that range. You can set the time range in the query or by using the selector at the top of the screen.
-By default, the query will return records from the last 24 hours. You should see a message here that we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that. Select the **Time range** dropdown and change it to **12 hours**. Click **Run** again to return the results.
+By default, the query returns records from the last 24 hours. You should see a message here that says we're not seeing all of the results. This is because Log Analytics can return a maximum of 30,000 records, and our query returned more records than that. Select the **Time range** dropdown list, and change the value to **12 hours**. Select **Run** again to return the results.
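The time range selector is applied outside the query text, but you can express the same restriction in the query itself. A hedged equivalent of the 12-hour window, combined with the earlier filter, looks like this:

```kusto
// Restrict results to records created in the last 12 hours,
// in addition to the result-code filter from the previous step.
AppRequests
| where TimeGenerated > ago(12h)
| where ResultCode == "200"
```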
-## Multiple query conditions
-Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name** and click **Apply & Run**.
+### Multiple query conditions
+Let's reduce our results further by adding another filter condition. A query can include any number of filters to target exactly the set of records that you want. Select **Get Home/Index** under **Name**, and then select **Apply & Run**.
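With the second filter applied, the combined query is roughly the following sketch. The `Name` value is taken from the tutorial; the exact value in your own workspace might differ.

```kusto
// Combine both filter conditions to narrow the result set further.
AppRequests
| where ResultCode == "200"
| where Name == "Get Home/Index"
```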
## Analyze results In addition to helping you write and run queries, Log Analytics provides features for working with the results. Start by expanding a record to view the values for all of its columns.
-Click on the name of any column to sort the results by that column. Click on the filter icon next to it to provide a filter condition. This is similar to adding a filter condition to the query itself except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
+Select the name of any column to sort the results by that column. Select the filter icon next to it to provide a filter condition. This is similar to adding a filter condition to the query itself, except that this filter is cleared if the query is run again. Use this method if you want to quickly analyze a set of records as part of interactive analysis.
-For example, set a filter on the **DurationMs** column to limit the records to those that took over **100** milliseconds.
+For example, set a filter on the **DurationMs** column to limit the records to those that took more than **100** milliseconds.
-Instead of filtering the results, you can group records by a particular column. Clear the filter that you just created and then turn on the **Group columns** slider.
+Instead of filtering the results, you can group records by a particular column. Clear the filter that you just created and then turn on the **Group columns** toggle.
-Now drag the **Url** column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
+Drag the **Url** column into the grouping row. Results are now organized by that column, and you can collapse each group to help you with your analysis.
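The column filter and grouping described above are interactive and aren't added to the query text. If you want to persist the same analysis in the query itself, a rough KQL equivalent (assuming the `DurationMs` and `Url` columns are present, as in the demo data) would be:

```kusto
// Keep only slow requests and count them per URL,
// mirroring the interactive filter and grouping.
AppRequests
| where ResultCode == "200" and Name == "Get Home/Index"
| where DurationMs > 100
| summarize RequestCount = count() by Url
```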
## Work with charts
-Let's have a look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query.
+Let's look at a query that uses numerical data that we can view in a chart. Instead of building a query, we'll select an example query.
-Click on **Queries** in the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have a variety of queries in multiple categories, but if you're using the demo environment, you may only see a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
+Select **Queries** on the left pane. This pane includes example queries that you can add to the query window. If you're using your own workspace, you should have a variety of queries in multiple categories. If you're using the demo environment, you might see only a single **Log Analytics workspaces** category. Expand that to view the queries in the category.
-Click on the query called **Function Error rate** in the **Applications** category. This will add the query to the query window. Notice that the new query is separated from the other by a blank line. A query in KQL ends when it encounters a blank line, so these are seen as separate queries.
+Select the query called **Function Error rate** in the **Applications** category. This step adds the query to the query window. Notice that the new query is separated from the first query by a blank line. A query in KQL ends when it encounters a blank line, so these are considered separate queries.
-The current query is the one that the cursor is positioned on. You can see that the first query is highlighted indicating it's the current query. Click anywhere in the new query to select it and then click the **Run** button to run it.
+The current query is the one that the cursor is positioned on. You can see that the first query is highlighted, indicating that it's the current query. Click anywhere in the new query to select it, and then select the **Run** button to run it.
-To view the results in a graph, select **Chart** in the results pane. Notice that there are various options for working with the chart such as changing it to another type.
+To view the results in a graph, select **Chart** on the results pane. Notice that there are various options for working with the chart, such as changing it to another type.
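The exact text of the **Function Error rate** example query isn't reproduced here, but any query that aggregates numerical data over time can be charted. A simple illustrative sketch that renders as a time chart:

```kusto
// Count requests per hour and render the result as a time chart.
AppRequests
| summarize RequestCount = count() by bin(TimeGenerated, 1h)
| render timechart
```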
- ## Next steps
-Now that you know how to use Log Analytics, complete the tutorial on using log queries.
+Now that you know how to use Log Analytics, complete the tutorial on using log queries:
> [!div class="nextstepaction"] > [Write Azure Monitor log queries](get-started-queries.md)
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-access.md
The examples above define a list of tables that are allowed. This example shows
### Custom logs
- Custom logs are created from data sources such as custom logs and HTTP Data Collector API. The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#table-schema).
+ Custom logs are created from data sources such as custom logs and HTTP Data Collector API. The easiest way to identify the type of log is by checking the tables listed under [Custom Logs in the log schema](./log-analytics-tutorial.md#view-table-information).
You can't grant access to individual custom logs, but you can grant access to all custom logs. To create a role with access to all custom logs, create a custom role using the following actions:
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Monitor
-description: Use Azure Private Link to securely connect networks to Azure Monitor
+ Title: Use Azure Private Link to connect networks to Azure Monitor
+description: Set up an Azure Monitor Private Link Scope to securely connect networks to Azure Monitor.
Last updated 10/05/2020
-# Use Azure Private Link to securely connect networks to Azure Monitor
+# Use Azure Private Link to connect networks to Azure Monitor
-[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads. As a result, we have built a resource called an Azure Monitor Private Link Scope (AMPLS). AMPLS allows you to define the boundaries of your monitoring network and connect to your virtual network. This article covers when to use and how to set up an Azure Monitor Private Link Scope.
+With [Azure Private Link](../../private-link/private-link-overview.md), you can securely link Azure platform as a service (PaaS) services to your virtual network by using private endpoints. For many services, you just set up an endpoint for each resource. However, Azure Monitor is a constellation of different interconnected services that work together to monitor your workloads.
+
+You can use a resource called an Azure Monitor Private Link Scope to define the boundaries of your monitoring network and connect to your virtual network. This article covers when to use and how to set up an Azure Monitor Private Link Scope.
+
+This article uses the following abbreviations in examples:
+
+- AMPLS: Azure Monitor Private Link Scope
+- VNet: virtual network
+- AI: Application Insights
+- LA: Log Analytics
## Advantages
-With Private Link you can:
+With Private Link, you can:
-- Connect privately to Azure Monitor without opening up any public network access-- Ensure your monitoring data is only accessed through authorized private networks-- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint-- Securely connect your private on-premises network to Azure Monitor using ExpressRoute and Private Link-- Keep all traffic inside the Microsoft Azure backbone network
+- Connect privately to Azure Monitor without opening any public network access.
+- Ensure that your monitoring data is accessed only through authorized private networks.
+- Prevent data exfiltration from your private networks by defining specific Azure Monitor resources that connect through your private endpoint.
+- Securely connect your private on-premises network to Azure Monitor by using Azure ExpressRoute.
+- Keep all traffic inside the Microsoft Azure backbone network.
-For more information, see [Key Benefits of Private Link](../../private-link/private-link-overview.md#key-benefits).
+For more information, see [Key benefits of Private Link](../../private-link/private-link-overview.md#key-benefits).
## How it works
-Azure Monitor Private Link Scope (AMPLS) connects private endpoints (and the VNets they're contained in) to one or more Azure Monitor resources - Log Analytics workspaces and Application Insights components.
+An Azure Monitor Private Link Scope connects private endpoints (and the virtual networks they're contained in) to one or more Azure Monitor resources. These resources are Log Analytics workspaces and Application Insights components.
-![Diagram of basic resource topology](./media/private-link-security/private-link-basic-topology.png)
+![Diagram of a basic resource topology.](./media/private-link-security/private-link-basic-topology.png)
-* The Private Endpoint on your VNet allows it to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using to the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your VNet to unrequired outbound traffic.
-* Traffic from the Private Endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone, and not routed to public networks.
+* The private endpoint allows your virtual network to reach Azure Monitor endpoints through private IPs from your network's pool, instead of using the public IPs of these endpoints. That allows you to keep using your Azure Monitor resources without opening your virtual network to unrequired outbound traffic.
+* Traffic from the private endpoint to your Azure Monitor resources will go over the Microsoft Azure backbone and not be routed to public networks.
* You can configure each of your workspaces or components to allow or deny ingestion and queries from public networks. That provides a resource-level protection, so that you can control traffic to specific resources. > [!NOTE]
-> A single Azure Monitor resource can belong to multiple AMPLSs, but you cannot connect a single VNet to more than one AMPLS.
+> A single Azure Monitor resource can belong to multiple Azure Monitor Private Link Scopes, but you can't connect a single virtual network to more than one Azure Monitor Private Link Scope.
-### Azure Monitor Private Links and your DNS: It's All or Nothing
-Some Azure Monitor services use global endpoints, meaning they serve requests targeting any workspace/component. When you set up a Private Link connection your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through the Private Link. When it comes to global endpoints, setting up a Private Link (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection only for a specific component or workspace.
+### Azure Monitor Private Links and DNS: It's all or nothing
+
+Some Azure Monitor services use global endpoints, meaning that they serve requests targeting any workspace or component. When you set up a Private Link connection, your DNS is updated to map Azure Monitor endpoints to private IPs, in order to send traffic through Private Link.
+
+For global endpoints, setting up a Private Link instance (even to a single resource) affects traffic to all resources. In other words, it's impossible to create a Private Link connection for only a specific component or workspace.
#### Global endpoints
-Most importantly, traffic to the below global endpoints will be sent through the Private Link:
-* All Application Insights endpoints - endpoints handling ingestion, live metrics, profiler, debugger etc. to Application Insights endpoints are global.
-* The Query endpoint - the endpoint handling queries to both Application Insights and Log Analytics resources is global.
-That effectively means that all Application Insights traffic will be sent to the Private Link, and that all queries - to both Application Insights and Log Analytics resources - will be sent to the Private Link.
+Traffic to the following global endpoints will be sent through Private Link:
+* All Application Insights endpoints. Endpoints that handle ingestion, live metrics, profiler, and debugger (for example) to Application Insights endpoints are global.
+* The query endpoint. The endpoint that handles queries to both Application Insights and Log Analytics resources is global.
+
+Effectively, all Application Insights traffic will be sent to Private Link. All queries, to both Application Insights and Log Analytics resources, will be sent to Private Link.
-Traffic to Application Insights resource not added to your AMPLS will not pass the Private Link validation, and will fail.
+Traffic to Application Insights resources that aren't added to your Azure Monitor Private Link Scope will not pass the Private Link validation, and will fail.
-![Diagram of All or Nothing behavior](./media/private-link-security/all-or-nothing.png)
+![Diagram of all-or-nothing behavior.](./media/private-link-security/all-or-nothing.png)
#### Resource-specific endpoints
-All Log Analytics endpoints except the Query endpoint, are workspace-specific. So, creating a Private Link to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through the Private Link.
-### Azure Monitor Private Link applies to all networks that share the same DNS
-Some networks are composed of multiple VNets or other connected networks. If these networks share the same DNS, setting up a Private Link on any of them would update the DNS and affect traffic across all networks. That's especially important to note due to the "All or Nothing" behavior described above.
+All Log Analytics endpoints, except the query endpoint, are workspace specific. Creating a Private Link connection to a specific Log Analytics workspace won't affect ingestion (or other) traffic to other workspaces, which will continue to use the public Log Analytics endpoints. All queries, however, will be sent through Private Link.
-![Diagram of DNS overrides in multiple VNets](./media/private-link-security/dns-overrides-multiple-vnets.png)
+### Private Link applies to all networks that share the same DNS
-In the above diagram, VNet 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, VNet 10.0.2.x connects to AMPLS2, and overrides the DNS mapping of the *same global endpoints* with IPs from its range. Since these VNets aren't peered, the first VNet now fails to reach these endpoints.
+Some networks consist of multiple virtual networks or other connected networks. If these networks share the same DNS, setting up a Private Link instance on any of them would update the DNS and affect traffic across all networks. That's especially important to note, because of the all-or-nothing behavior described earlier.
+In the following diagram, virtual network 10.0.1.x first connects to AMPLS1 and maps the Azure Monitor global endpoints to IPs from its range. Later, virtual network 10.0.2.x connects to AMPLS2, and it overrides the DNS mapping of the *same global endpoints* with IPs from its range. Because these virtual networks aren't peered, the first virtual network now fails to reach these endpoints.
+
+![Diagram of DNS overrides in multiple virtual networks.](./media/private-link-security/dns-overrides-multiple-vnets.png)
## Planning your Private Link setup
-Before setting up your Azure Monitor Private Link setup, consider your network topology, and specifically your DNS routing topology.
+Before you set up Private Link, consider your network topology, and specifically your DNS routing topology.
-As discussed above, setting up a Private Link affects - in many ways - traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the Private Endpoint (and through it to the AMPLS resources) but also all other networks the share the same DNS.
+As discussed earlier, setting up Private Link affects traffic to all Azure Monitor resources. That's especially true for Application Insights resources. Additionally, it affects not only the network connected to the private endpoint (and through it to the Azure Monitor Private Link Scope resources) but also all other networks that share the same DNS.
-> [!NOTE]
-> Given all that, the simplest and most secure approach would be:
-> 1. Create a single Private Link connection, with a single Private Endpoint and a single AMPLS. If your networks are peered, create the Private Link connection on the shared (or hub) VNet.
-> 2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that AMPLS.
-> 3. Block network egress traffic as much as possible.
+Given all that, the simplest and most secure approach would be:
-If for some reason you can't use a single Private Link and a single AMPLS, the next best thing would be to create isolated Private Link connections for isolation networks. If you are (or can align with) using spoke vnets, follow the guidance in [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, setup separate private link settings in the relevant spoke VNets. **Make sure to separate DNS zones as well**, since sharing DNS zones with other spoke networks will cause DNS overrides.
+1. Create a single Private Link connection, with a single private endpoint and a single Azure Monitor Private Link Scope. If your networks are peered, create the Private Link connection on the shared (or hub) virtual network.
+2. Add *all* Azure Monitor resources (Application Insights components and Log Analytics workspaces) to that Azure Monitor Private Link Scope.
+3. Block network egress traffic as much as possible.
+If you can't use a single Private Link connection and a single Azure Monitor Private Link Scope, the next best thing is to create isolated Private Link connections for isolated networks. If you use (or can align with) spoke virtual networks, follow the guidance in [Hub-and-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). Then, set up separate Private Link settings in the relevant spoke virtual networks. *Be sure to separate DNS zones*, because sharing DNS zones with other spoke networks will cause DNS overrides.
-### Hub-spoke networks
-Hub-spoke topologies can avoid the issue of DNS overrides by setting the Private Link on the hub (main) VNet, and not on each spoke VNet. This setup makes sense especially if the Azure Monitor resources used by the spoke VNets are shared.
+### Hub-and-spoke networks
-![Hub-and-spoke-single-PE](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
+Hub-and-spoke topologies can avoid the issue of DNS overrides by setting the Private Link connection on the hub (main) virtual network, and not on each spoke virtual network. This setup makes sense, especially if the Azure Monitor resources that the spoke virtual networks use are shared.
-> [!NOTE]
-> You may intentionally prefer to create separate Private Links for your spoke VNets, for example to allow each VNet to access a limited set of monitoring resources. In such cases, you can create a dedicated Private Endpoint and AMPLS for each VNet, but **must also verify they don't share the same DNS zones in order to avoid DNS overrides**.
+![Diagram of a hub-and-spoke network with a single private endpoint.](./media/private-link-security/hub-and-spoke-with-single-private-endpoint.png)
+> [!NOTE]
+> You might prefer to create separate Private Link connections for your spoke virtual networks - for example, to allow each virtual network to access a limited set of monitoring resources. In such cases, you can create a dedicated private endpoint and Azure Monitor Private Link Scope for each virtual network. But you must also verify that they don't share the same DNS zones in order to avoid DNS overrides.
### Peered networks
-Network peering is used in various topologies, other than hub-spoke. Such networks can share reach each others' IP addresses, and most likely share the same DNS. In such cases, our recommendation is similar to Hub-spoke - select a single network that is reached by all other (relevant) networks and set the Private Link connection on that network. Avoid creating multiple Private Endpoints and AMPLS objects, since ultimately only the last one set in the DNS will apply.
+Network peering is used in various topologies other than hub-and-spoke. Such networks can reach each other's IP addresses, and most likely share the same DNS.
+
+In these cases, our recommendation is similar to hub-and-spoke. Select a single network that's reached by all other (relevant) networks, and set the Private Link connection on that network. Avoid creating multiple private endpoints and Azure Monitor Private Link Scope objects, because ultimately only the last one set in the DNS will apply.
### Isolated networks
-If your networks aren't peered, **you must also separate their DNS in order to use Private Links**. Once that's done, you can create a Private Link for one (or many) network, without affecting traffic of other networks. That means creating a separate Private Endpoint for each network, and a separate AMPLS object. Your AMPLS objects can link to the same workspaces/components, or to different ones.
+If your networks aren't peered, *you must also separate their DNS in order to use a Private Link connection*. After that's done, you can create a Private Link connection for one or many networks, without affecting the traffic of other networks. That means creating a separate private endpoint for each network, and a separate Azure Monitor Private Link Scope object. Your Azure Monitor Private Link Scope objects can link to the same workspaces or components, or to different ones.
-### Test with a local bypass: Edit your machine's hosts file instead of the DNS
-As a local bypass to the All or Nothing behavior, you can select not to update your DNS with the Private Link records, and instead edit the hosts files on select machines so only these machines would send requests to the Private Link endpoints.
-* Set up a Private Link as show below, but when [connecting to a Private Endpoint](#connect-to-a-private-endpoint) choose **not** to auto-integrate with the DNS (step 5b).
-* Configure the relevant endpoints on your machines' hosts files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your Endpoint's DNS settings](#reviewing-your-endpoints-dns-settings).
+### Test with a local bypass: Edit your machine's host file instead of DNS
-That approach isn't recommended for production environments.
+As a local bypass to the all-or-nothing behavior, you can select not to update your DNS with the Private Link records. Instead, you can edit the host files on specific machines so that only these machines send requests to the Private Link endpoints:
-## Limits and additional considerations
+* Set up a Private Link connection as described later in the [Connect to a private endpoint](#connect-to-a-private-endpoint) section of this article. But when you're connecting to a private endpoint, choose *not* to automatically integrate with the DNS (step 5b).
+* Configure the relevant endpoints on your machines' host files. To review the Azure Monitor endpoints that need mapping, see [Reviewing your endpoint's DNS settings](#reviewing-your-endpoints-dns-settings).
-### AMPLS limits
+> [!NOTE]
+> We don't recommend this approach for production environments.
+
+## Limits and additional considerations
-The AMPLS object has the following limits:
-* A VNet can only connect to **one** AMPLS object. That means the AMPLS object must provide access to all the Azure Monitor resources the VNet should have access to.
-* An AMPLS object can connect to 50 Azure Monitor resources at most.
-* An Azure Monitor resource (Workspace or Application Insights component) can connect to 5 AMPLSs at most.
-* An AMPLS object can connect to 10 Private Endpoints at most.
+### Azure Monitor Private Link Scope limits
-In the below diagram:
-* Each VNet connects to only **one** AMPLS object.
-* AMPLS A connects to two workspaces and one Application Insight component, using 3 of the 50 possible Azure Monitor resources connections.
-* Workspace2 connects to AMPLS A and AMPLS B, using of the 5 possible AMPLS connections.
-* AMPLS B is connected to Private Endpoints of two VNets (VNet2 and VNet3), using 2 of the 10 possible Private Endpoint connections.
+The Azure Monitor Private Link Scope object has the following limits:
+* A virtual network can connect to only *one* Azure Monitor Private Link Scope object. The Azure Monitor Private Link Scope object must provide access to all the Azure Monitor resources that the virtual network should have access to.
+* An Azure Monitor Private Link Scope object can connect to 50 Azure Monitor resources at most.
+* An Azure Monitor resource (workspace or Application Insights component) can connect to five Azure Monitor Private Link Scopes at most.
+* An Azure Monitor Private Link Scope object can connect to 10 private endpoints at most.
-![Diagram of AMPLS limits](./media/private-link-security/ampls-limits.png)
+In the following diagram:
+* Each virtual network connects to only one Azure Monitor Private Link Scope object.
+* AMPLS A connects to two workspaces and one Application Insights component, by using three of the 50 possible Azure Monitor resource connections.
+* Workspace 2 connects to AMPLS A and AMPLS B, by using two of the five possible Azure Monitor Private Link Scope connections.
+* AMPLS B is connected to private endpoints of two virtual networks (VNet2 and VNet3), by using two of the 10 possible private endpoint connections.
+![Diagram of Azure Monitor Private Link Scope limits.](./media/private-link-security/ampls-limits.png)
### Application Insights considerations
-* You'll need to add resources hosting the monitored workloads to a private link. For example, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
-* Non-portal consumption experiences must also run on the private-linked VNET that includes the monitored workloads.
-* In order to support Private Links for Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md)
+
+* You'll need to add resources that host the monitored workloads to a Private Link instance. For an example, see [Using Private Endpoints for Azure Web Apps](../../app-service/networking/private-endpoint.md).
+* Non-portal consumption experiences must also run on the connected virtual network that includes the monitored workloads.
+* To support Private Link connections for Profiler and Debugger, you'll need to [provide your own storage account](../app/profiler-bring-your-own-storage.md).
> [!NOTE]
-> To fully secure workspace-based Application Insights, you need to lock down both access to Application Insights resource as well as the underlying Log Analytics workspace.
+> To fully secure workspace-based Application Insights, you need to lock down access to both Application Insights resources and the underlying Log Analytics workspace.
### Log Analytics considerations+ #### Automation
-If you use Log Analytics solutions that require an Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
-#### Log Analytics solution packs download
-Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created at or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution packs storage over the private link. This capability is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
+If you use Log Analytics solutions that require an Azure Automation account, such as Update Management, Change Tracking, or Inventory, you should also set up a separate Private Link connection for your Automation account. For more information, see [Use Azure Private Link to securely connect networks to Azure Automation](../../automation/how-to/private-link-security.md).
+
+#### Log Analytics solution packs
-If your Private Link setup was created before April 19, 2021, it won't reach the solution packs storage over a private link. To handle that you can either:
-* Re-create your AMPLS and the Private Endpoint connected to it
+Log Analytics agents need to access a global storage account to download solution packs. Private Link setups created on or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds) can reach the agents' solution pack storage over the Private Link connection. This capability is made possible through the new DNS zone created for [blob.core.windows.net](#privatelink-blob-core-windows-net).
+
+If your Private Link setup was created before April 19, 2021, it won't reach the solution pack storage over a Private Link connection. To handle that, you can either:
+* Re-create your Azure Monitor Private Link Scope and the private endpoint that's connected to it.
* Allow your agents to reach the storage account through its public endpoint, by adding the following rules to your firewall allowlist:
- | Cloud environment | Agent Resource | Ports | Direction |
+ | Cloud environment | Agent resource | Ports | Direction |
   |:--|:--|:--|:--|
   |Azure Public | scadvisorcontent.blob.core.windows.net | 443 | Outbound |
   |Azure Government | usbn1oicore.blob.core.usgovcloudapi.net | 443 | Outbound |
   |Azure China 21Vianet | mceast2oicore.blob.core.chinacloudapi.cn | 443 | Outbound |

## Private Link connection setup

Start by creating an Azure Monitor Private Link Scope resource.

1. Go to **Create a resource** in the Azure portal and search for **Azure Monitor Private Link Scope**.
- ![Find Azure Monitor Private Link Scope](./media/private-link-security/ampls-find-1c.png)
+ ![Screenshot that shows finding Azure Monitor Private Link Scope.](./media/private-link-security/ampls-find-1c.png)
-2. Select **create**.
-3. Pick a Subscription and Resource Group.
-4. Give the AMPLS a name. It's best to use a meaningful and clear name, such as "AppServerProdTelem".
-5. Select **Review + Create**.
+2. Select **Create**.
+3. Choose a subscription and resource group.
+4. Give the Azure Monitor Private Link Scope a name. It's best to use a meaningful and clear name, such as **AppServerProdTelem**.
+5. Select **Review + create**.
- ![Create Azure Monitor Private Link Scope](./media/private-link-security/ampls-create-1d.png)
+ ![Screenshot that shows selections for creating an Azure Monitor Private Link Scope.](./media/private-link-security/ampls-create-1d.png)
-6. Let the validation pass, and then select **Create**.
+6. After validation, select **Create**.
### Connect Azure Monitor resources
-Connect Azure Monitor resources (Log Analytics workspaces and Application Insights components) to your AMPLS.
+Connect Azure Monitor resources (Log Analytics workspaces and Application Insights components) to your Azure Monitor Private Link Scope.
-1. In your Azure Monitor Private Link scope, select **Azure Monitor Resources** in the left-hand menu. Select the **Add** button.
-2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can type in their name to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
+1. In your Azure Monitor Private Link Scope, select **Azure Monitor Resources** on the left menu. Select **Add**.
+2. Add the workspace or component. Selecting the **Add** button brings up a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups, or you can enter their names to filter down to them. Select the workspace or component, and select **Apply** to add them to your scope.
- ![Screenshot of select a scope UX](./media/private-link-security/ampls-select-2.png)
+ ![Screenshot of the interface for selecting a scope.](./media/private-link-security/ampls-select-2.png)
> [!NOTE]
-> Deleting Azure Monitor resources requires that you first disconnect them from any AMPLS objects they are connected to. It's not possible to delete resources connected to an AMPLS.
+> Deleting Azure Monitor resources requires that you first disconnect them from any Azure Monitor Private Link Scope objects that they're connected to. It's not possible to delete resources that are connected to an Azure Monitor Private Link Scope.
### Connect to a private endpoint
-Now that you have resources connected to your AMPLS, create a private endpoint to connect our network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints), or inside your Azure Monitor Private Link Scope, as done in this example.
+Now that you have resources connected to your Azure Monitor Private Link Scope, create a private endpoint to connect your network. You can do this task in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) or inside your Azure Monitor Private Link Scope. This example uses the Azure Monitor Private Link Scope:
-1. In your scope resource, select **Private Endpoint connections** in the left-hand resource menu. Select **Private Endpoint** to start the endpoint create process. You can also approve connections that were started in the Private Link center here by selecting them and selecting **Approve**.
+1. In your scope resource, select **Private Endpoint connections** on the left resource menu. Select **Private Endpoint** to start the endpoint creation process. You can also approve connections that were started in the Private Link center here by selecting them and then selecting **Approve**.
- ![Screenshot of Private Endpoint Connections UX](./media/private-link-security/ampls-select-private-endpoint-connect-3.png)
+ ![Screenshot of the interface for setting up private endpoint connections.](./media/private-link-security/ampls-select-private-endpoint-connect-3.png)
-2. Pick the subscription, resource group, and name of the endpoint, and the region it should live in. The region needs to be the same region as the VNet you connect it to.
+2. Choose the subscription, resource group, and name of the endpoint. Choose a region that matches the region of the virtual network that you'll connect the endpoint to.
3. Select **Next: Resource**.
-4. In the Resource screen,
+4. On the **Resource** tab:
- a. Pick the **Subscription** that contains your Azure Monitor Private Scope resource.
+ a. For **Subscription**, select the subscription that contains your Azure Monitor Private Scope resource.
- b. For **resource type**, choose **Microsoft.insights/privateLinkScopes**.
+ b. For **Resource type**, select **Microsoft.Insights/privateLinkScopes**.
- c. From the **resource** drop-down, choose your Private Link scope you created earlier.
+ c. For **Resource**, select the Azure Monitor Private Link Scope that you created earlier.
- d. Select **Next: Configuration >**.
- ![Screenshot of select Create Private Endpoint](./media/private-link-security/ampls-select-private-endpoint-create-4.png)
+ ![Screenshot of resource selections for creating a private endpoint.](./media/private-link-security/ampls-select-private-endpoint-create-4.png)
-5. On the configuration pane,
+ d. Select **Next: Configuration >**.
- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Monitor resources.
-
- b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
- > [!NOTE]
- > If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the AMPLS configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Monitor.
-
- c. Select **Review + create**.
+5. On the **Configuration** tab:
+
+ a. Choose the virtual network and subnet that you want to connect to your Azure Monitor resources.
- d. Let validation pass.
+ b. For **Integrate with private DNS zone**, select **Yes** to automatically create a new private DNS zone. The actual DNS zones might be different from what's shown in the following screenshot.
+
+ > [!NOTE]
+ > If you select **No** and prefer to manage DNS records manually, first complete setting up Private Link - including this private endpoint and the Azure Monitor Private Link Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md).
+ >
+ > Make sure not to create empty records as preparation for your Private Link setup. The DNS records that you create can override existing settings and affect your connectivity with Azure Monitor.
- e. Select **Create**.
-
- ![Screenshot of select Private Endpoint details.](./media/private-link-security/ampls-select-private-endpoint-create-5.png)
+ c. Select **Review + create**.
-You've now created a new private endpoint that is connected to this AMPLS.
+ ![Screenshot of selections for configuring a private endpoint.](./media/private-link-security/ampls-select-private-endpoint-create-5.png)
+
+ d. After validation, select **Create**.
+You've now created a new private endpoint that's connected to this Azure Monitor Private Link Scope.
## Configure access to your resources
-So far we covered the configuration of your network, but you should also consider how you want to configure network access to your monitored resources - Log Analytics workspaces and Application Insights components.
-Go to the Azure portal. In your resource's menu, there's a menu item called **Network Isolation** on the left-hand side. This page controls both which networks can reach the resource through a Private Link, and whether other networks can reach it or not.
+We've now covered the configuration of your network. You should also consider how you want to configure network access to your monitored resources: Log Analytics workspaces and Application Insights components.
+Go to the Azure portal. On your resource's menu, an item called **Network Isolation** is on the left side. This page controls both which networks can reach the resource through Private Link and whether other networks can reach it.
> [!NOTE]
-> Starting August 16, 2021, Network Isolation will be strictly enforced. Resources set to block queries from public networks, and that aren't associated with an AMPLS, will stop accepting queries from any network.
+> Starting August 16, 2021, network isolation will be strictly enforced. Resources that are set to block queries from public networks, and that aren't associated with an Azure Monitor Private Link Scope, will stop accepting queries from any network.
+
+![Screenshot that shows network isolation.](./media/private-link-security/ampls-network-isolation.png)
+
+### Connected Azure Monitor Private Link Scopes
-![LA Network Isolation](./media/private-link-security/ampls-network-isolation.png)
+On the Azure portal page for network isolation, in the **Azure Monitor Private Links Scopes** area, you can review and configure the resource's connections to Azure Monitor Private Link Scopes. Connecting to Azure Monitor Private Link Scopes allows traffic from the virtual network connected to each Azure Monitor Private Link Scope to reach this resource. It has the same effect as connecting a resource from the scope as we did in [Connect Azure Monitor resources](#connect-azure-monitor-resources).
-### Connected Azure Monitor Private Link scopes
-Here you can review and configure the resource's connections to Azure Monitor Private Links scopes. Connecting to scopes (AMPLSs) allows traffic from the virtual network connected to each AMPLS to reach this resource, and has the same effect as connecting it from the scope as we did in [Connecting Azure Monitor resources](#connect-azure-monitor-resources). To add a new connection, select **Add** and select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to 5 AMPLS objects, as mentioned in [Restrictions and limitations](#restrictions-and-limitations).
+To add a new connection, select **Add**, and then select the Azure Monitor Private Link Scope. Select **Apply** to connect it. Your resource can connect to five Azure Monitor Private Link Scope objects, as mentioned in [Azure Monitor Private Link Scope limits](#azure-monitor-private-link-scope-limits).
-### Virtual networks access configuration - Managing access from outside of private links scopes
-The settings on the bottom part of this page control access from public networks, meaning networks not connected to the listed scopes (AMPLSs).
+### Managing access from outside Azure Monitor Private Link Scopes
-If you set **Allow public network access for ingestion** to **No**, then clients (machines, SDKs, etc.) outside of the connected scopes can't upload data or send logs to this resource.
+The bottom part of the Azure portal page for network isolation is the **Virtual networks access configuration** area. These settings control access from public networks, meaning networks not connected to the listed Azure Monitor Private Link Scopes.
-If you set **Allow public network access for queries** to **No**, then clients (machines, SDKs etc.) outside of the connected scopes can't query data in this resource. That data includes access to logs, metrics, and the live metrics stream, as well as experiences built on top such as workbooks, dashboards, query API-based client experiences, insights in the Azure portal, and more. Experiences running outside the Azure portal and that query Log Analytics data also have to be running within the private-linked VNET.
+If you set **Accept data ingestion from public networks not connected through a Private Link Scope** to **No**, then clients (such as machines and SDKs) outside the connected scopes can't upload data or send logs to this resource.
+If you set **Accept queries from public networks not connected through a Private Link Scope** to **No**, then clients (such as machines and SDKs) outside the connected scopes can't query data in this resource. That data includes access to logs, metrics, and the live metrics stream. It also includes experiences built on top, such as workbooks, dashboards, query API-based client experiences, and insights in the Azure portal. Experiences that run outside the Azure portal and that query Log Analytics data also have to be running within the virtual network that uses Private Link.
### Exceptions #### Diagnostic logs
-Logs and metrics uploaded to a workspace via [Diagnostic Settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel, and are not controlled by these settings.
+
+Logs and metrics uploaded to a workspace via [diagnostic settings](../essentials/diagnostic-settings.md) go over a secure private Microsoft channel and are not controlled by these settings.
#### Azure Resource Manager
-Restricting access as explained above applies to data in the resource. However, configuration changes, including turning these access settings on or off, are managed by Azure Resource Manager. To control these settings, you should restrict access to this resources using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor Roles, Permissions, and Security](../roles-permissions-security.md)
-Additionally, specific experiences (such as the LogicApp connector, Update Management solution and the Workspace Summary blade in the portal, showing the solutions dashboard) query data through Azure Resource Manager and therefore won't be able to query data unless Private Link settings are applied to the Resource Manager as well.
+Restricting access as explained earlier applies to data in the resource. However, Azure Resource Manager manages configuration changes, including turning these access settings on or off. To control these settings, you should restrict access to resources by using the appropriate roles, permissions, network controls, and auditing. For more information, see [Azure Monitor roles, permissions, and security](../roles-permissions-security.md).
+Additionally, specific experiences (such as the Azure Logic Apps connector, the Update Management solution, and the **Workspace Summary** pane in the portal, showing the solution dashboard) query data through Azure Resource Manager. These experiences won't be able to query data unless Private Link settings are also applied to Resource Manager.
## Review and validate your Private Link setup
-### Reviewing your Endpoint's DNS settings
-The Private Endpoint you created should now have an five DNS zones configured:
+### Reviewing your endpoint's DNS settings
+The private endpoint that you created should now have five DNS zones configured:
* privatelink-monitor-azure-com * privatelink-oms-opinsights-azure-com
The Private Endpoint you created should now have an five DNS zones configured:
* privatelink-blob-core-windows-net > [!NOTE]
-> Each of these zones maps specific Azure Monitor endpoints to private IPs from the VNet's pool of IPs. The IP addresses showns in the below images are only examples. Your configuration should instead show private IPs from your own network.
+> Each of these zones maps specific Azure Monitor endpoints to private IPs from the virtual network's pool of IPs. The IP addresses shown in the following images are only examples. Your configuration should instead show private IPs from your own network.
+
+#### privatelink-monitor-azure-com
+
+This zone covers the global endpoints that Azure Monitor uses. These endpoints serve requests that consider all resources, not a specific one. This zone should have endpoints mapped for:
-#### Privatelink-monitor-azure-com
-This zone covers the global endpoints used by Azure Monitor, meaning these endpoints serve requests considering all resources, not a specific one. This zone should have endpoints mapped for:
-* `in.ai` - Application Insights ingestion endpoint (both a global and a regional entry)
-* `api` - Application Insights and Log Analytics API endpoint
-* `live` - Application Insights live metrics endpoint
-* `profiler` - Application Insights profiler endpoint
-* `snapshot` - Application Insights snapshots endpoint
-[![Screenshot of Private DNS zone monitor-azure-com.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
+* `in.ai`: Application Insights ingestion endpoint (both a global and a regional entry).
+* `api`: Application Insights and Log Analytics API endpoint.
+* `live`: Application Insights live metrics endpoint.
+* `profiler`: Application Insights profiler endpoint.
+* `snapshot`: Application Insights snapshot endpoint.
+
+[![Screenshot of the private D N S zone for global endpoints.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded.png#lightbox)
#### privatelink-oms-opinsights-azure-com
-This zone covers workspace-specific mapping to OMS endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone oms-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
+
+This zone covers workspace-specific mapping to Operations Management Suite (OMS) endpoints. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope that's connected with this private endpoint.
+
+[![Screenshot of the private D N S zone for mapping to O M S endpoints.](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-oms-opinsights-azure-com-expanded.png#lightbox)
#### privatelink-ods-opinsights-azure-com
-This zone covers workspace-specific mapping to ODS endpoints - the ingestion endpoint of Log Analytics. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone ods-opinsights-azure-com.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
+
+This zone covers workspace-specific mapping to Operational Data Store (ODS) endpoints, which are the ingestion endpoints of Log Analytics. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope connected with this private endpoint.
+
+[![Screenshot of the private D N S zone for mapping to O D S endpoints.](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com.png)](./media/private-link-security/dns-zone-privatelink-ods-opinsights-azure-com-expanded.png#lightbox)
#### privatelink-agentsvc-azure-automation-net
-This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the AMPLS connected with this Private Endpoint.
-[![Screenshot of Private DNS zone agent svc-azure-automation-net.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
+
+This zone covers workspace-specific mapping to the agent service automation endpoints. You should see an entry for each workspace linked to the Azure Monitor Private Link Scope connected with this private endpoint.
+
+[![Screenshot of the private D N S zone for mapping to agent service automation endpoints.](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net.png)](./media/private-link-security/dns-zone-privatelink-agentsvc-azure-automation-net-expanded.png#lightbox)
#### privatelink-blob-core-windows-net
-This zone configures connectivity to the global agents' solution packs storage account. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle to Log Analytics agents, no matter how many workspaces are used.
-[![Screenshot of Private DNS zone blob-core-windows-net.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
+
+This zone configures connectivity to the global agents' storage account for solution packs. Through it, agents can download new or updated solution packs (also known as management packs). Only one entry is required to handle Log Analytics agents, no matter how many workspaces are used.
+
+[![Screenshot of the private D N S zone for connectivity to the global agents' storage account for solution packs.](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net.png)](./media/private-link-security/dns-zone-privatelink-blob-core-windows-net-expanded.png#lightbox)
+ > [!NOTE]
-> This entry is only added to Private Links setups created at or after April 19, 2021 (or starting June, 2021 on Azure Sovereign clouds).
+> This entry is only added to Private Link setups created on or after April 19, 2021 (or starting June 2021 on Azure Sovereign clouds).
+
+### Validating that you're communicating over a Private Link connection
+To validate that your requests are now sent through the private endpoint, you can review them with a network tracking tool or even your browser. For example, when you're trying to query your workspace or application, make sure the request is sent to the private IP that's mapped to the API endpoint. In this example, it's 172.17.0.9.
-### Validating you are communicating over a Private Link
-* To validate your requests are now sent through the Private Endpoint, you can review them with a network tracking tool or even your browser. For example, when attempting to query your workspace or application, make sure the request is sent to the private IP mapped to the API endpoint, in this example it's *172.17.0.9*.
+> [!NOTE]
+> Some browsers might use [other DNS settings](#browser-dns-settings). Make sure that your DNS settings apply.
- Note: Some browsers may use other DNS settings (see [Browser DNS settings](#browser-dns-settings)). Make sure your DNS settings apply.
+To make sure that your workspace or component isn't receiving requests from public networks (not connected through an Azure Monitor Private Link Scope), set the resource's public ingestion and query flags to **No** as explained in [Configure access to your resources](#configure-access-to-your-resources).
-* To make sure your workspace or component aren't receiving requests from public networks (not connected through AMPLS), set the resource's public ingestion and query flags to *No* as explained in [Configure access to your resources](#configure-access-to-your-resources).
+From a client on your protected network, use `nslookup` for any of the endpoints listed in your DNS zones. Your DNS server should resolve it to the mapped private IPs instead of the public IPs that are used by default.
-* From a client on your protected network, use `nslookup` to any of the endpoints listed in your DNS zones. It should be resolved by your DNS server to the mapped private IPs instead of the public IPs used by default.
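For example, a quick check from a client inside the protected network might look like the following. The workspace ID and the returned address are illustrative placeholders; your endpoints and private IPs will differ.

```bash
# Resolve a workspace-specific ODS (ingestion) endpoint from inside the virtual network.
# A correctly configured private DNS zone should return a private IP from your VNet's address range.
nslookup <workspace-id>.ods.opinsights.azure.com

# Expected shape of the answer (illustrative only):
#   Name:    <workspace-id>.privatelink.ods.opinsights.azure.com
#   Address: 172.17.0.10
```

If the command returns a public IP instead, revisit the DNS configuration of your private endpoint.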
+## Use APIs and the command line
+You can automate the process described earlier by using Azure Resource Manager templates, REST, and command-line interfaces.
-## Use APIs and command line
+To create and manage Azure Monitor Private Link Scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or the [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
-You can automate the process described earlier using Azure Resource Manager templates, REST, and command-line interfaces.
+To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]` on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
-To create and manage private link scopes, use the [REST API](/rest/api/monitor/privatelinkscopes(preview)/private%20link%20scoped%20resources%20(preview)) or [Azure CLI (az monitor private-link-scope)](/cli/azure/monitor/private-link-scope).
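As a rough sketch of the CLI flow (the resource names are placeholders, and parameter names can vary slightly across CLI versions), creating a scope, connecting a workspace to it, and disabling public access might look like this:

```azurecli
# Create an Azure Monitor Private Link Scope (AMPLS).
az monitor private-link-scope create --name "my-scope" --resource-group "my-rg"

# Connect a Log Analytics workspace to the scope as a scoped resource.
az monitor private-link-scope scoped-resource create \
  --name "my-workspace-connection" \
  --resource-group "my-rg" \
  --scope-name "my-scope" \
  --linked-resource "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

# Block ingestion and queries that arrive over public networks.
az monitor log-analytics workspace update \
  --resource-group "my-rg" \
  --workspace-name "my-workspace" \
  --ingestion-access Disabled \
  --query-access Disabled
```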
+### Example Azure Resource Manager template
-To manage the network access flag on your workspace or component, use the flags `[--ingestion-access {Disabled, Enabled}]` and `[--query-access {Disabled, Enabled}]`on [Log Analytics workspaces](/cli/azure/monitor/log-analytics/workspace) or [Application Insights components](/cli/azure/ext/application-insights/monitor/app-insights/component).
+The following Azure Resource Manager template creates:
-### Example Azure Resource Manager (ARM) template
-The below Azure Resource Manager template creates:
-* A private link scope (AMPLS) named "my-scope"
-* A Log Analytics workspace named "my-workspace"
-* Add a scoped resource to the "my-scope" AMPLS, named "my-workspace-connection"
+* An Azure Monitor Private Link Scope named *my-scope*.
+* A Log Analytics workspace named *my-workspace*.
+* A scoped resource named *my-workspace-connection* added to the *my-scope* Azure Monitor Private Link Scope.
``` {
The below Azure Resource Manager template creates:
} ```
-## Collect custom logs and IIS log over Private Link
+## Collect custom logs and IIS logs over Private Link
-Storage accounts are used in the ingestion process of custom logs. By default, service-managed storage accounts are used. However to ingest custom logs on private links, you must use your own storage accounts and associate them with Log Analytics workspace(s). See more details on how to set up such accounts using the [command line](/cli/azure/monitor/log-analytics/workspace/linked-storage).
+Storage accounts are used in the ingestion process of custom logs. By default, the process uses service-managed storage accounts. To ingest custom logs on Private Link, you must use your own storage accounts and associate them with Log Analytics workspaces. You can set up such accounts by using the [command line](/cli/azure/monitor/log-analytics/workspace/linked-storage).
-For more information on bringing your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md)
+For more information on bringing your own storage account, see [Customer-owned storage accounts for log ingestion](private-storage.md).
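As a minimal sketch (the resource names and IDs are placeholders), linking your own storage account for custom log ingestion might look like this:

```azurecli
# Link a customer-owned storage account to a workspace for custom log ingestion.
az monitor log-analytics workspace linked-storage create \
  --resource-group "my-rg" \
  --workspace-name "my-workspace" \
  --type CustomLogs \
  --storage-accounts "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```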
## Restrictions and limitations
-### AMPLS
+### Azure Monitor Private Link Scope
-The AMPLS object has a number of limits you should consider when planning your Private Link setup. See [AMPLS limits](#ampls-limits) for a deeper review of these limits.
+When you're planning your Private Link setup, consider the [limits on the Azure Monitor Private Link Scope object](#azure-monitor-private-link-scope-limits).
### Agents
-The latest versions of the Windows and Linux agents must be used to support secure ingestion to Log Analytics workspaces. Older versions can't upload monitoring data over a private network.
+To support secure ingestion to Log Analytics workspaces, you must use the latest versions of the Windows and Linux agents. Older versions can't upload monitoring data over a private network.
-**Log Analytics Windows agent**
+- **Log Analytics Windows agent**: Use Log Analytics agent version 10.20.18038.0 or later.
+- **Log Analytics Linux agent**: Use agent version 1.12.25 or later. If you can't, run the following commands on your VM:
-Use the Log Analytics agent version 10.20.18038.0 or later.
-
-**Log Analytics Linux agent**
-
-Use agent version 1.12.25 or later. If you can't, run the following commands on your VM.
-
-```cmd
-$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
-$ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
-```
+   ```bash
+ $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -X
+ $ sudo /opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <workspace key>
+ ```
### Azure portal
-To use Azure Monitor portal experiences such as Application Insights and Log Analytics, you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add **AzureActiveDirectory**, **AzureResourceManager**, **AzureFrontDoor.FirstParty**, and **AzureFrontdoor.Frontend** [service tags](../../firewall/service-tags.md) to your Network Security Group.
+To use Azure Monitor portal experiences such as Application Insights and Log Analytics, you need to allow the Azure portal and Azure Monitor extensions to be accessible on the private networks. Add `AzureActiveDirectory`, `AzureResourceManager`, `AzureFrontDoor.FirstParty`, and `AzureFrontdoor.Frontend` [service tags](../../firewall/service-tags.md) to your network security group.
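For example, an outbound network security group rule that allows these service tags might be created roughly as follows. The rule name, priority, and port are assumptions to adjust for your environment.

```azurecli
# Allow outbound HTTPS to the service tags needed by the Azure portal and Azure Monitor extensions.
az network nsg rule create \
  --resource-group "my-rg" \
  --nsg-name "my-nsg" \
  --name "Allow-AzureMonitor-Portal" \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureActiveDirectory AzureResourceManager AzureFrontDoor.FirstParty AzureFrontdoor.Frontend
```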
### Querying data
-The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link, as it reads data from storage accounts but doesn't guarantee the storage is accessed privately.
+
+The [`externaldata` operator](/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuremonitor) isn't supported over a Private Link connection. It reads data from storage accounts but doesn't guarantee that the storage is accessed privately.
### Programmatic access
-To use the REST API, [CLI](/cli/azure/monitor) or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) **AzureActiveDirectory** and **AzureResourceManager** to your firewall.
+To use the REST API, the [CLI](/cli/azure/monitor), or PowerShell with Azure Monitor on private networks, add the [service tags](../../virtual-network/service-tags-overview.md) `AzureActiveDirectory` and `AzureResourceManager` to your firewall.
### Application Insights SDK downloads from a content delivery network
-Bundle the JavaScript code in your script so that the browser doesn't attempt to download code from a CDN. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup)
+Bundle the JavaScript code in your script so that the browser doesn't try to download code from a content delivery network. An example is provided on [GitHub](https://github.com/microsoft/ApplicationInsights-JS#npm-setup-ignore-if-using-snippet-setup).
### Browser DNS settings
-If you're connecting to your Azure Monitor resources over a Private Link, traffic to these resources must go through the private endpoint that is configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](#connect-to-a-private-endpoint). Some browsers use their own DNS settings instead of the ones you set. The browser might attempt to connect to Azure Monitor public endpoints and bypass the Private Link entirely. Verify that your browsers settings don't override or cache old DNS settings.
+If you're connecting to your Azure Monitor resources over a Private Link connection, traffic to these resources must go through the private endpoint that's configured on your network. To enable the private endpoint, update your DNS settings as explained in [Connect to a private endpoint](#connect-to-a-private-endpoint).
+
+Some browsers use their own DNS settings instead of the ones you set. The browser might try to connect to Azure Monitor public endpoints and bypass the Private Link connection entirely. Verify that your browser's settings don't override or cache old DNS settings.
## Next steps -- Learn about [private storage](private-storage.md)
+- Learn about [private storage](private-storage.md).
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 07/27/2021 Last updated : 08/09/2021 # FAQs About Azure NetApp Files
Yes, you can create [custom Azure policies](../governance/policy/tutorials/creat
However, you cannot create Azure policies (custom naming policies) on the Azure NetApp Files interface. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#considerations).
+### When I delete an Azure NetApp Files volume, is the data deleted safely?
+
+Deletion of an Azure NetApp Files volume is performed programmatically in the backend (physical infrastructure layer) with immediate effect. The delete operation includes deleting the keys used for encrypting data at rest. After the delete operation is executed successfully (through interfaces such as the Azure portal and the API), there is no way to recover the deleted volume.
+ ## Performance FAQs ### What should I do to optimize or tune Azure NetApp Files performance?
azure-netapp-files Cross Region Replication Create Peering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/cross-region-replication-create-peering.md
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 08/09/2021 # Create volume replication for Azure NetApp Files
To authorize the replication, you need to obtain the resource ID of the replicat
6. In the Authorize field, paste the destination replication volume resource ID that you obtained in Step 3, then click **OK**.
+ > [!NOTE]
+ > There's likely a difference between the used space of the source volume and the used space of the destination volume. <!-- ANF-14038 -->
+ ## Next steps * [Cross-region replication](cross-region-replication-introduction.md)
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 06/21/2021 Last updated : 08/10/2021
You can change the default settings of the Azure portal to meet your own prefere
Most settings are available from the **Settings** menu in the top right section of global page header. > [!NOTE] > We're in the process of moving all users to the newest settings experience described in this topic. For information about the older experience, see [Manage Azure portal settings and preferences (older version)](original-preferences.md).
-## Settings overview
+## Directories + subscriptions
-The settings **Overview** pane shows key settings in one glance and lets you switch directories or view and activate subscription filters.
+The **Directories + subscriptions** page lets you manage directories and set subscription filters.
-In the **Directories** section, you can switch to a recently used directory by selecting **Switch** next to the desired directory. For a full list of directories to which you have access, select **See all**.
+### Switch and manage directories
-If you've [opted in to the new subscription filtering experience](#opt-into-the-new-subscription-filtering-experience), you can change the active filter to view the subscriptions or resources of your choice in the *Subscriptions + filters** section. To view all available filters, select **See all**.
+In the **Directories** section, you'll see your **Current directory** (which you're currently signed in to).
-To change other settings, select any of the items in the pane, or select an item in the left menu bar. You can also use the search menu near the top of the screen to find a setting.
+The **Startup directory** shows the default directory that's used when you sign in to the Azure portal. To choose a different startup directory, select **change** to go to the [Appearance + startup views](#appearance--startup-views) page, where you can set this option.
-
-## Directories
-
-In **Directories**, you can select **All Directories** to see a full list of directories to which you have access.
+To see a full list of directories to which you have access, select **All Directories**.
To mark a directory as a favorite, select its star icon. Those directories will be listed in the **Favorites** section.
-To switch to a different directory, select the directory that you want to work in, then select the **Switch** button near the bottom of the screen. You'll be prompted to confirm before switching. If you'd like the new directory to be the default directory whenever you sign in to the Azure portal, you can select the box to make it your startup directory.
+To switch to a different directory, select the directory that you want to work in, then select the **Switch** button in its row.
-## Subscriptions + filters
+### Subscription filters
-You can choose the subscriptions that are filtered by default when you sign in to the Azure portal by selecting the directory and subscription filter icon in the global page header. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.
+You can choose the subscriptions that are filtered by default when you sign in to the Azure portal. This can be helpful if you have a primary list of subscriptions you work with but use others occasionally.
+To use customized filters, select **Advanced filters**. You'll be prompted to confirm before continuing.
-### Opt into the new subscription filtering experience
-The new subscription filtering experience can help you manage large numbers of subscriptions. You can opt in to this experience at any time when you select the directory and subscription filter icon. If you decide to return to the [previous experience](original-preferences.md#choose-your-default-subscription), you can do so from the **Subscriptions + filters** pane.
+This will enable the **Advanced filters** page, where you can create and manage multiple subscription filters. Any currently selected subscriptions will be saved as an imported filter that you can use again. If you want to stop using advanced filters, select the toggle again to restore the default subscription view. Any custom filters you've created will be saved and will be available to use if you enable **Advanced filters** in the future.
-> [!IMPORTANT]
-> If you have access to delegated subscriptions through [Azure Lighthouse](../lighthouse/overview.md), be sure that all directories and subscriptions are selected before you select the **Try it now** link, or else the new experience may not show all of the subscriptions to which you have access. If that happens, you can select **Switch back to the previous view** in the **Subscriptions + filters** pane, then repeat the opt in process with all directories and subscriptions selected. For more information, see [Work in the context of a delegated subscription](../lighthouse/how-to/view-manage-customers.md#work-in-the-context-of-a-delegated-subscription).
+## Advanced filters
-In the new experience, the **Subscriptions + filters** pane lets you create customized filters. When you activate one of your filters, the full portal experience will be scoped to show only the subscriptions to which the filter applies. You can do this by selecting **Activate** in the **Subscription + filters** pane, or in the **Subscriptions + filters** section of the overview pane.
+On the **Advanced filters** page, you can create, modify, or delete subscription filters.
The **Default** filter shows all subscriptions to which you have access. This filter is used if there are no other filters, or when the active filter fails to include any subscriptions.
-You'll also see a filter named **Imported-filter**, which includes all subscriptions that had been selected before opting in to the new filtering experience.
+You may also see a filter named **Imported-filter**, which includes all subscriptions that had been selected previously.
+
+To change the filter that is currently in use, select that filter from the **Advanced filter** drop-down box. You can also select **Modify advanced filters** to go to the **Advanced filters** page, where you can create, modify, and delete your filters.
### Create a filter
-To create additional filters of your choice, select **Create a filter** in the **Subscriptions + filters** pane. You can create up to ten filters.
+To create a new filter, select **Create a filter**. You can create up to ten filters.
Each filter must have a unique name that is between 8 and 50 characters long and contains only letters, numbers, and hyphens. After you've named your filter, enter at least one condition. In the **Filter type** field, select either **Subscription name**, **Subscription ID**, or **Subscription state**. Then select an operator and enter a value to filter on.
-When you're finished adding conditions, select **Create**. Your filter will then appear in the list in **Subscriptions + filters**.
+When you're finished adding conditions, select **Create**. Your filter will then appear in the list in **Active filters**.
### Modify or delete a filter
You can modify or rename an existing filter by selecting the pencil icon in that
To delete a filter, select the trash can icon in that filter's row. You can't delete the **Default** filter or any filter that is currently active.
-## Appearance
+## Appearance + startup views
-The **Appearance** pane lets you choose menu behavior, your color theme, and whether to use a high-contrast theme.
+The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme. The **Startup views** section lets you set options for what you see when you first sign in to the Azure portal.
### Set menu behavior
The theme that you choose affects the background and font colors that appear in
Alternatively, you can choose a theme from the **High contrast theme** section. These themes can make the Azure portal easier to read, especially if you have a visual impairment. Selecting either the white or black high-contrast theme will override any other theme selections.
-## Startup views
-
-This pane allows you to set options for what you see when you first sign in to the Azure portal.
-- ### Startup page Choose one of the following options for the page you'll see when you first sign in to the Azure portal.
Choose one of the following options for the directory to work in when you first
- **Sign in to your last visited directory**: When you sign in to the Azure portal, you'll start in whichever directory you'd been working in last time. - **Select a directory**: Choose this option to select one of your directories. You'll start in that directory every time you sign in to the Azure portal, even if you had been working in a different directory last time. + ## Language + region Choose your language and the regional format that will influence how data such as dates and currency will appear in the Azure portal. > [!NOTE] > These language and regional settings affect only the Azure portal. Documentation links that open in a new tab or window use your browser's settings to determine the language to display.
The options shown in the **Regional format** drop-down list changes based on the
Select **Apply** to update your language and regional format settings.
-## Contact information
+## My information
+
+The **My information** page lets you update the email address that is used for updates on Azure services, billing, support, or security issues. You can also opt in to or out of additional emails about Microsoft Azure and other products and services.
-This pane lets you update the email address that is used for updates on Azure services, billing, support, or security issues.
+Near the top of the **My information** page, you'll see options to export, restore, or delete settings.
-You can also opt in or out from additional emails about Microsoft Azure and other products and services on this page.
+
+### Export user settings
+
+Information about your custom settings is stored in Azure. You can export the following user data:
+
+- Private dashboards in the Azure portal
+- User settings like favorite subscriptions or directories
+- Themes and other custom portal settings
+
+It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
+
+To export your portal settings, select **Export settings** from the top of the **My information** page. This creates a *.json* file that contains your user settings data.
+
+Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
+
+### Restore default settings
+
+If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the **My information** page. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
+
+### Delete user settings and dashboards
+
+Information about your custom settings is stored in Azure. You can delete the following user data:
+
+- Private dashboards in the Azure portal
+- User settings like favorite subscriptions or directories
+- Themes and other custom portal settings
+
+It's a good idea to export and review your settings before you delete them. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
++
+To delete your portal settings, select **Delete all settings and private dashboards** from the top of the **My information** page. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
## Signing out + notifications This pane lets you manage pop-up notifications and session timeouts. ### Signing out
The inactivity timeout setting helps to protect resources from unauthorized acce
In the drop-down menu next to **Sign me out when inactive**, choose the duration after which your Azure portal session is signed out if you're idle. Select **Apply** to save your changes. After that, if you're inactive during the portal session, Azure portal will sign out after the duration you set. If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's shorter than the directory-level setting. To do so, select **Override the directory inactivity timeout policy**, then enter a time interval for the **Override value**. ### Change the directory timeout setting (admin)
Admins in the [Global Administrator role](../active-directory/roles/permissions-
If you're a Global Administrator, and you want to enforce an idle timeout setting for all users of the Azure portal, select **Enable directory level idle timeout** to turn on the setting. Next, enter the **Hours** and **Minutes** for the maximum time that a user can be inactive before their session is automatically signed out. After you select **Apply**, this setting will apply to all users in the directory. To confirm that the inactivity timeout policy is set correctly, select **Notifications** from the global page header and verify that a success notification is listed. ### Enable or disable pop-up notifications
To read all notifications received during your current session, select **Notific
To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log.md#view-the-activity-log).
-## Export, restore, or delete settings
-
-The settings **Overview** pane lets you export, restore, or delete settings.
--
-### Export user settings
-
-Information about your custom settings is stored in Azure. You can export the following user data:
--- Private dashboards in the Azure portal-- User settings like favorite subscriptions or directories-- Themes and other custom portal settings-
-It's a good idea to export and review your settings if you plan to delete them. Rebuilding dashboards or redoing settings can be time-consuming.
-
-To export your portal settings, select **Export settings** from the top of the settings **Overview** pane. This creates a *.json* file that contains your user settings data.
-
-Due to the dynamic nature of user settings and risk of data corruption, you can't import settings from the *.json* file.
-
-### Restore default settings
-
-If you've made changes to the Azure portal settings and want to discard them, select **Restore default settings** from the top of the settings **Overview** pane. You'll be prompted to confirm this action. When you do so, any changes you've made to your Azure portal settings will be lost. This option doesn't affect dashboard customizations.
-
-### Delete user settings and dashboards
-
-Information about your custom settings is stored in Azure. You can delete the following user data:
--- Private dashboards in the Azure portal-- User settings like favorite subscriptions or directories-- Themes and other custom portal settings-
-It's a good idea to export and review your settings before you delete them. Rebuilding [dashboards](azure-portal-dashboards.md) or redoing custom settings can be time-consuming.
--
-To delete your portal settings, select **Delete all settings and private dashboards** from the top of the settings **Overview** pane. You'll be prompted to confirm the deletion. When you do so, all settings customizations will return to the default settings, and all of your private dashboards will be lost.
- ## Next steps - [Learn about keyboard shortcuts in the Azure portal](azure-portal-keyboard-shortcuts.md)
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-powershell.md
You need a Bicep file to deploy. The local file name used in this article is _C:
You need to install Azure PowerShell and connect to Azure: - **Install Azure PowerShell cmdlets on your local computer.** For more information, see [Get started with Azure PowerShell](/powershell/azure/get-started-azureps).-- **Connect to Azure by using [Connect-AZAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).
+- **Connect to Azure by using [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount)**. If you have multiple Azure subscriptions, you might also need to run [Set-AzContext](/powershell/module/Az.Accounts/Set-AzContext). For more information, see [Use multiple Azure subscriptions](/powershell/azure/manage-subscriptions-azureps).
If you don't have PowerShell installed, you can use Azure Cloud Shell. For more information, see [Deploy Bicep files from Azure Cloud Shell](./deploy-cloud-shell.md).
You can target your deployment to a resource group, subscription, management gro
- To deploy to a **management group**, use [New-AzManagementGroupDeployment](/powershell/module/az.resources/New-AzManagementGroupDeployment). ```azurepowershell
- New-AzManagementGroupDeployment -Location <location> -TemplateFile <path-to-bicep>
+ New-AzManagementGroupDeployment -ManagementGroupId <management-group-id> -Location <location> -TemplateFile <path-to-bicep>
``` For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md).
If you're deploying to a resource group that doesn't exist, create the resource
New-AzResourceGroup -Name ExampleGroup -Location "Central US" ```
-To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command. The following example also shows how to set a parameter value that comes from the Bicep file.
+To deploy a local Bicep file, use the `-TemplateFile` parameter in the deployment command.
```azurepowershell New-AzResourceGroupDeployment `
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/overview.md
This approach means you can safely share templates that meet your organization's
* To learn about ARM templates through a guided set of modules on Microsoft Learn, see [Deploy and manage resources in Azure by using ARM templates](/learn/paths/deploy-manage-resource-manager-templates/). * For information about the properties in template files, see [Understand the structure and syntax of ARM templates](./syntax.md). * To learn about exporting templates, see [Quickstart: Create and deploy ARM templates by using the Azure portal](quickstart-create-templates-use-the-portal.md).
-* For answers to common questions, see [Frequently asked questions about ARM templates](/azure/purview/frequently-asked-questions.yml).
+* For answers to common questions, see [Frequently asked questions about ARM templates](./frequently-asked-questions.yml).
azure-resource-manager Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/syntax.md
You can break a string into multiple lines. For example, see the `location` prop
* For details about the functions you can use from within a template, see [ARM template functions](template-functions.md). * To combine several templates during deployment, see [Using linked and nested templates when deploying Azure resources](linked-templates.md). * For recommendations about creating templates, see [ARM template best practices](./best-practices.md).
-* For answers to common questions, see [Frequently asked questions about ARM templates](/azure/purview/frequently-asked-questions.yml).
+* For answers to common questions, see [Frequently asked questions about ARM templates](frequently-asked-questions.yml).
azure-sql Always Encrypted Enclaves Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/always-encrypted-enclaves-getting-started.md
In this step, you will create a new Azure SQL Database logical server and a new
```PowerShell Connect-AzAccount $subscriptionId = "<your subscription ID>"
- Set-AzContext -Subscription $subscriptionId
+ $context = Set-AzContext -Subscription $subscriptionId
``` 1. Create a new resource group.
In this step, you'll create and configure an attestation provider in Microsoft A
$attestationProviderName = "<your attestation provider name>" New-AzAttestation -Name $attestationProviderName -ResourceGroupName $resourceGroupName -Location $location ```
+1. Assign yourself the Attestation Contributor role for the attestation provider, to ensure that you have permissions to configure an attestation policy.
-1. Configure your attestation policy.
+ ```powershell
+ New-AzRoleAssignment -SignInName $context.Account.Id `
+ -RoleDefinitionName "Attestation Contributor" `
+ -ResourceName $attestationProviderName `
+ -ResourceType "Microsoft.Attestation/attestationProviders" `
+ -ResourceGroupName $resourceGroupName
+ ```
+
+3. Configure your attestation policy.
```powershell $policyFile = "<the pathname of the file from step 1 in this section>"
azure-sql Monitoring With Dmvs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/monitoring-with-dmvs.md
The next example shows you different ways that you can use the **sys.resource_st
```sql SELECT
- (COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'CPU Fit Percent',
- (COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'Log Write Fit Percent',
- (COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'Physical Data IO Fit Percent'
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'CPU Fit Percent',
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'Log Write Fit Percent',
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 40 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'Physical Data IO Fit Percent'
FROM sys.resource_stats
- WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE());
+ WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());
``` Based on your database service tier, you can decide whether your workload fits into the lower compute size. If your database workload objective is 99.9 percent and the preceding query returns values greater than 99.9 percent for all three resource dimensions, your workload likely fits into the lower compute size.
The next example shows you different ways that you can use the **sys.resource_st
The average CPU is about a quarter of the limit of the compute size, which would fit well into the compute size of the database. But, the maximum value shows that the database reaches the limit of the compute size. Do you need to move to the next higher compute size? Look at how many times your workload reaches 100 percent, and then compare it to your database workload objective. ```sql
- SELECT
- (COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'CPU fit percent'
- ,(COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'Log write fit percent'
- ,(COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name) AS 'Physical data IO fit percent'
- FROM sys.resource_stats
- WHERE database_name = 'userdb1' AND start_time > DATEADD(day, -7, GETDATE());
+ SELECT
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_cpu_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'CPU Fit Percent',
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_log_write_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'Log Write Fit Percent',
+ 100*((COUNT(database_name) - SUM(CASE WHEN avg_data_io_percent >= 100 THEN 1 ELSE 0 END) * 1.0) / COUNT(database_name)) AS 'Physical Data IO Fit Percent'
+ FROM sys.resource_stats
+ WHERE database_name = 'sample' AND start_time > DATEADD(day, -7, GETDATE());
``` If this query returns a value less than 99.9 percent for any of the three resource dimensions, consider either moving to the next higher compute size or use application-tuning techniques to reduce the load on the database.
ORDER BY highest_cpu_queries.total_worker_time DESC;
## See also
-[Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md)
+[Introduction to Azure SQL Database and Azure SQL Managed Instance](sql-database-paas-overview.md)
azure-sql Create Template Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/create-template-quickstart.md
More template samples can be found in [Azure Quickstart Templates](https://azure
Select **Try it** from the following PowerShell code block to open Azure Cloud Shell. > [!IMPORTANT]
-> Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes much longer than deploying into a subnet with existing managed instances. For average provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
+> Deploying a managed instance is a long-running operation. Deployment of the first instance in the subnet typically takes much longer than deploying into a subnet with existing managed instances. For average provisioning times, see [SQL Managed Instance management operations](management-operations-overview.md#duration).
# [PowerShell](#tab/azure-powershell)
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
Title: "Azure SQL Managed Instance: Long-term backup retention"
-description: "Learn how to store and restore automated backups on separate Azure Blob storage containers for an Azure SQL Managed Instance using PowerShell."
+description: "Learn how to store and restore automated backups on separate Azure Blob storage containers for an Azure SQL Managed Instance using the Azure portal and PowerShell."
Last updated 07/13/2021
# Manage Azure SQL Managed Instance long-term backup retention [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-In Azure SQL Managed Instance, you can configure a [long-term backup retention](../database/long-term-retention-overview.md) policy (LTR) as a public preview feature. This allows you to automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these backups with PowerShell.
+In Azure SQL Managed Instance, you can configure a [long-term backup retention](../database/long-term-retention-overview.md) policy (LTR) as a public preview feature. This allows you to automatically retain database backups in separate Azure Blob storage containers for up to 10 years. You can then recover a database using these backups with the Azure portal and PowerShell.
> [!IMPORTANT] > LTR for managed instances is currently available in public preview in Azure Public regions.
-The following sections show you how to use PowerShell to configure the long-term backup retention, view backups in Azure SQL storage, and restore from a backup in Azure SQL storage.
+The following sections show you how to use the Azure portal and PowerShell to configure the long-term backup retention, view backups in Azure SQL storage, and restore from a backup in Azure SQL storage.
## Using the Azure portal
azure-video-analyzer Embed Player In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/embed-player-in-power-bi.md
Dashboards are an insightful way to monitor your business and view all your most
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one. - Complete either [Detect motion and record video](detect-motion-record-video-clips-cloud.md) or [Continuous video recording](continuous-video-recording.md) - a pipeline with video sink is required.
- [!NOTE] Your video analyzer account should have a minimum of one video recorded to proceed. Check for list of videos by logging into your Azure Video Analyzer account > Videos > Video Analyzer section.
+
+ > [!NOTE]
+ > Your video analyzer account should have a minimum of one video recorded to proceed. Check for list of videos by logging into your Azure Video Analyzer account > Videos > Video Analyzer section.
+ - A [Power BI](https://powerbi.microsoft.com/) account. ## Create a token
Dashboards are an insightful way to monitor your business and view all your most
4. Select any video from the list. 5. Click on **Widget** setup. A pane **Use widget in your application** opens on the right-hand side. Scroll down to **Option 2 - using HTML** and copy the code and paste it in a text editor. Click the **Close** button.
- :::image type="content" source="./media/power-bi/widget-code.png" alt-text="Copy widget HTML code":::
+ :::image type="content" source="./media/power-bi/widget-code.png" alt-text="Screenshot of copy widget HTML code from AVA portal and save for later.":::
6. Edit the HTML code copied in step 5 to replace values for - Token **AVA-API-JWT-TOKEN** - replace with the value of the token that you saved in the "Create a token" step. Make sure to remove the angle brackets.
Dashboards are an insightful way to monitor your business and view all your most
1. Open the [Power BI service](http://app.powerbi.com/) in your browser. From the navigation pane, select **My Workspace**
- :::image type="content" source="./media/power-bi/power-bi-workspace.png" alt-text="Power BI workspace":::
+ :::image type="content" source="./media/power-bi/power-bi-workspace.png" alt-text="Screenshot of Power BI workspace home page.":::
2. Create a new dashboard by clicking **New** > **Dashboard** or open an existing dashboard. Select the **Edit** drop down arrow and then **Add a tile**. Select **Web content** > **Next**. 3. In **Add web content tile**, enter your **Embed code** from previous section. Click **Apply**.
- :::image type="content" source="./media/power-bi/embed-code.png" alt-text="Embed the html code in tile":::
+ :::image type="content" source="./media/power-bi/embed-code.png" alt-text="Screenshot of embedding the html code in a Power BI dashboard tile.":::
4. You will see a player widget pinned to the dashboard with a video.
- :::image type="content" source="./media/power-bi/one-player-added.png" alt-text="One video player widget added":::
+ :::image type="content" source="./media/power-bi/one-player-added.png" alt-text="Screenshot of one video player widget added.":::
5. To add more videos from Azure Video Analyzer Videos section, follow the same steps in this section.
Dashboards are an insightful way to monitor your business and view all your most
Here is a sample of multiple videos pinned to a single Power BI dashboard. > [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/power-bi/two-players-added.png" alt-text="Two video player widgets added":::
+> :::image type="content" source="./media/power-bi/two-players-added.png" alt-text="Screenshot of two video player widgets added as an example.":::
## Next steps
azure-web-pubsub Howto Develop Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/howto-develop-create-instance.md
Title: How to create the Azure Web PubSub instance
-description: An overview on options to create Azure Web PubSub instance and how to do it
+ Title: Quickstart - Create a Web PubSub instance from Azure portal
+description: Quickstart showing how to create a Web PubSub instance from Azure portal
-+ Last updated 03/17/2021
-# How to create Azure Web PubSub instance
+# Quickstart: Create a Web PubSub instance from Azure portal
-To build an application with Azure Web PubSub service, you need to create the Web PubSub instance and then connect your clients and servers. This how-to guide shows you the options to create Azure Web PubSub instance.
+This quickstart shows you how to create an Azure Web PubSub instance from the Azure portal.
-## Create Azure Web PubSub instance with Azure portal
-The [Azure portal](../azure-portal/index.yml) is a web-based, unified console that provides an alternative to command-line tools. You can manage your Azure subscription with the Azure portal. Build, manage, and monitor everything from simple web apps to complex cloud deployments. You could also create Azure Web PubSub service instance with Azure portal.
-1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type *Web PubSub* in the search box and press enter. (You could also search the Azure Web PubSub from the `Web` category.)
+## Try the newly created instance
+> [!div class="nextstepaction"]
+> [Try the instance from the browser](./quickstart-live-demo.md)
-2. Select **Web PubSub** from the search results, then select **Create**.
+> [!div class="nextstepaction"]
+> [Try the instance with Azure CLI](./quickstart-cli-try.md)
-3. Enter the following settings.
+## Next steps
- | Setting | Description |
- | | -- |
- | **Resource name** | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `0-9`, and `-`. |
- | **Subscription** | The Azure subscription under which this new Web PubSub service instance is created. |
- | **[Resource Group](../azure-resource-manager/management/overview.md)** | Name for the new or existing resource group in which to create your Web PubSub service instance. |
- | **Location** | Choose a [region](https://azure.microsoft.com/regions/) near you. |
- | **Pricing tier** | Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/). |
- | **Unit count** | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. |
--
-4. Select **Create** to start deploying the Web PubSub service instance.
-
-## Create Azure Web PubSub instance with Azure CLI
-
-The [Azure command-line interface (Azure CLI)](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. You could also create Azure Web PubSub service instance with Azure CLI after GA.
azure-web-pubsub Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/key-concepts.md
+
+ Title: Azure Web PubSub basic concepts about hubs, groups, and connections
+description: Better understand the terms used in Azure Web PubSub.
++++ Last updated : 08/06/2021++
+# Azure Web PubSub basic concepts
+
+Azure Web PubSub Service helps you build real-time messaging web applications. The clients connect to the service using the [standard WebSocket protocol](https://datatracker.ietf.org/doc/html/rfc6455), and the service exposes [REST APIs](/rest/api/webpubsub) and SDKs for you to manage these clients.
+
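Because the connection is a standard WebSocket, any WebSocket tool or SDK can act as a client. For example, assuming you have Node.js and the `wscat` utility installed, a quick connection test might look like this (the client access URL is a placeholder that you generate for your own resource):

```bash
# Install a simple command-line WebSocket client (assumes Node.js is available).
npm install -g wscat

# Connect to your Web PubSub resource by using a client access URL generated for it.
wscat -c "wss://<your-resource>.webpubsub.azure.com/client/hubs/<hub>?access_token=<token>"
```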
azure-web-pubsub Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/overview.md
This article provides an overview of Azure Web PubSub service.
Any scenario that requires real-time publish-subscribe messaging between server and clients or among clients, can use Azure Web PubSub service. Traditional real-time features that often require polling from server or submitting HTTP requests, can also use Azure Web PubSub service.
-Azure Web PubSub service has been used in a wide variety of industries, for any application type that requires real-time content updates. We list some examples that are good to use Azure Web PubSub service:
+Azure Web PubSub service can be used in any application type that requires real-time content updates. Here are some examples where Azure Web PubSub service is a good fit:
* **High frequency data updates:** gaming, voting, polling, auction. * **Live dashboards and monitoring:** company dashboard, financial market data, instant sales update, multi-player game leader board, and IoT monitoring.
Azure Web PubSub service is designed for large-scale real-time applications. The
**Support for a wide variety of client SDKs and programming languages:**
-Azure Web PubSub service works with a broad range of clients, such as web and mobile browsers, desktop apps, mobile apps, server process, IoT devices, and game consoles. Since this service supports the raw WebSocket with publish-subscribe pattern, it is easily to use any standard WebSocket client SDK in different languages with this service.
+Azure Web PubSub service works with a broad range of clients, such as web and mobile browsers, desktop apps, mobile apps, server processes, IoT devices, and game consoles. Because this service supports standard WebSocket connections with a publish-subscribe pattern, it's easy to use any standard WebSocket client SDK in different languages with this service.
**Offer rich APIs for different messaging patterns:** Azure Web PubSub service is a bi-directional messaging service that allows different messaging patterns among server and clients, for example:
-* The server sends messages to a particular connection, all connections, or a subset of connections that belong to a specific user, or have been placed in an arbitrary group.
-* The client sends messages to a particular connection, all connections, or a subset of connections that belong to an arbitrary group.
+* The server sends messages to a particular client, all clients, or a subset of clients that belong to a specific user, or have been placed in an arbitrary group.
+* The client sends messages to clients that belong to an arbitrary group.
* The clients send messages to the server.
azure-web-pubsub Quickstart Cli Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-cli-create.md
+
+ Title: Quickstart - Create a Web PubSub instance with the Azure CLI
+description: Quickstart showing how to create a Web PubSub instance with the Azure CLI
++++ Last updated : 08/06/2021++
+# Quickstart: Create a Web PubSub instance with the Azure CLI
+
+The [Azure command-line interface (Azure CLI)](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. This quickstart shows you how to create an Azure Web PubSub instance with the Azure CLI.
+++
+- This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Create a resource group
++
+## Create a Web PubSub instance
++
+## Try the newly created instance
+
+> [!div class="nextstepaction"]
+> [Try the newly created instance using CLI](./quickstart-cli-try.md)
++
+> [!div class="nextstepaction"]
+> [Try the newly created instance from browser](./quickstart-live-demo.md)
+
+## Clean up resources
++
+## Next steps
+
azure-web-pubsub Quickstart Cli Try https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-cli-try.md
+
+ Title: Quickstart - Connect and play with the Azure Web PubSub instance
+description: Quickstart showing how to play with the instance from the Azure CLI
++++ Last updated : 08/06/2021++
+# Quickstart: Connect to the Azure Web PubSub instance from CLI
+
+Previously we talked about how to create the Web PubSub instance [using Azure CLI](./quickstart-cli-create.md) or [from the portal](./howto-develop-create-instance.md). After the instance is successfully created, Azure CLI also provides a set of commands to connect to the instance and publish messages to the connected clients.
+
+## Play with the instance
++
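A hedged illustration of what this section covers, assuming the `webpubsub` CLI extension and the example names from the create quickstart (the hub name is a placeholder; exact parameters may differ):

```azurecli
# Start an interactive WebSocket client connected to a hub (example values)
az webpubsub client start --name myWebPubSub --resource-group myResourceGroup --hub-name myHub1

# From another terminal, broadcast a message to all clients connected to that hub
az webpubsub service broadcast --name myWebPubSub --resource-group myResourceGroup --hub-name myHub1 --payload "Hello World"
```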
+## Next steps
+
+This quickstart provides you with a basic idea of how to connect to the Web PubSub service and how to publish messages to the connected clients.
+
azure-web-pubsub Quickstart Live Demo https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/quickstart-live-demo.md
Title: A simple Pub/Sub live demo
-description: A quickstart for getting started with Azure Web PubSub service live demo.
+ Title: Quickstart - Connect to the Azure Web PubSub instance from the browser
+description: A quickstart for getting started with Azure Web PubSub service in-browser live demo.
Last updated 04/26/2021
-# Quickstart: Get started with chatroom live demo
+# Quickstart: Connect to the Azure Web PubSub instance from the browser
-The Azure Web PubSub service helps you build real-time messaging web applications using WebSockets and the publish-subscribe pattern easily. The [pub/sub live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html) demonstrates the real-time messaging capability provided by Azure Web PubSub. With this live demo, you could easily join a chat group and send real-time message to a specific group.
+Previously we talked about how to create the Web PubSub instance [from the portal](./howto-develop-create-instance.md) or [using Azure CLI](./quickstart-cli-create.md). This quickstart shows you how to get started easily with a [Pub/Sub live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html).
-
-In this quickstart, learn how to get started easily with a live demo.
---
-## Get started with the chatroom live demo
-
-### Get client URL with a temp access token
-
-As the first step, you need to get the Client URL from the Azure Web PubSub instance.
--- Go to Azure portal and find out the Azure Web PubSub instance.-- Go to the `Client URL Generator` in `Key` blade. -- Set proper `Roles`: **Send To Groups** and **Join/Leave Groups**-- Generate and copy the `Client Access URL`. --
-### Try the live demo
-
-With this live demo, you could join or leave a group and send messages to the group members easily.
--- Open [chatroom live demo](https://azure.github.io/azure-webpubsub/demos/clientpubsub.html), paste the `Client Access URL` and Connect. --
-> [!NOTE]
-> **Client Access URL** is a convenience tool provided in the portal to simplify your getting-started experience, you can also use this Client Access URL to do some quick connect test. To write your own application, we provide SDKs in 4 languages to help you generate the URL.
--- Try different groups to join and different groups to send messages to, and see what messages are received. For example:
- - Make two clients joining into the same group. You will see that the message could broadcast to the group members.
- - Make two clients joining into different groups. You will see that the client cannot receive message if it is not group member.
-- You can also try to uncheck `Roles` when generating the `Client Access URL` to see what will happen when join a group or send messages to a group. For example:
- - Uncheck the `Send to Groups` permission. You will see that the client cannot send messages to the group.
- - Uncheck the `Join/Leave Groups` permission. You will see that the client cannot join a group.
## Next steps
-This quickstart provides you a basic idea of the Web PubSub service. In this quickstart, we leverage the *Client URL Generator* to generate a temporarily available client URL to connect to the service. In real-world applications, SDKs in various languages are provided for you to generate the client URL from the *Connection String*. Besides using SDKs to talk to the Web PubSub service from the application servers, Azure Function extension is also provided for you to build your serverless applications.
-
-Follow the quick starts listed below to start building your own application.
-
-> [!div class="nextstepaction"]
-> [Quick start: publish and subscribe messages in Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/publish-messages/js-publish-message)
-
-> [!div class="nextstepaction"]
-> [Quick start: Create a simple chatroom with Azure Web PubSub](https://azure.github.io/azure-webpubsub/getting-started/create-a-chat-app/js-handle-events)
-
-> [!div class="nextstepaction"]
-> [Quickstart: Create a serverless simple chat application with Azure Functions and Azure Web PubSub service](./quickstart-serverless.md)
+ In this quickstart, we use the *Client URL Generator* to generate a temporarily available client URL to connect to the service, which gives you a basic idea of how the Web PubSub service works.
-> [!div class="nextstepaction"]
-> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples)
backup Sap Hana Db Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/sap-hana-db-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, discover how to restore SAP HANA databases that are running on Azure Virtual Machines. You can also use Cross Region Restore to restore your databases to a secondary region. Previously updated : 11/7/2019 Last updated : 08/06/2021 # Restore SAP HANA databases on Azure VMs
batch Batch Pool Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/batch-pool-vm-sizes.md
Batch pools in the Virtual Machine configuration support almost all [VM sizes](.
| Ev4, Esv4 | Not supported | | F, Fs | All sizes | | Fsv2 | All sizes |
-| FX | All sizes |
+| FX<sup>1</sup> | All sizes |
| G, Gs | All sizes | | H | All sizes | | HB | All sizes |
batch High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/high-availability-disaster-recovery.md
When providing the ability to failover to an alternate region, all components in
Consider the following points when designing a solution that can failover: -- Pre-create all required accounts in each region, such as the Batch account and storage account. There is often no charge for having accounts created, and charges accrue only when the account is used or when data is stored.-- Make sure the appropriate [quotas](batch-quota-limit.md) are set on all accounts ahead of time, so you can allocate the required number of cores using the Batch account.
+- Pre-create all required services in each region, such as the Batch account and storage account (see the sketch after this list). There is often no charge for having accounts created, and charges accrue only when the account is used or when data is stored.
+- Make sure the appropriate [quotas](batch-quota-limit.md) are set on all subscriptions ahead of time, so you can allocate the required number of cores using the Batch account.
- Use templates and/or scripts to automate the deployment of the application in a region. - Keep application binaries and reference data up-to-date in all regions. Staying up-to-date will ensure the region can be brought online quickly without having to wait for the upload and deployment of files. For example, if a custom application to install on pool nodes is stored and referenced using Batch application packages, then when a new version of the application is produced, it should be uploaded to each Batch account and referenced by the pool configuration (or make the new version the default version). - In the application calling Batch, storage, and any other services, make it easy to switch over clients or the load to different regions.
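As a hedged sketch of the "pre-create" bullet above (account names, resource groups, and regions are placeholders):

```azurecli
# Pre-create a Batch account and linked storage account in each region (example values)
az group create --name contoso-batch-eastus --location eastus
az storage account create --name contosobatcheaststore --resource-group contoso-batch-eastus --location eastus --sku Standard_LRS
az batch account create --name contosobatcheast --resource-group contoso-batch-eastus --location eastus --storage-account contosobatcheaststore

az group create --name contoso-batch-westus --location westus
az storage account create --name contosobatchweststore --resource-group contoso-batch-westus --location westus --sku Standard_LRS
az batch account create --name contosobatchwest --resource-group contoso-batch-westus --location westus --storage-account contosobatchweststore
```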
cdn Cdn Msft Http Debug Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-msft-http-debug-headers.md
Header | Description
-| X-Cache: TCP_HIT | This header is returned when the content is served from the CDN edge cache. X-Cache: TCP_REMOTE_HIT | This header is returned when the content is served from the CDN regional cache (Origin shield layer)
-X-Cache: TCP_MISS | This header is returned when there is a cache miss, and the content is served from the Origin.
+X-Cache: TCP_MISS | This header is returned when there is a cache miss, and the content is served from the Origin.
+X-Cache: PRIVATE_NOSTORE | This header is returned when the request cannot be cached because the Cache-Control response header is set to either private or no-store.
+X-Cache: CONFIG_NOCACHE | This header is returned when the request is configured not to cache in the CDN profile.
-For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend).
+For additional information on HTTP headers supported in Azure CDN, see [Front Door to backend](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend).
cloud-services-extended-support Deploy Prerequisite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-prerequisite.md
For a virtual networks belonging to the same resource group as the cloud service
<Subnet name="<subnet-name>"/> </Subnets> </InstanceAddress>
+ </AddressAssignments>
``` #### Virtual network located in different resource group
For a virtual networks belonging to the same resource group as the cloud service
<Subnet name="<subnet-name>"/> </Subnets> </InstanceAddress>
+ </AddressAssignments>
``` ### 2) Remove the old plugins
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
] ```
-3. Create a Network Profile Object and associate the public IP address to the frontend of the load balancer. A Load balancer is automatically created by the platform.
+3. Create a Cloud Service (Extended Support) object, adding appropriate `dependsOn` references if you are deploying Virtual Networks or Public IP within your template.
+
+ ```json
+ {
+ "apiVersion": "2021-03-01",
+ "type": "Microsoft.Compute/cloudServices",
+ "name": "[variables('cloudServiceName')]",
+ "location": "[parameters('location')]",
+ "tags": {
+ "DeploymentLabel": "[parameters('deploymentLabel')]",
+ "DeployFromVisualStudio": "true"
+ },
+ "dependsOn": [
+ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]",
+ "[concat('Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]"
+ ],
+ "properties": {
+ "packageUrl": "[parameters('packageSasUri')]",
+ "configurationUrl": "[parameters('configurationSasUri')]",
+ "upgradeMode": "[parameters('upgradeMode')]"
+ }
+ }
+ ```
+4. Create a Network Profile Object for your Cloud Service and associate the public IP address to the frontend of the load balancer. A Load balancer is automatically created by the platform.
```json "networkProfile": {
This tutorial explains how to create a Cloud Service (extended support) deployme
```
-4. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated to Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in Service Configuration (.cscfg) file. You also need to enable Key Vault 'Access policies' for 'Azure Virtual Machines for deployment'(on portal) so that Cloud Services (extended support) resource can retrieve certificate stored as secrets from Key Vault. The key vault must be located in the same region and subscription as cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
+5. Add your key vault reference in the `OsProfile` section of the ARM template. Key Vault is used to store certificates that are associated with Cloud Services (extended support). Add the certificates to Key Vault, then reference the certificate thumbprints in the Service Configuration (.cscfg) file. You also need to enable the Key Vault 'Access policies' for 'Azure Virtual Machines for deployment' (on the portal) so that the Cloud Services (extended support) resource can retrieve certificates stored as secrets from Key Vault. The key vault must be located in the same region and subscription as the cloud service and have a unique name. For more information, see [using certificates with Cloud Services (extended support)](certificates-and-key-vault.md).
```json "osProfile": {
This tutorial explains how to create a Cloud Service (extended support) deployme
> - certificateUrl can be found by navigating to the certificate in the key vault, labeled as **Secret Identifier**.
> - certificateUrl should be of the form https://{keyvault-endpoint}/secrets/{secretname}/{secret-id}
-5. Create a Role Profile. Ensure that the number of roles, role names, number of instances in each role and sizes are the same across the Service Configuration (.cscfg), Service Definition (.csdef) and role profile section in ARM template.
+6. Create a Role Profile. Ensure that the number of roles, role names, number of instances in each role and sizes are the same across the Service Configuration (.cscfg), Service Definition (.csdef) and role profile section in ARM template.
```json "roleProfile": {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-6. (Optional) Create an extension profile to add extensions to your cloud service. For this example, we are adding the remote desktop and Windows Azure diagnostics extension.
+7. (Optional) Create an extension profile to add extensions to your cloud service. For this example, we are adding the remote desktop and Windows Azure diagnostics extension.
> [!Note] > The password for remote desktop must be between 8-123 characters long and must satisfy at least 3 of password complexity requirements from the following: 1) Contains an uppercase character 2) Contains a lowercase character 3) Contains a numeric digit 4) Contains a special character 5) Control characters are not allowed
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-7. Review the full template.
+8. Review the full template.
```json {
This tutorial explains how to create a Cloud Service (extended support) deployme
} ```
-8. Deploy the template and parameter file (defining parameters in template file) to create the Cloud Service (extended support) deployment. Please refer these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support) as required.
+9. Deploy the template and parameter file (defining parameters in template file) to create the Cloud Service (extended support) deployment. Refer to these [sample templates](https://github.com/Azure-Samples/cloud-services-extended-support) as needed.
```powershell New-AzResourceGroupDeployment -ResourceGroupName "ContosOrg" -TemplateFile "file path to your template file" -TemplateParameterFile "file path to your parameter file"
cloud-services-extended-support Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/overview.md
Cloud Services (extended support) is a new [Azure Resource Manager](../azure-r
With this change, the Azure Service Manager based deployment model for Cloud Services will be renamed [Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md). You will retain the ability to build and rapidly deploy your web and cloud applications and services. You will be able to scale your cloud services infrastructure based on current demand and ensure that the performance of your applications can keep up while simultaneously reducing costs.
-> [!VIDEO https://youtu.be/H4K9xTUvNdw]
+ ## What does not change - You create the code, define the configurations, and deploy it to Azure. Azure sets up the compute environment, runs your code then monitors and maintains it for you.
cloud-shell Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/troubleshooting.md
In order to **delete** your user settings Cloud Shell saves for you such as pref
>[!Note] > If you delete your user settings, the actual Azure Files share will not be deleted. Go to your Azure Files to complete that action.
-1. [![Image showing a button labeled Launch Azure Cloud Shell.](https://shell.azure.com/images/launchcloudshell.png)](https://shell.azure.com)
+1. Launch Cloud Shell or a local shell with either Azure PowerShell or Azure CLI installed.
2. Run the following commands in Bash or PowerShell: Bash: ```
- token="Bearer $(curl http://localhost:50342/oauth2/token --data "resource=https://management.azure.com/" -H Metadata:true -s | jq -r ".access_token")"
+ token="Bearer $(az account get-access-token --resource "https://management.azure.com/" | jq -r ".accessToken")"
curl -X DELETE https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -H Authorization:"$token" ``` PowerShell: ```powershell
- $token= ((Invoke-WebRequest -Uri "$env:MSI_ENDPOINT`?resource=https://management.core.windows.net/" -Headers @{Metadata='true'}).content | ConvertFrom-Json).access_token
- Invoke-WebRequest -Method Delete -Uri https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -Headers @{Authorization = "Bearer $token"}
+ $token = (Get-AzAccessToken -ResourceUrl https://management.azure.com/).Token
+ Invoke-WebRequest -Method Delete -Uri https://management.azure.com/providers/Microsoft.Portal/usersettings/cloudconsole?api-version=2017-12-01-preview -Headers @{Authorization = "Bearer $token"}
``` ## Azure Government limitations Azure Cloud Shell in Azure Government is only accessible through the Azure portal.
cognitive-services V3 0 Detect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-detect.md
https://api.cognitive.microsofttranslator.com/detect?api-version=3.0
Request parameters passed on the query string are:
-<table width="100%">
- <th width="20%">Query parameter</th>
- <th>Description</th>
- <tr>
- <td>api-version</td>
- <td>*Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`.</td>
- </tr>
-</table>
+| Query parameter | Description |
+| | |
+| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
Request headers include:
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>Authentication header(s)</td>
- <td><em>Required request header</em>.<br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>.</td>
- </tr>
- <tr>
- <td>Content-Type</td>
- <td>*Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`.</td>
- </tr>
- <tr>
- <td>Content-Length</td>
- <td>*Required request header*.<br/>The length of the request body.</td>
- </tr>
- <tr>
- <td>X-ClientTraceId</td>
- <td>*Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`.</td>
- </tr>
-</table>
+| Headers | Description |
+| | |
+| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`. |
+| Content-Length | *Required request header*.<br/>The length of the request body. |
+| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
## Request body
The body of the request is a JSON array. Each array element is a JSON object wit
```json [
- { "Text": "Ich w├╝rde wirklich gern Ihr Auto um den Block fahren ein paar Mal." }
+ { "Text": "Ich w├╝rde wirklich gerne Ihr Auto ein paar Mal um den Block fahren." }
] ```
An example JSON response is:
## Response headers
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>X-RequestId</td>
- <td>Value generated by the service to identify the request. It is used for troubleshooting purposes.</td>
- </tr>
-</table>
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
## Response status codes The following are the possible HTTP status codes that a request returns.
-<table width="100%">
- <th width="20%">Status Code</th>
- <th>Description</th>
- <tr>
- <td>200</td>
- <td>Success.</td>
- </tr>
- <tr>
- <td>400</td>
- <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
- </tr>
- <tr>
- <td>401</td>
- <td>The request could not be authenticated. Check that credentials are specified and valid.</td>
- </tr>
- <tr>
- <td>403</td>
- <td>The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up.</td>
- </tr>
- <tr>
- <td>429</td>
- <td>The server rejected the request because the client has exceeded request limits.</td>
- </tr>
- <tr>
- <td>500</td>
- <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
- <tr>
- <td>503</td>
- <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
-</table>
+| Status Code | Description |
+| | |
+| 200 | Success. |
+| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
+| 403 | The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up. |
+| 429 | The server rejected the request because the client has exceeded request limits. |
+| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
If an error occurs, the request will also return a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
cognitive-services V3 0 Transliterate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/reference/v3-0-transliterate.md
https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0
Request parameters passed on the query string are:
-<table width="100%">
- <th width="20%">Query parameter</th>
- <th>Description</th>
- <tr>
- <td>api-version</td>
- <td>*Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`.</td>
- </tr>
- <tr>
- <td>language</td>
- <td>*Required parameter*.<br/>Specifies the language of the text to convert from one script to another. Possible languages are listed in the `transliteration` scope obtained by querying the service for its [supported languages](./v3-0-languages.md).</td>
- </tr>
- <tr>
- <td>fromScript</td>
- <td>*Required parameter*.<br/>Specifies the script used by the input text. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find input scripts available for the selected language.</td>
- </tr>
- <tr>
- <td>toScript</td>
- <td>*Required parameter*.<br/>Specifies the output script. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find output scripts available for the selected combination of input language and input script.</td>
- </tr>
-</table>
+| Query parameter | Description |
+| | |
+| api-version | *Required parameter*.<br/>Version of the API requested by the client. Value must be `3.0`. |
+| language | *Required parameter*.<br/>Specifies the language of the text to convert from one script to another. Possible languages are listed in the `transliteration` scope obtained by querying the service for its [supported languages](./v3-0-languages.md). |
+| fromScript | *Required parameter*.<br/>Specifies the script used by the input text. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find input scripts available for the selected language. |
+| toScript | *Required parameter*.<br/>Specifies the output script. Look up [supported languages](./v3-0-languages.md) using the `transliteration` scope, to find output scripts available for the selected combination of input language and input script. |
Request headers include:
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>Authentication header(s)</td>
- <td><em>Required request header</em>.<br/>See <a href="/azure/cognitive-services/translator/reference/v3-0-reference#authentication">available options for authentication</a>.</td>
- </tr>
- <tr>
- <td>Content-Type</td>
- <td>*Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json`.</td>
- </tr>
- <tr>
- <td>Content-Length</td>
- <td>*Required request header*.<br/>The length of the request body.</td>
- </tr>
- <tr>
- <td>X-ClientTraceId</td>
- <td>*Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`.</td>
- </tr>
-</table>
+| Headers | Description |
+| | |
+| Authentication header(s) | <em>Required request header</em>.<br/>See [available options for authentication](./v3-0-reference.md#authentication). |
+| Content-Type | *Required request header*.<br/>Specifies the content type of the payload. Possible values are: `application/json` |
+| Content-Length | *Required request header*.<br/>The length of the request body. |
+| X-ClientTraceId | *Optional*.<br/>A client-generated GUID to uniquely identify the request. Note that you can omit this header if you include the trace ID in the query string using a query parameter named `ClientTraceId`. |
## Request body
An example JSON response is:
## Response headers
-<table width="100%">
- <th width="20%">Headers</th>
- <th>Description</th>
- <tr>
- <td>X-RequestId</td>
- <td>Value generated by the service to identify the request. It is used for troubleshooting purposes.</td>
- </tr>
-</table>
+| Headers | Description |
+| | |
+| X-RequestId | Value generated by the service to identify the request. It is used for troubleshooting purposes. |
## Response status codes The following are the possible HTTP status codes that a request returns.
-<table width="100%">
- <th width="20%">Status Code</th>
- <th>Description</th>
- <tr>
- <td>200</td>
- <td>Success.</td>
- </tr>
- <tr>
- <td>400</td>
- <td>One of the query parameters is missing or not valid. Correct request parameters before retrying.</td>
- </tr>
- <tr>
- <td>401</td>
- <td>The request could not be authenticated. Check that credentials are specified and valid.</td>
- </tr>
- <tr>
- <td>403</td>
- <td>The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up.</td>
- </tr>
- <tr>
- <td>429</td>
- <td>The server rejected the request because the client has exceeded request limits.</td>
- </tr>
- <tr>
- <td>500</td>
- <td>An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
- <tr>
- <td>503</td>
- <td>Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`.</td>
- </tr>
-</table>
+| Status Code | Description |
+| | |
+| 200 | Success. |
+| 400 | One of the query parameters is missing or not valid. Correct request parameters before retrying. |
+| 401 | The request could not be authenticated. Check that credentials are specified and valid. |
+| 403 | The request is not authorized. Check the details error message. This often indicates that all free translations provided with a trial subscription have been used up. |
+| 429 | The server rejected the request because the client has exceeded request limits. |
+| 500 | An unexpected error occurred. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
+| 503 | Server temporarily unavailable. Retry the request. If the error persists, report it with: date and time of the failure, request identifier from response header `X-RequestId`, and client identifier from request header `X-ClientTraceId`. |
If an error occurs, the request also returns a JSON error response. The error code is a 6-digit number combining the 3-digit HTTP status code followed by a 3-digit number to further categorize the error. Common error codes can be found on the [v3 Translator reference page](./v3-0-reference.md#errors).
If you are using cURL in a command-line window that does not support Unicode cha
``` curl -X POST "https://api.cognitive.microsofttranslator.com/transliterate?api-version=3.0&language=ja&fromScript=Jpan&toScript=Latn" -H "X-ClientTraceId: 875030C7-5380-40B8-8A03-63DACCF69C11" -H "Ocp-Apim-Subscription-Key: <client-secret>" -H "Content-Type: application/json" -d @request.txt
-```
+```
cognitive-services Concept Business Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-business-cards.md
Previously updated : 07/01/2021 Last updated : 08/09/2021
The data extracted with the Business Card API can be used to perform various tas
The Business Card API also powers the [AI Builder Business Card Processing feature](/ai-builder/prebuilt-business-card).
-## Try it out
+## Try it
To try out the Form Recognizer receipt service, go to the online Sample UI Tool: > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from business card")
+> [Try business card model](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from business card")
## What does the Business Card service do?
cognitive-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-custom.md
Previously updated : 07/27/2021 Last updated : 08/09/2021
A custom model is a machine learning program trained to recognize form fields wi
With composed models, you can assign multiple custom models to a composed model called with a single model ID. This is useful when you have trained several models and want to group them to analyze similar form types. For example, your composed model may be comprised of custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-## Try it out
+## Try it
Get started with our Form Recognizer sample labeling tool: > [!div class="nextstepaction"]
-> [Try Custom](https://aka.ms/fott-2.1-ga "Start with Custom to train a model with labels and find key-value pairs.")
+> [Try a custom model](https://aka.ms/fott-2.1-ga "Start with Custom to train a model with labels and find key-value pairs.")
## Create your models
cognitive-services Concept Identification Cards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-identification-cards.md
Previously updated : 07/01/2021 Last updated : 08/09/2021
The data extracted with the IDs API can be used to perform a variety of tasks fo
The IDs API also powers the [AI Builder ID reader feature](/ai-builder/prebuilt-id-reader).
-## Try it out
+## Try it
To try out the Form Recognizer IDs service, go to the online Sample UI Tool: > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from identity documents.")
+> [Try ID document model](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from identity documents.")
## What does the ID service do?
cognitive-services Concept Invoices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-invoices.md
Previously updated : 07/01/2021 Last updated : 08/09/2021
The Invoice API extracts key fields and line items from invoices and returns the
![Contoso invoice example](./media/invoice-example-new.jpg)
-## Try it out
+## Try it
To try out the Form Recognizer Invoice Service, go to the online Sample UI Tool: > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from invoices.")
+> [Try invoice model](https://aka.ms/fott-2.1-ga "Start with a prebuilt model to extract data from invoices.")
You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Invoice service.
cognitive-services Concept Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-layout.md
Previously updated : 07/01/2021 Last updated : 08/09/2021
The Layout API extracts text, tables with table headers included, selection mark
![Layout example](./media/layout-demo.gif)
-## Try it out
+## Try it
To try out the Form Recognizer Layout Service, go to the online sample UI tool: > [!div class="nextstepaction"]
-> [Try Form Recognizer](https://aka.ms/fott-2.1-ga "Start with the layout prebuilt model to extract data from your forms.")
+> [Try layout model](https://aka.ms/fott-2.1-ga "Start with the layout prebuilt model to extract data from your forms.")
You will need an Azure subscription ([create one for free](https://azure.microsoft.com/free/cognitive-services)) and a [Form Recognizer resource](https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) endpoint and key to try out the Form Recognizer Layout API.
cognitive-services Concept Receipts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-receipts.md
Previously updated : 07/01/2021 Last updated : 08/09/2021
Receipts contain useful data which you can use to analyze consumer behavior and
The Receipt API also powers the [AI Builder Receipt Processing feature](/ai-builder/prebuilt-receipt-processing).
-## Try it out
+## Try it
To try out the Form Recognizer receipt service, go to the online Sample UI Tool: > [!div class="nextstepaction"]
-> [Try Prebuilt Models](https://aka.ms/fott-2.1-ga "Start with prebuilt model to extract data from receipts.")
+> [Try receipt model](https://aka.ms/fott-2.1-ga "Start with prebuilt model to extract data from receipts.")
## What does the Receipt service do?
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/certified-session-border-controllers.md
Note the certification granted to a major version. That means that firmware with
### Conceptual documentation - [Phone number types in Azure Communication Services](./plan-solution.md)-- [Plan for Azure direct routing](./sip-interface-infrastructure.md)
+- [Plan for Azure direct routing](./direct-routing-infrastructure.md)
- [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md) - [Pricing](../pricing.md)
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-infrastructure.md
+
+ Title: Azure direct routing infrastructure requirements - Azure Communication Services
+description: Familiarize yourself with the infrastructure requirements for Azure Communication Services direct routing configuration
+++++ Last updated : 06/30/2021++++
+# Azure direct routing infrastructure requirements
++
+
+This article describes infrastructure, licensing, and Session Border Controller (SBC) connectivity details that you'll want to keep in mind as you plan your Azure direct routing deployment.
++
+## Infrastructure requirements
+The infrastructure requirements for the supported SBCs, domains, and other network connectivity needed to deploy Azure direct routing are listed in the following table:
+
+|Infrastructure requirement|You need the following|
+|: |: |
+|Session Border Controller (SBC)|A supported SBC. For more information, see [Supported SBCs](#supported-session-border-controllers-sbcs).|
+|Telephony trunks connected to the SBC|One or more telephony trunks connected to the SBC. On one end, the SBC connects to the Azure Communication Service via direct routing. The SBC can also connect to third-party telephony entities, such as PBXs, Analog Telephony Adapters, and so on. Any PSTN connectivity option connected to the SBC will work. (For configuration of the PSTN trunks to the SBC, refer to the SBC vendors or trunk providers.)|
+|Azure subscription|An Azure subscription that you use to create the Communication Services resource, and to configure and connect to the SBC.|
+|Communication Services Access Token|To make calls, you need a valid Access Token with `voip` scope. See [Access Tokens](../identity-model.md#access-tokens)|
+|Public IP address for the SBC|A public IP address that can be used to connect to the SBC. Based on the type of SBC, the SBC can use NAT.|
+|Fully Qualified Domain Name (FQDN) for the SBC|An FQDN for the SBC, where the domain portion of the FQDN does not match registered domains in your Microsoft 365 or Office 365 organization. For more information, see [SBC domain names](#sbc-domain-names).|
+|Public DNS entry for the SBC |A public DNS entry mapping the SBC FQDN to the public IP Address. |
+|Public trusted certificate for the SBC |A certificate for the SBC to be used for all communication with Azure direct routing. For more information, see [Public trusted certificate for the SBC](#public-trusted-certificate-for-the-sbc).|
+|Firewall IP addresses and ports for SIP signaling and media |The SBC communicates to the following services in the cloud:<br/><br/>SIP Proxy, which handles the signaling<br/>Media Processor, which handles media<br/><br/>These two services have separate IP addresses in Microsoft Cloud, described later in this document.
++
+## SBC domain names
+
+Customers without Office 365 can use any domain name for which they can obtain a public certificate.
+
+The following table shows examples of DNS names registered for the tenant, whether the name can be used as a fully qualified domain name (FQDN) for the SBC, and examples of valid FQDN names:
+
+|DNS name|Can be used for SBC FQDN|Examples of FQDN names|
+|: |: |: |
+|contoso.com|Yes|**Valid names:**<br/>sbc1.contoso.com<br/>ssbcs15.contoso.com<br/>europe.contoso.com|
+|contoso.onmicrosoft.com|No|Using *.onmicrosoft.com domains is not supported for SBC names|
+
+If you are an Office 365 customer, then the SBC domain name must not match a domain registered in the Domains of the Office 365 tenant. Below is an example of Office 365 and Azure Communication Services coexistence:
+
+|Domain registered in Office 365|Examples of SBC FQDN in Teams|Examples of SBC FQDN names in Azure Communication Services|
+|: |: |: |
+|**contoso.com** (second level domain)|**sbc.contoso.com** (name in the second level domain)|**sbc.acs.contoso.com** (name in the third level domain)<br/>**sbc.fabrikam.com** (any name within different domain)|
+|**o365.contoso.com** (third level domain)|**sbc.o365.contoso.com** (name in the third level domain)|**sbc.contoso.com** (name in the second level domain)<br/>**sbc.acs.o365.contoso.com** (name in the fourth level domain)<br/>**sbc.fabrikam.com** (any name within different domain)|
+
+SBC pairing works on the Communication Services resource level, meaning you can pair many SBCs to a single Communication Services resource, but you cannot pair a single SBC to more than one Communication Services resource. Unique SBC FQDNs are required for pairing to different resources.
+
+## Public trusted certificate for the SBC
+
+Microsoft recommends that you request the certificate for the SBC by generating a certification signing request (CSR). For specific instructions on generating a CSR for an SBC, refer to the interconnection instructions or documentation provided by your SBC vendors.
+
+ > [!NOTE]
+ > Most Certificate Authorities (CAs) require the private key size to be at least 2048 bits. Keep this in mind when generating the CSR.
+
+The certificate needs to have the SBC FQDN as the common name (CN) or the subject alternative name (SAN) field. The certificate should be issued directly from a certification authority, not from an intermediate provider.
+
+Alternatively, Communication Services direct routing supports a wildcard in the CN and/or SAN, and the wildcard needs to conform to standard [RFC HTTP Over TLS](https://tools.ietf.org/html/rfc2818#section-3.1).
+
+An example would be using `\*.contoso.com`, which would match the SBC FQDN `sbc.contoso.com`, but wouldn't match with `sbc.test.contoso.com`.
+
+The certificate needs to be generated by one of the following root certificate authorities:
+
+- AffirmTrust
+- AddTrust External CA Root
+- Baltimore CyberTrust Root*
+- Buypass
+- Cybertrust
+- Class 3 Public Primary Certification Authority
+- Comodo Secure Root CA
+- Deutsche Telekom
+- DigiCert Global Root CA
+- DigiCert High Assurance EV Root CA
+- Entrust
+- GlobalSign
+- Go Daddy
+- GeoTrust
+- Verisign, Inc.
+- SSL.com
+- Starfield
+- Symantec Enterprise Mobile Root for Microsoft
+- SwissSign
+- Thawte Timestamping CA
+- Trustwave
+- TeliaSonera
+- T-Systems International GmbH (Deutsche Telekom)
+- QuoVadis
+
+Microsoft is working on adding more certification authorities based on customer requests.
+
+## SIP Signaling: FQDNs
+
+The connection points for Communication Services direct routing are the following three FQDNs:
+
+- **sip.pstnhub.microsoft.com** - Global FQDN - must be tried first. When the SBC sends a request to resolve this name, the Microsoft Azure DNS servers return an IP address pointing to the primary Azure datacenter assigned to the SBC. The assignment is based on performance metrics of the datacenters and geographical proximity to the SBC. The IP address returned corresponds to the primary FQDN.
+- **sip2.pstnhub.microsoft.com** - Secondary FQDN - geographically maps to the second priority region.
+- **sip3.pstnhub.microsoft.com** - Tertiary FQDN - geographically maps to the third priority region.
+
+Placing these three FQDNs in order is required to:
+
+- Provide optimal experience (less loaded and closest to the SBC datacenter assigned by querying the first FQDN).
+- Provide failover when connection from an SBC is established to a datacenter that is experiencing a temporary issue. For more information, see [Failover mechanism](#failover-mechanism-for-sip-signaling) below.
+
+The FQDNs - sip.pstnhub.microsoft.com, sip2.pstnhub.microsoft.com, and sip3.pstnhub.microsoft.com - will be resolved to one of the following IP addresses:
+
+- `52.114.148.0`
+- `52.114.132.46`
+- `52.114.75.24`
+- `52.114.76.76`
+- `52.114.7.24`
+- `52.114.14.70`
+- `52.114.16.74`
+- `52.114.20.29`
+
+Open firewall ports for these IP addresses to allow incoming and outgoing traffic to and from the addresses for signaling. If your firewall supports DNS names, the FQDN `sip-all.pstnhub.microsoft.com` resolves to all these IP addresses.
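As a quick, hedged connectivity check (not part of the original article), you can resolve the combined FQDN from a machine on the SBC network and confirm that the returned addresses match the list above:

```
nslookup sip-all.pstnhub.microsoft.com
```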
+
+## SIP Signaling: Ports
+
+Use the following ports for Communication Services Azure direct routing:
+
+|Traffic|From|To|Source port|Destination port|
+|: |: |: |: |: |
+|SIP/TLS|SIP Proxy|SBC|1024-65535|Defined on the SBC (For Office 365 GCC High/DoD only port 5061 must be used)|
+|SIP/TLS|SBC|SIP Proxy|Defined on the SBC|5061|
+
+### Failover mechanism for SIP Signaling
+
+The SBC makes a DNS query to resolve sip.pstnhub.microsoft.com. Based on the SBC location and the datacenter performance metrics, the primary datacenter is selected. If the primary datacenter experiences an issue, the SBC will try sip2.pstnhub.microsoft.com, which resolves to the second assigned datacenter, and, in the rare case that datacenters in two regions are not available, the SBC retries the last FQDN (sip3.pstnhub.microsoft.com), which provides the tertiary datacenter IP.
+
+## Media traffic: IP and Port ranges
+
+The media traffic flows to and from a separate service called Media Processor. At the moment of publishing, Media Processor for Communication Services can use any Azure IP address.
+Download [the full list of addresses](https://www.microsoft.com/download/details.aspx?id=56519).
+
+### Port range
+The port range of the Media Processors is shown in the following table:
+
+|Traffic|From|To|Source port|Destination port|
+|: |: |: |: |: |
+|UDP/SRTP|Media Processor|SBC|3478-3481 and 49152-53247|Defined on the SBC|
+|UDP/SRTP|SBC|Media Processor|Defined on the SBC|3478-3481 and 49152-53247|
+
+ > [!NOTE]
+ > Microsoft recommends at least two ports per concurrent call on the SBC.
++
+## Media traffic: Media processors geography
+
+The media traffic flows via components called media processors. Media processors are placed in the same datacenters as SIP proxies. Also, there are additional media processors to optimize media flow. For example, we do not have a SIP proxy component now in Australia (SIP flows via Singapore or Hong Kong SAR), but we do have the media processor locally in Australia. The need for local media processors is dictated by the latency incurred when sending traffic over long distances, for example from Australia to Singapore or Hong Kong SAR. While that latency is acceptable for SIP traffic and preserves good call quality, it is not acceptable for real-time media traffic.
+
+Locations where both SIP proxy and media processor components are deployed:
+- US (two in US West and US East datacenters)
+- Europe (Amsterdam and Dublin datacenters)
+- Asia (Singapore and Hong Kong SAR datacenters)
+- Australia (AU East and Southeast datacenters)
+
+Locations where only media processors are deployed (SIP flows via the closest datacenter listed above):
+- Japan (JP East and West datacenters)
++
+## Media traffic: Codecs
+
+### Leg between SBC and Cloud Media Processor or Microsoft Teams client.
+
+The Azure direct routing interface on the leg between the Session Border Controller and Cloud Media Processor can use the following codecs:
+
+- SILK, G.711, G.722, G.729
+
+You can force the use of a specific codec on the Session Border Controller by excluding undesirable codecs from the offer.
+
+### Leg between Communication Services Calling SDK app and Cloud Media Processor
+
+On the leg between the Cloud Media Processor and Communication Services Calling SDK app, G.722 is used. Microsoft is working on adding more codecs on this leg.
+
+## Supported Session Border Controllers (SBCs)
+
+- [Session Border Controllers certified for Azure Communication Services direct routing](./certified-session-border-controllers.md)
+
+## Next steps
+
+### Conceptual documentation
+
+- [Telephony Concept](./telephony-concept.md)
+- [Phone number types in Azure Communication Services](./plan-solution.md)
+- [Pair the Session Border Controller and configure voice routing](./direct-routing-provisioning.md)
+- [Pricing](../pricing.md)
+
+### Quickstarts
+
+- [Call to Phone](../../quickstarts/voice-video-calling/pstn-call.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/direct-routing-provisioning.md
Azure Communication Services direct routing enables you to connect your existing
[!INCLUDE [Public Preview](../../includes/public-preview-include-document.md)]
-For information about whether Azure Communication Services direct routing is the right solution for your organization, see [Azure telephony concepts](./telephony-concept.md). For information about prerequisites and planning your deployment, see [Communication Services direct routing infrastructure requirements](./sip-interface-infrastructure.md).
+For information about whether Azure Communication Services direct routing is the right solution for your organization, see [Azure telephony concepts](./telephony-concept.md). For information about prerequisites and planning your deployment, see [Communication Services direct routing infrastructure requirements](./direct-routing-infrastructure.md).
## Connect the SBC with Azure Communication Services
If you are using Office 365, make sure the domain part of the SBC's FQDN is di
- For example, if `contoso.com` is a registered domain in O365, you cannot use `sbc.contoso.com` for Communication Services. But you can use an upper-level domain if one does not exist in O365: you can create an `acs.contoso.com` domain and use FQDN `sbc.acs.contoso.com` as an SBC name. - SBC certificate must match the name; wildcard certificates are supported. - The *.onmicrosoft.com domain cannot be used for the FQDN of the SBC.
-For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./sip-interface-infrastructure.md).
+For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
:::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Adding Session Border Controller."::: - When you are done, click Next.
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
This option requires:
### Conceptual documentation - [Phone number types in Azure Communication Services](./plan-solution.md)-- [Plan for Azure direct routing](./sip-interface-infrastructure.md)
+- [Plan for Azure direct routing](./direct-routing-infrastructure.md)
- [Session Border Controllers certified for Azure Communication Services direct routing](./certified-session-border-controllers.md) - [Pricing](../pricing.md)
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
# What is Azure Communication Services?
-Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication features to your applications without being an expert in communication technologies such as media encoding and real-time networking. Azure Communication Services supports various communication formats:
+Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication features to your applications without being an expert in communication technologies such as media encoding and real-time networking. This functionality is also supported in Azure Government.
+
+Azure Communication Services supports various communication formats:
1. Voice and Video Calling 1. Rich Text Chat
communication-services Quick Create Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/identity/quick-create-identity.md
In the [Azure portal](https://portal.azure.com), navigate to the **Identities &
Choose the scope of the access tokens. You can select none, one, or multiple. Click **Generate**.
+![Select the scopes of the identity and access tokens.](../media/quick-create-identity-choose-scopes.png)
+ You'll see an identity and corresponding user access token generated. You can copy these strings and use them in the [sample apps](../../samples/overview.md) and other testing scenarios.
+![The identity and access tokens are generated and show the expiration date.](../media/quick-create-identity-generated.png)
+ ## Next steps You may also want to: - [Learn about authentication](../../concepts/authentication.md)
+ - [Learn about client and server architecture](../../concepts/client-and-server-architecture.md)
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-gpu.md
When deploying GPU resources, set CPU and memory resources appropriate for the w
* **CUDA drivers** - Container instances with GPU resources are pre-provisioned with NVIDIA CUDA drivers and container runtimes, so you can use container images developed for CUDA workloads.
- We support only CUDA 9.0 at this stage. For example, you can use the following base images for your Docker file:
+ We support only CUDA 9.0 at this stage. For example, you can use the following base images for your Dockerfile:
* [nvidia/cuda:9.0-base-ubuntu16.04](https://hub.docker.com/r/nvidia/cuda/) * [tensorflow/tensorflow: 1.12.0-gpu-py3](https://hub.docker.com/r/tensorflow/tensorflow)+
+ > [!NOTE]
+ > To improve reliability when using a public container image from Docker Hub, import and manage the image in a private Azure container registry, and update your Dockerfile to use your privately managed base image. [Learn more about working with public images](../container-registry/buffer-gate-public-content.md).
## YAML example
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource. * [IPv6 addresses](../virtual-network/ipv6-overview.md) are not supported at this time. - ## Required network resources There are three Azure Virtual Network resources required for deploying container groups to a virtual network: the [virtual network](#virtual-network) itself, a [delegated subnet](#subnet-delegated) within the virtual network, and a [network profile](#network-profile).
container-registry Container Registry Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/container-registry-customer-managed-keys.md
You will also be unable to change (rotate) the encryption key. The resolution st
**User-assigned identity**
-If this issue occurs with a user-assigned identity, first reassign the identity using the GUID displayed in the error message. For example:
+If this issue occurs with a user-assigned identity, first reassign the identity using the [az acr identity assign](/cli/azure/acr/identity/#az_acr_identity_assign) command. Pass the identity's resource ID, or use the identity's name when it is in the same resource group as the registry. For example:
```azurecli
-az acr identity assign -n myRegistry --identities xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
+az acr identity assign -n myRegistry \
+ --identities "/subscriptions/mysubscription/resourcegroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myidentity"
``` Then, after changing the key and assigning a different identity, you can remove the original user-assigned identity.
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/analytical-store-introduction.md
It is possible to use Full Fidelity Schema for SQL (Core) API accounts. Here are
az cosmosdb create --name MyCosmosDBDatabaseAccount --resource-group MyResourceGroup --subscription MySubscription --analytical-storage-schema-type "FullFidelity" --enable-analytical-storage true ```
+> [!NOTE]
+> In the command above, replace `create` with `update` for existing accounts.
+
With the PowerShell: ``` New-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount -EnableAnalyticalStorage true -AnalyticalStorageSchemaType "FullFidelity" ```
+
+> [!NOTE]
+> In the command above, replace `New-AzCosmosDBAccount` with `Update-AzCosmosDBAccount` for existing accounts.
+
+ #### Well-defined schema representation
cosmos-db Cli Samples Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/cli-samples-gremlin.md
- Title: Azure CLI Samples for Azure Cosmos DB Gremlin API
-description: Azure CLI Samples for Azure Cosmos DB Gremlin API
---- Previously updated : 10/13/2020----
-# Azure CLI samples for Azure Cosmos DB Gremlin API
-
-The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
-
-These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
-
-## Common Samples
-
-These samples apply to all Azure Cosmos DB APIs
-
-|Task | Description |
-|||
-| [Add or failover regions](scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
-| [Account keys and connection strings](scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
-| [Secure with IP firewall](scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
-| [Secure new account with service endpoints](scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
-| [Secure existing account with service endpoints](scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
-|||
-
-## Gremlin API Samples
-
-|Task | Description |
-|||
-| [Create an Azure Cosmos account, database and graph](scripts/cli/gremlin/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
-| [Create an Azure Cosmos account, database and graph with autoscale](scripts/cli/gremlin/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
-| [Throughput operations](scripts/cli/gremlin/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
-| [Lock resources from deletion](scripts/cli/gremlin/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
-|||
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* The restore process restores all the properties of a container including its TTL configuration. As a result, it is possible that the data restored is deleted immediately if you configured that way. In order to prevent this situation, the restore timestamp must be before the TTL properties were added into the container.
+* Unique indexes in API for MongoDB can't be added or updated when you create a continuous backup mode account or migrate an account from periodic to continuous mode.
+ ## Next steps * Provision continuous backup using [Azure portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), [CLI](provision-account-continuous-backup.md#provision-cli), or [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template).
cosmos-db Access System Properties Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/access-system-properties-gremlin.md
+
+ Title: Access system document properties via Azure Cosmos DB Graph
+description: Learn how to read and write Azure Cosmos DB system document properties via the Gremlin API
+++ Last updated : 09/10/2019++++
+# System document properties
+
+Azure Cosmos DB has [system properties](/rest/api/cosmos-db/databases) such as ```_ts```, ```_self```, ```_attachments```, ```_rid```, and ```_etag``` on every document. Additionally, the Gremlin engine adds ```inVPartition``` and ```outVPartition``` properties on edges. By default, these properties aren't available for traversal. However, it's possible to include specific properties, or all of them, in a Gremlin traversal.
+
+```
+g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_ts').create())
+```
+
+## E-Tag
+
+This property is used for optimistic concurrency control. If an application needs to break an operation into a few separate traversals, it can use the eTag property to avoid data loss from concurrent writes.
+
+```
+g.withStrategies(ProjectionStrategy.build().IncludeSystemProperties('_etag').create()).V('1').has('_etag', '"00000100-0000-0800-0000-5d03edac0000"').property('test', '1')
+```
+
+## Time-to-live (TTL)
+
+If a collection has document expiration enabled and documents have the ```ttl``` property set on them, then this property is available in a Gremlin traversal as a regular vertex or edge property. A ```ProjectionStrategy``` isn't necessary to expose the time-to-live property.
+
+A vertex created with the traversal below is automatically deleted after **123 seconds**.
+
+```
+g.addV('vertex-one').property('ttl', 123)
+```
+
+## Next steps
+* [Cosmos DB Optimistic Concurrency](../faq.yml#how-does-the-sql-api-provide-concurrency-)
+* [Time to Live (TTL)](../time-to-live.md) in Azure Cosmos DB
cosmos-db Bulk Executor Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/bulk-executor-graph-dotnet.md
+
+ Title: Use the graph bulk executor .NET library with Azure Cosmos DB Gremlin API
+description: Learn how to use the bulk executor library to massively import graph data into an Azure Cosmos DB Gremlin API container.
+++ Last updated : 05/28/2019+++++
+# Using the graph bulk executor .NET library to perform bulk operations in Azure Cosmos DB Gremlin API
+
+This tutorial provides instructions about using Azure Cosmos DB's bulk executor .NET library to import and update graph objects in an Azure Cosmos DB Gremlin API container. The process uses the Graph class in the [bulk executor library](../bulk-executor-overview.md) to create vertex and edge objects programmatically and then insert multiple objects per network request. This behavior is configurable through the bulk executor library to make optimal use of both database and local memory resources.
+
+As opposed to sending Gremlin queries to a database, where commands are evaluated and then executed one at a time, using the bulk executor library requires you to create and validate the objects locally. After creating the objects, the library lets you send graph objects to the database service sequentially. Using this method, data ingestion speeds can be increased up to 100x, which makes it an ideal method for initial data migrations or periodic data movement operations. Learn more by visiting the GitHub page of the [Azure Cosmos DB Graph bulk executor sample application](https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started).
+
+## Bulk operations with graph data
+
+The [bulk executor library](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) contains a `Microsoft.Azure.CosmosDB.BulkExecutor.Graph` namespace to provide functionality for creating and importing graph objects.
+
+The following process outlines how data migration can be used for a Gremlin API container:
+1. Retrieve records from the data source.
+2. Construct `GremlinVertex` and `GremlinEdge` objects from the obtained records and add them into an `IEnumerable` data structure. If the data source isn't a graph database, this is the part of the application where you implement the logic to detect and add relationships.
+3. Use the [Graph BulkImportAsync method](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph.graphbulkexecutor.bulkimportasync) to insert the graph objects into the collection.
+
+This mechanism improves data migration efficiency compared to using a Gremlin client. The improvement comes from the fact that inserting data with Gremlin requires the application to send one query at a time, and each query must be validated, evaluated, and then executed to create the data. The bulk executor library handles the validation in the application and sends multiple graph objects at a time for each network request.
+
+### Creating Vertices and Edges
+
+`GraphBulkExecutor` provides the `BulkImportAsync` method, which requires an `IEnumerable` list of `GremlinVertex` or `GremlinEdge` objects, both defined in the `Microsoft.Azure.CosmosDB.BulkExecutor.Graph.Element` namespace. In the sample, we separated the edges and vertices into two BulkExecutor import tasks. See the example below:
+
+```csharp
+
+IBulkExecutor graphbulkExecutor = new GraphBulkExecutor(documentClient, targetCollection);
+
+BulkImportResponse vResponse = null;
+BulkImportResponse eResponse = null;
+
+try
+{
+ // Import a list of GremlinVertex objects
+ vResponse = await graphbulkExecutor.BulkImportAsync(
+ Utils.GenerateVertices(numberOfDocumentsToGenerate),
+ enableUpsert: true,
+ disableAutomaticIdGeneration: true,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+
+ // Import a list of GremlinEdge objects
+ eResponse = await graphbulkExecutor.BulkImportAsync(
+ Utils.GenerateEdges(numberOfDocumentsToGenerate),
+ enableUpsert: true,
+ disableAutomaticIdGeneration: true,
+ maxConcurrencyPerPartitionKeyRange: null,
+ maxInMemorySortingBatchSize: null,
+ cancellationToken: token);
+}
+catch (DocumentClientException de)
+{
+ Trace.TraceError("Document client exception: {0}", de);
+}
+catch (Exception e)
+{
+ Trace.TraceError("Exception: {0}", e);
+}
+```
+
+For more information on the parameters of the bulk executor library, refer to the [BulkImportData to Azure Cosmos DB topic](../bulk-executor-dot-net.md#bulk-import-data-to-an-azure-cosmos-account).
+
+The payload needs to be instantiated into `GremlinVertex` and `GremlinEdge` objects. Here is how these objects can be created:
+
+**Vertices**:
+```csharp
+// Creating a vertex
+GremlinVertex v = new GremlinVertex(
+ "vertexId",
+ "vertexLabel");
+
+// Adding custom properties to the vertex
+v.AddProperty("customProperty", "value");
+
+// Partitioning keys must be specified for all vertices
+v.AddProperty("partitioningKey", "value");
+```
+
+**Edges**:
+```csharp
+// Creating an edge
+GremlinEdge e = new GremlinEdge(
+ "edgeId",
+ "edgeLabel",
+ "targetVertexId",
+ "sourceVertexId",
+ "targetVertexLabel",
+ "sourceVertexLabel",
+ "targetVertexPartitioningKey",
+ "sourceVertexPartitioningKey");
+
+// Adding custom properties to the edge
+e.AddProperty("customProperty", "value");
+```
+
+> [!NOTE]
+> The bulk executor utility doesn't automatically check for existing vertices before adding edges. Validate this in the application before running the BulkImport tasks; a sketch of one way to perform this check follows this note.
+
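+The following is a minimal sketch of one way to pre-validate edge endpoints before building `GremlinEdge` objects. The record types are hypothetical stand-ins for whatever your data source produces; they are not part of the sample application.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+
+// Hypothetical source-record shapes used only for this sketch.
+public class SourceVertexRecord { public string Id { get; set; } }
+public class SourceEdgeRecord { public string SourceId { get; set; } public string TargetId { get; set; } }
+
+public static class EdgeEndpointCheck
+{
+    // Keep only the edges whose endpoints both appear among the vertices
+    // that will be bulk imported; log anything that gets dropped.
+    public static List<SourceEdgeRecord> FilterToKnownVertices(
+        IEnumerable<SourceVertexRecord> vertices,
+        IEnumerable<SourceEdgeRecord> edges)
+    {
+        var knownIds = new HashSet<string>(vertices.Select(v => v.Id));
+        var valid = new List<SourceEdgeRecord>();
+
+        foreach (var edge in edges)
+        {
+            if (knownIds.Contains(edge.SourceId) && knownIds.Contains(edge.TargetId))
+            {
+                valid.Add(edge);
+            }
+            else
+            {
+                Console.WriteLine($"Skipping edge {edge.SourceId} -> {edge.TargetId}: missing endpoint.");
+            }
+        }
+
+        return valid;
+    }
+}
+```
+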
+## Sample application
+
+### Prerequisites
+* Visual Studio 2019 with the Azure development workload. You can get started with the [Visual Studio 2019 Community Edition](https://visualstudio.microsoft.com/downloads/) for free.
+* An Azure subscription. You can create [a free Azure account here](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cosmos-db). Alternatively, you can create a Cosmos database account with [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+* An Azure Cosmos DB Gremlin API database with an **unlimited collection**. This guide shows how to get started with [Azure Cosmos DB Gremlin API in .NET](./create-graph-dotnet.md).
+* Git. For more information check out the [Git Downloads page](https://git-scm.com/downloads).
+
+### Clone the sample application
+In this tutorial, we'll follow through the steps for getting started by using the [Azure Cosmos DB Graph bulk executor sample](https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started) hosted on GitHub. This application consists of a .NET solution that randomly generates vertex and edge objects and then executes bulk insertions to the specified graph database account. To get the application, run the `git clone` command below:
+
+```bash
+git clone https://github.com/Azure-Samples/azure-cosmosdb-graph-bulkexecutor-dotnet-getting-started.git
+```
+
+This repository contains the GraphBulkExecutor sample with the following files:
+
+File|Description
+|
+`App.config`|This is where the application and database-specific parameters are specified. This file should be modified first to connect to the destination database and collections.
+`Program.cs`| This file contains the logic behind creating the `DocumentClient` collection, handling the cleanups and sending the bulk executor requests.
+`Util.cs`| This file contains a helper class that contains the logic behind generating test data, and checking if the database and collections exist.
+
+In the `App.config` file, the following are the configuration values that can be provided:
+
+Setting|Description
+|
+`EndPointUrl`|This is **your .NET SDK endpoint** found in the Overview blade of your Azure Cosmos DB Gremlin API database account. This has the format of `https://your-graph-database-account.documents.azure.com:443/`
+`AuthorizationKey`|This is the Primary or Secondary key listed under your Azure Cosmos DB account. Learn more about [Securing Access to Azure Cosmos DB data](../secure-access-to-data.md#primary-keys)
+`DatabaseName`, `CollectionName`|These are the **target database and collection names**. When `ShouldCleanupOnStart` is set to `true`, the existing database and collection are dropped and re-created by using these values along with `CollectionThroughput`. Similarly, if `ShouldCleanupOnFinish` is set to `true`, they are used to delete the database as soon as the ingestion is over. Note that the target collection must be **an unlimited collection**.
+`CollectionThroughput`|This is used to create a new collection if the `ShouldCleanupOnStart` option is set to `true`.
+`ShouldCleanupOnStart`|This will drop the database and collection before the program runs, and then create new ones with the `DatabaseName`, `CollectionName` and `CollectionThroughput` values.
+`ShouldCleanupOnFinish`|This will drop the database and collection with the specified `DatabaseName` and `CollectionName` after the program runs.
+`NumberOfDocumentsToImport`|This will determine the number of test vertices and edges that will be generated in the sample. This number will apply to both vertices and edges.
+`NumberOfBatches`|This will determine the number of batches that the generated test vertices and edges are split into for the import.
+`CollectionPartitionKey`|This will be used to create the test vertices and edges, where this property will be auto-assigned. This will also be used when re-creating the database and collections if the `ShouldCleanupOnStart` option is set to `true`.
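+
+In code, these settings are typically read before the `DocumentClient` and `GraphBulkExecutor` objects are created. The following is a minimal sketch that assumes the standard `System.Configuration.ConfigurationManager` API; the sample's actual code may read the values differently.
+
+```csharp
+using System.Configuration;
+
+// Sketch only: read the bulk executor sample's settings from App.config.
+string endpointUrl      = ConfigurationManager.AppSettings["EndPointUrl"];
+string authorizationKey = ConfigurationManager.AppSettings["AuthorizationKey"];
+string databaseName     = ConfigurationManager.AppSettings["DatabaseName"];
+string collectionName   = ConfigurationManager.AppSettings["CollectionName"];
+int documentsToImport   = int.Parse(ConfigurationManager.AppSettings["NumberOfDocumentsToImport"]);
+```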
+
+### Run the sample application
+
+1. Add your specific database configuration parameters in `App.config`. This will be used to create a DocumentClient instance. If the database and container have not been created yet, they will be created automatically.
+2. Run the application. This will call `BulkImportAsync` two times, once to import vertices and once to import edges. Any objects that generate an error when they're inserted are added to either `.\BadVertices.txt` or `.\BadEdges.txt`.
+3. Evaluate the results by querying the graph database. If the `ShouldCleanupOnFinish` option is set to true, then the database will automatically be deleted.
+
+## Next steps
+
+* To learn about NuGet package details and release notes of bulk executor .NET library, see [bulk executor SDK details](../sql-api-sdk-bulk-executor-dot-net.md).
+* Check out the [Performance Tips](../bulk-executor-dot-net.md#performance-tips) to further optimize the usage of bulk executor.
+* Review the [BulkExecutor.Graph Reference article](/dotnet/api/microsoft.azure.cosmosdb.bulkexecutor.graph) for more details about the classes and methods defined in this namespace.
cosmos-db Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/cli-samples.md
+
+ Title: Azure CLI Samples for Azure Cosmos DB Gremlin API
+description: Azure CLI Samples for Azure Cosmos DB Gremlin API
++++ Last updated : 10/13/2020++++
+# Azure CLI samples for Azure Cosmos DB Gremlin API
+
+The following table includes links to sample Azure CLI scripts for Azure Cosmos DB. Use the links on the right to navigate to API specific samples. Common samples are the same across all APIs. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). Azure Cosmos DB CLI script samples can also be found in the [Azure Cosmos DB CLI GitHub Repository](https://github.com/Azure-Samples/azure-cli-samples/tree/master/cosmosdb).
+
+These samples require Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli)
+
+## Common Samples
+
+These samples apply to all Azure Cosmos DB APIs
+
+|Task | Description |
+|||
+| [Add or failover regions](../scripts/cli/common/regions.md?toc=%2fcli%2fazure%2ftoc.json) | Add a region, change failover priority, trigger a manual failover.|
+| [Account keys and connection strings](../scripts/cli/common/keys.md?toc=%2fcli%2fazure%2ftoc.json) | List account keys, read-only keys, regenerate keys and list connection strings.|
+| [Secure with IP firewall](../scripts/cli/common/ipfirewall.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account with IP firewall configured.|
+| [Secure new account with service endpoints](../scripts/cli/common/service-endpoints.md?toc=%2fcli%2fazure%2ftoc.json)| Create a Cosmos account and secure with service-endpoints.|
+| [Secure existing account with service endpoints](../scripts/cli/common/service-endpoints-ignore-missing-vnet.md?toc=%2fcli%2fazure%2ftoc.json)| Update a Cosmos account to secure with service-endpoints when the subnet is eventually configured.|
+|||
+
+## Gremlin API Samples
+
+|Task | Description |
+|||
+| [Create an Azure Cosmos account, database and graph](../scripts/cli/gremlin/create.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph for Gremlin API. |
+| [Create an Azure Cosmos account, database and graph with autoscale](../scripts/cli/gremlin/autoscale.md?toc=%2fcli%2fazure%2ftoc.json)| Creates an Azure Cosmos DB account, database, and graph with autoscale for Gremlin API. |
+| [Throughput operations](../scripts/cli/gremlin/throughput.md?toc=%2fcli%2fazure%2ftoc.json) | Read, update and migrate between autoscale and standard throughput on a database and graph.|
+| [Lock resources from deletion](../scripts/cli/gremlin/lock.md?toc=%2fcli%2fazure%2ftoc.json)| Prevent resources from being deleted with resource locks.|
+|||
cosmos-db Create Graph Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-console.md
+
+ Title: 'Query with Azure Cosmos DB Gremlin API using TinkerPop Gremlin Console: Tutorial'
+description: An Azure Cosmos DB quickstart that creates vertices, edges, and queries by using the Azure Cosmos DB Gremlin API.
+++ Last updated : 07/10/2020+++
+# Quickstart: Create, query, and traverse an Azure Cosmos DB graph database using the Gremlin console
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](graph-introduction.md) account, database, and graph (container) using the Azure portal, and then use the [Gremlin Console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) from [Apache TinkerPop](https://tinkerpop.apache.org) to work with Gremlin API data. In this tutorial, you create and query vertices and edges, update a vertex property, query vertices, traverse the graph, and drop a vertex.
++
+The Gremlin console is Groovy/Java based and runs on Linux, Mac, and Windows. You can download it from the [Apache TinkerPop site](https://tinkerpop.apache.org/downloads.html).
+
+## Prerequisites
+
+You need to have an Azure subscription to create an Azure Cosmos DB account for this quickstart.
++
+You also need to install the [Gremlin Console](https://tinkerpop.apache.org/downloads.html). The **recommended version is v3.4.3** or earlier. (To use Gremlin Console on Windows, you need to install the [Java Runtime](https://www.oracle.com/technetwork/java/javase/overview/index.html).)
+
+## Create a database account
++
+## Add a graph
++
+## <a id="ConnectAppService"></a>Connect to your app service/Graph
+
+1. Before starting the Gremlin Console, create or modify the remote-secure.yaml configuration file in the `apache-tinkerpop-gremlin-console-3.2.5/conf` directory.
+2. Fill in your *host*, *port*, *username*, *password*, *connectionPool*, and *serializer* configurations as defined in the following table:
+
+ Setting|Suggested value|Description
+ ||
+ hosts|[*account-name*.**gremlin**.cosmos.azure.com]|See the following screenshot. This is the **Gremlin URI** value on the Overview page of the Azure portal, in square brackets, with the trailing :443/ removed. Note: Be sure to use the Gremlin value, and **not** the URI that ends with [*account-name*.documents.azure.com] which would likely result in a "Host did not respond in a timely fashion" exception when attempting to execute Gremlin queries later.
+ port|443|Set to 443.
+ username|*Your username*|The resource of the form `/dbs/<db>/colls/<coll>` where `<db>` is your database name and `<coll>` is your collection name.
+ password|*Your primary key*| See second screenshot below. This is your primary key, which you can retrieve from the Keys page of the Azure portal, in the Primary Key box. Use the copy button on the left side of the box to copy the value.
+ connectionPool|{enableSsl: true}|Your connection pool setting for TLS.
+ serializer|{ className: org.apache.tinkerpop.gremlin.<br>driver.ser.GraphSONMessageSerializerV2d0,<br> config: { serializeResultToString: true }}|Set to this value and delete any `\n` line breaks when pasting in the value.
+
+ For the hosts value, copy the **Gremlin URI** value from the **Overview** page:
+
+ :::image type="content" source="./media/create-graph-console/gremlin-uri.png" alt-text="View and copy the Gremlin URI value on the Overview page in the Azure portal":::
+
+ For the password value, copy the **Primary key** from the **Keys** page:
+
+ :::image type="content" source="./media/create-graph-console/keys.png" alt-text="View and copy your primary key in the Azure portal, Keys page":::
+
+ Your remote-secure.yaml file should look like this:
+
+ ```yaml
+ hosts: [your_database_server.gremlin.cosmos.azure.com]
+ port: 443
+ username: /dbs/your_database_account/colls/your_collection
+ password: your_primary_key
+ connectionPool: {
+ enableSsl: true
+ }
+ serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0, config: { serializeResultToString: true }}
+ ```
+
+   Make sure to wrap the value of the *hosts* parameter in brackets [].
+
+1. In your terminal, run `bin/gremlin.bat` or `bin/gremlin.sh` to start the [Gremlin Console](https://tinkerpop.apache.org/docs/3.2.5/tutorials/getting-started/).
+
+1. In your terminal, run `:remote connect tinkerpop.server conf/remote-secure.yaml` to connect to your app service.
+
+ > [!TIP]
+ > If you receive the error `No appenders could be found for logger` ensure that you updated the serializer value in the remote-secure.yaml file as described in step 2. If your configuration is correct, then this warning can be safely ignored as it should not impact the use of the console.
+
+1. Next run `:remote console` to redirect all console commands to the remote server.
+
+ > [!NOTE]
+ > If you don't run the `:remote console` command but would like to redirect all console commands to the remote server, you should prefix the command with `:>`, for example you should run the command as `:> g.V().count()`. This prefix is a part of the command and it is important when using the Gremlin console with Azure Cosmos DB. Omitting this prefix instructs the console to execute the command locally, often against an in-memory graph. Using this prefix `:>` tells the console to execute a remote command, in this case against Azure Cosmos DB (either the localhost emulator, or an Azure instance).
+
+Great! Now that we finished the setup, let's start running some console commands.
+
+Let's try a simple count() command. Type the following into the console at the prompt:
+
+```java
+g.V().count()
+```
+
+## Create vertices and edges
+
+Let's begin by adding five person vertices for *Thomas*, *Mary Kay*, *Robin*, *Ben*, and *Jack*.
+
+Input (Thomas):
+
+```java
+g.addV('person').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44).property('userid', 1).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d,label:person,type:vertex,properties:[firstName:[[id:f02a749f-b67c-4016-850e-910242d68953,value:Thomas]],lastName:[[id:f5fa3126-8818-4fda-88b0-9bb55145ce5c,value:Andersen]],age:[[id:f6390f9c-e563-433e-acbf-25627628016e,value:44]],userid:[[id:796cdccc-2acd-4e58-a324-91d6f6f5ed6d|userid,value:1]]]]
+```
+
+Input (Mary Kay):
+
+```java
+g.addV('person').property('firstName', 'Mary Kay').property('lastName', 'Andersen').property('age', 39).property('userid', 2).property('pk', 'pk')
+
+```
+
+Output:
+
+```bash
+==>[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e,label:person,type:vertex,properties:[firstName:[[id:ea0604f8-14ee-4513-a48a-1734a1f28dc0,value:Mary Kay]],lastName:[[id:86d3bba5-fd60-4856-9396-c195ef7d7f4b,value:Andersen]],age:[[id:bc81b78d-30c4-4e03-8f40-50f72eb5f6da,value:39]],userid:[[id:0ac9be25-a476-4a30-8da8-e79f0119ea5e|userid,value:2]]]]
+
+```
+
+Input (Robin):
+
+```java
+g.addV('person').property('firstName', 'Robin').property('lastName', 'Wakefield').property('userid', 3).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e,label:person,type:vertex,properties:[firstName:[[id:ec65f078-7a43-4cbe-bc06-e50f2640dc4e,value:Robin]],lastName:[[id:a3937d07-0e88-45d3-a442-26fcdfb042ce,value:Wakefield]],userid:[[id:8dc14d6a-8683-4a54-8d74-7eef1fb43a3e|userid,value:3]]]]
+```
+
+Input (Ben):
+
+```java
+g.addV('person').property('firstName', 'Ben').property('lastName', 'Miller').property('userid', 4).property('pk', 'pk')
+
+```
+
+Output:
+
+```bash
+==>[id:ee86b670-4d24-4966-9a39-30529284b66f,label:person,type:vertex,properties:[firstName:[[id:a632469b-30fc-4157-840c-b80260871e9a,value:Ben]],lastName:[[id:4a08d307-0719-47c6-84ae-1b0b06630928,value:Miller]],userid:[[id:ee86b670-4d24-4966-9a39-30529284b66f|userid,value:4]]]]
+```
+
+Input (Jack):
+
+```java
+g.addV('person').property('firstName', 'Jack').property('lastName', 'Connor').property('userid', 5).property('pk', 'pk')
+```
+
+Output:
+
+```bash
+==>[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469,label:person,type:vertex,properties:[firstName:[[id:4250824e-4b72-417f-af98-8034aa15559f,value:Jack]],lastName:[[id:44c1d5e1-a831-480a-bf94-5167d133549e,value:Connor]],userid:[[id:4c835f2a-ea5b-43bb-9b6b-215488ad8469|userid,value:5]]]]
+```
++
+Next, let's add edges for relationships between our people.
+
+Input (Thomas -> Mary Kay):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Mary Kay'))
+```
+
+Output:
+
+```bash
+==>[id:c12bf9fb-96a1-4cb7-a3f8-431e196e702f,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:0d1fa428-780c-49a5-bd3a-a68d96391d5c,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
+```
+
+Input (Thomas -> Robin):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Thomas').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Robin'))
+```
+
+Output:
+
+```bash
+==>[id:58319bdd-1d3e-4f17-a106-0ddf18719d15,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:3e324073-ccfc-4ae1-8675-d450858ca116,outV:1ce821c6-aa3d-4170-a0b7-d14d2a4d18c3]
+```
+
+Input (Robin -> Ben):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Robin').addE('knows').to(g.V().hasLabel('person').has('firstName', 'Ben'))
+```
+
+Output:
+
+```bash
+==>[id:889c4d3c-549e-4d35-bc21-a3d1bfa11e00,label:knows,type:edge,inVLabel:person,outVLabel:person,inV:40fd641d-546e-412a-abcc-58fe53891aab,outV:3e324073-ccfc-4ae1-8675-d450858ca116]
+```
+
+## Update a vertex
+
+Let's update the *Thomas* vertex with a new age of *45*.
+
+Input:
+```java
+g.V().hasLabel('person').has('firstName', 'Thomas').property('age', 45)
+```
+Output:
+
+```bash
+==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
+```
+
+## Query your graph
+
+Now, let's run a variety of queries against your graph.
+
+First, let's try a query with a filter to return only people who are older than 40 years old.
+
+Input (filter query):
+
+```java
+g.V().hasLabel('person').has('age', gt(40))
+```
+
+Output:
+
+```bash
+==>[id:ae36f938-210e-445a-92df-519f2b64c8ec,label:person,type:vertex,properties:[firstName:[[id:872090b6-6a77-456a-9a55-a59141d4ebc2,value:Thomas]],lastName:[[id:7ee7a39a-a414-4127-89b4-870bc4ef99f3,value:Andersen]],age:[[id:a2a75d5a-ae70-4095-806d-a35abcbfe71d,value:45]]]]
+```
+
+Next, let's project the first name for the people who are older than 40 years old.
+
+Input (filter + projection query):
+
+```java
+g.V().hasLabel('person').has('age', gt(40)).values('firstName')
+```
+
+Output:
+
+```bash
+==>Thomas
+```
+
+## Traverse your graph
+
+Let's traverse the graph to return all of Thomas's friends.
+
+Input (friends of Thomas):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person')
+```
+
+Output:
+
+```bash
+==>[id:f04bc00b-cb56-46c4-a3bb-a5870c42f7ff,label:person,type:vertex,properties:[firstName:[[id:14feedec-b070-444e-b544-62be15c7167c,value:Mary Kay]],lastName:[[id:107ab421-7208-45d4-b969-bbc54481992a,value:Andersen]],age:[[id:4b08d6e4-58f5-45df-8e69-6b790b692e0a,value:39]]]]
+==>[id:91605c63-4988-4b60-9a30-5144719ae326,label:person,type:vertex,properties:[firstName:[[id:f760e0e6-652a-481a-92b0-1767d9bf372e,value:Robin]],lastName:[[id:352a4caa-bad6-47e3-a7dc-90ff342cf870,value:Wakefield]]]]
+```
+
+Next, let's get the next layer of vertices. Traverse the graph to return all the friends of Thomas's friends.
+
+Input (friends of friends of Thomas):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Thomas').outE('knows').inV().hasLabel('person').outE('knows').inV().hasLabel('person')
+```
+Output:
+
+```bash
+==>[id:a801a0cb-ee85-44ee-a502-271685ef212e,label:person,type:vertex,properties:[firstName:[[id:b9489902-d29a-4673-8c09-c2b3fe7f8b94,value:Ben]],lastName:[[id:e084f933-9a4b-4dbc-8273-f0171265cf1d,value:Miller]]]]
+```
+
+## Drop a vertex
+
+Let's now delete a vertex from the graph database.
+
+Input (drop Jack vertex):
+
+```java
+g.V().hasLabel('person').has('firstName', 'Jack').drop()
+```
+
+## Clear your graph
+
+Finally, let's clear the database of all vertices and edges.
+
+Input:
+
+```java
+g.E().drop()
+g.V().drop()
+```
+
+Congratulations! You've completed this Azure Cosmos DB: Gremlin API tutorial!
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, create vertices and edges, and traverse your graph using the Gremlin console. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-dotnet.md
+
+ Title: Build an Azure Cosmos DB .NET Framework, Core application using the Gremlin API
+description: Presents a .NET Framework/Core code sample you can use to connect to and query Azure Cosmos DB
++++
+ms.devlang: dotnet
+ Last updated : 02/21/2020+++
+# Quickstart: Build a .NET Framework or Core application using the Azure Cosmos DB Gremlin API account
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, key/value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+This quickstart demonstrates how to create an Azure Cosmos DB [Gremlin API](graph-introduction.md) account, database, and graph (container) using the Azure portal. You then build and run a console app by using the open-source driver [Gremlin.Net](https://tinkerpop.apache.org/docs/3.2.7/reference/#gremlin-DotNet).
+
+## Prerequisites
+
+If you don't already have Visual Studio 2019 installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/). Make sure that you enable **Azure development** during the Visual Studio setup.
++
+## Create a database account
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-gremlindotnet-getting-started.git
+ ```
+
+4. Then open Visual Studio and open the solution file.
+
+5. Restore the NuGet packages in the project. This should include the Gremlin.Net driver, as well as the Newtonsoft.Json package.
++
+6. You can also install the Gremlin.Net@v3.4.6 driver manually by using the NuGet package manager, or the [NuGet command-line utility](/nuget/install-nuget-client-tools):
+
+ ```bash
+ nuget install Gremlin.NET -Version 3.4.6
+ ```
+
+> [!NOTE]
+> The Gremlin API currently only [supports Gremlin.Net up to v3.4.6](gremlin-support.md#compatible-client-libraries). If you install the latest version, you'll receive errors when using the service.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+The following snippets are all taken from the Program.cs file.
+
+* Set your connection parameters based on the account created above:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="configureConnectivity":::
+
+* The Gremlin commands to be executed are listed in a Dictionary:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineQueries":::
+
+* Create new `GremlinServer` and `GremlinClient` connection objects by using the parameters provided above:
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="defineClientandServerObjects":::
+
+* Execute each Gremlin query by using the `GremlinClient` object with an async task. You can read the Gremlin queries from the dictionary defined in the previous step and execute them. Then get the result and read the values, which are formatted as a dictionary, by using the `JsonSerializer` class from the Newtonsoft.Json package (a standalone sketch of this pattern appears after this list):
+
+ :::code language="csharp" source="~/azure-cosmosdb-graph-dotnet/GremlinNetSample/Program.cs" id="executeQueries":::
+
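+The referenced snippets live in the sample's *Program.cs* file. As a point of reference only, the following is a minimal, self-contained sketch of the same pattern. It assumes Gremlin.Net 3.4.x and uses placeholder connection values; it is not the sample's actual code.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Gremlin.Net.Driver;
+using Gremlin.Net.Structure.IO.GraphSON;
+using Newtonsoft.Json;
+
+public static class GremlinSketch
+{
+    public static async Task Main()
+    {
+        // Placeholder values; use your own account name, key, database, and graph.
+        var gremlinServer = new GremlinServer(
+            "your-account.gremlin.cosmosdb.azure.com", 443, enableSsl: true,
+            username: "/dbs/your-database/colls/your-graph",
+            password: "your-primary-key");
+
+        using (var gremlinClient = new GremlinClient(
+            gremlinServer, new GraphSON2Reader(), new GraphSON2Writer(),
+            GremlinClient.GraphSON2MimeType))
+        {
+            // Submit one Gremlin query and print each result as JSON.
+            var results = await gremlinClient.SubmitAsync<dynamic>("g.V().count()");
+            foreach (var result in results)
+            {
+                Console.WriteLine(JsonConvert.SerializeObject(result));
+            }
+        }
+    }
+}
+```
+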
+## Update your connection string
+
+Now go back to the Azure portal to get your connection string information and copy it into the app.
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to your graph database account. In the **Overview** tab, you can see two endpoints-
+
+ **.NET SDK URI** - This value is used when you connect to the graph account by using Microsoft.Azure.Graphs library.
+
+ **Gremlin Endpoint** - This value is used when you connect to the graph account by using Gremlin.Net library.
+
+ :::image type="content" source="./media/create-graph-dotnet/endpoint.png" alt-text="Copy the endpoint":::
+
+   To run this sample, copy the **Gremlin Endpoint** value, and delete the port number at the end, so that the URI becomes `https://<your cosmos db account name>.gremlin.cosmosdb.azure.com`. The endpoint value should look like `testgraphacct.gremlin.cosmosdb.azure.com`.
+
+1. Next, navigate to the **Keys** tab and copy the **PRIMARY KEY** value from the Azure portal.
+
+1. After you have copied the URI and PRIMARY KEY of your account, save them to new environment variables on the local machine running the application. To set the environment variables, open a command prompt window, and run the following command. Make sure to replace the `<your Azure Cosmos account name>` and `<Your_Azure_Cosmos_account_PRIMARY_KEY>` placeholder values. (A short sketch of reading these variables back in code follows this procedure.)
+
+ ```console
+ setx Host "<your Azure Cosmos account name>.gremlin.cosmosdb.azure.com"
+ setx PrimaryKey "<Your_Azure_Cosmos_account_PRIMARY_KEY>"
+ ```
+
+1. Open the *Program.cs* file and update the "database" and "container" variables with the database and container (which is also the graph name) names created above.
+
+ `private static string database = "your-database-name";`
+ `private static string container = "your-container-or-graph-name";`
+
+1. Save the Program.cs file.
+
+You've now updated your app with all the info it needs to communicate with Azure Cosmos DB.
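+
+For reference, a console app can read the values stored with `setx` back through `Environment.GetEnvironmentVariable`. This is a minimal sketch only; the sample's *Program.cs* may read them slightly differently.
+
+```csharp
+using System;
+
+// Sketch only: read the connection values saved as environment variables.
+string host = Environment.GetEnvironmentVariable("Host");
+string primaryKey = Environment.GetEnvironmentVariable("PrimaryKey");
+
+Console.WriteLine($"Connecting to {host}");
+```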
+
+## Run the console app
+
+Press CTRL + F5 to run the application. The application prints both the Gremlin query commands and results in the console.
+
+ The console window displays the vertices and edges being added to the graph. When the script completes, press ENTER to close the console window.
+
+## Browse using the Data Explorer
+
+You can now go back to Data Explorer in the Azure portal and browse and query your new graph data.
+
+1. In Data Explorer, the new database appears in the Graphs pane. Expand the database and container nodes, and then click **Graph**.
+
+2. Click the **Apply Filter** button to use the default query to view all the vertices in the graph. The data generated by the sample app is displayed in the Graphs pane.
+
+ You can zoom in and out of the graph, you can expand the graph display space, add additional vertices, and move vertices on the display surface.
+
+ :::image type="content" source="./media/create-graph-dotnet/graph-explorer.png" alt-text="View the graph in Data Explorer in the Azure portal":::
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run an app. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-java.md
+
+ Title: Build a graph database with Java in Azure Cosmos DB
+description: Presents a Java code sample you can use to connect to and query graph data in Azure Cosmos DB using Gremlin.
++
+ms.devlang: java
+ Last updated : 03/26/2019+++++
+# Quickstart: Build a graph database with the Java SDK and the Azure Cosmos DB Gremlin API
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Java app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- [Java Development Kit (JDK) 8](https://www.azul.com/downloads/azure-only/zulu/?&version=java-8-lts&architecture=x86-64-bit&package=jdk). Point your `JAVA_HOME` environment variable to the folder where the JDK is installed.
+- A [Maven binary archive](https://maven.apache.org/download.cgi).
+- [Git](https://www.git-scm.com/downloads).
+
+## Create a database account
+
+Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-java-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection information](#update-your-connection-information).
+
+The following snippets are all taken from the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\GetStarted\Program.java* file.
+
+This Java console app uses a [Gremlin API](graph-introduction.md) database with the OSS [Apache TinkerPop](https://tinkerpop.apache.org/) driver.
+
+- The Gremlin `Client` is initialized from the configuration in the *C:\git-samples\azure-cosmos-db-graph-java-getting-started\src\remote.yaml* file.
+
+ ```java
+ cluster = Cluster.build(new File("src/remote.yaml")).create();
+ ...
+ client = cluster.connect();
+ ```
+
+- A series of Gremlin steps is executed by using the `client.submit` method.
+
+ ```java
+ ResultSet results = client.submit(gremlin);
+
+ CompletableFuture<List<Result>> completableFutureResults = results.all();
+ List<Result> resultList = completableFutureResults.get();
+
+ for (Result result : resultList) {
+ System.out.println(result.toString());
+ }
+ ```
+
+## Update your connection information
+
+Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
+
+ Copy the first portion of the URI value.
+
+ :::image type="content" source="./media/create-graph-java/copy-access-key-azure-portal.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+
+2. Open the *src/remote.yaml* file and paste the unique ID value over `$name$` in `hosts: [$name$.graphs.azure.com]`.
+
+ Line 1 of *remote.yaml* should now look similar to
+
+ `hosts: [test-graph.graphs.azure.com]`
+
+3. Change `graphs` to `gremlin.cosmosdb` in the `endpoint` value. (If you created your graph database account before December 20, 2017, make no changes to the endpoint value and continue to the next step.)
+
+ The endpoint value should now look like this:
+
+ `"endpoint": "https://testgraphacct.gremlin.cosmosdb.azure.com:443/"`
+
+4. In the Azure portal, use the copy button to copy the PRIMARY KEY and paste it over `$masterKey$` in `password: $masterKey$`.
+
+ Line 4 of *remote.yaml* should now look similar to
+
+ `password: 2Ggkr662ifxz2Mg==`
+
+5. Change line 3 of *remote.yaml* from
+
+ `username: /dbs/$database$/colls/$collection$`
+
+ to
+
+ `username: /dbs/sample-database/colls/sample-graph`
+
+ If you used a unique name for your sample database or graph, update the values as appropriate.
+
+6. Save the *remote.yaml* file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the azure-cosmos-db-graph-java-getting-started folder.
+
+ ```git
+ cd "C:\git-samples\azure-cosmos-db-graph-java-getting-started"
+ ```
+
+2. In the git terminal window, use the following command to install the required Java packages.
+
+ ```git
+ mvn package
+ ```
+
+3. In the git terminal window, use the following command to start the Java application.
+
+ ```git
+ mvn exec:java -D exec.mainClass=GetStarted.Program
+ ```
+
+ The terminal window displays the vertices being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, select Enter, then switch back to the Azure portal in your internet browser.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+You can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
+
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+
+2. In the **Results** list, notice the new users added to the graph. Select **ben** and notice that the user is connected to robin. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+
+3. Let's add a few new users. Select **New Vertex** to add data to your graph.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+
+4. In the label box, enter *person*.
+
+5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
+
+ key|value|Notes
+ -|-|-
+ id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|female|
+ tech | java |
+
+ > [!NOTE]
+ > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+
+7. Select **New Vertex** again and add an additional new user.
+
+8. Enter a label of *person*.
+
+9. Select **Add property** to add each of the following properties:
+
+ key|value|Notes
+ -|-|-
+ id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|male|
+ school|MIT|
+
+10. Select **OK**.
+
+11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
+
+12. Now you can connect rakesh, and ashley. Ensure **ashley** is selected in the **Results** list, then select :::image type="content" source="./media/create-graph-java/edit-pencil-button.png" alt-text="Change the target of a vertex in a graph"::: next to **Targets** on lower right side. You may need to widen your window to see the button.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph - Azure CosmosDB":::
+
+13. In the **Target** box enter *rakesh*, and in the **Edge label** box enter *knows*, and then select the check box.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection in Data Explorer - Azure CosmosDB":::
+
+14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/create-graph-java/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer - Azure CosmosDB":::
+
+That completes the resource creation part of this tutorial. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Java app that adds data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query-graph.md)
+
cosmos-db Create Graph Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-nodejs.md
+
+ Title: Build an Azure Cosmos DB Node.js application by using Gremlin API
+description: Presents a Node.js code sample you can use to connect to and query Azure Cosmos DB
++
+ms.devlang: nodejs
+ Last updated : 06/05/2019++++
+# Quickstart: Build a Node.js application by using Azure Cosmos DB Gremlin API account
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Node.js app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+- [Node.js 0.10.29+](https://nodejs.org/).
+- [Git](https://git-scm.com/downloads).
+
+## Create a database account
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to the new folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-nodejs-getting-started.git
+ ```
+
+3. Open the solution file in Visual Studio.
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-string).
+
+The following snippets are all taken from the *app.js* file.
+
+This console app uses the open-source [Gremlin Node.js](https://www.npmjs.com/package/gremlin) driver.
+
+* The Gremlin client is created.
+
+ ```javascript
+ const authenticator = new Gremlin.driver.auth.PlainTextSaslAuthenticator(
+ `/dbs/${config.database}/colls/${config.collection}`,
+ config.primaryKey
+ )
++
+ const client = new Gremlin.driver.Client(
+ config.endpoint,
+ {
+ authenticator,
+ traversalsource : "g",
+ rejectUnauthorized : true,
+ mimeType : "application/vnd.gremlin-v2.0+json"
+ }
+ );
+
+ ```
+
+ The configurations are all in *config.js*, which we edit in the [following section](#update-your-connection-string).
+
+* A series of functions are defined to execute different Gremlin operations. This is one of them:
+
+ ```javascript
+ function addVertex1()
+ {
+ console.log('Running Add Vertex1');
+ return client.submit("g.addV(label).property('id', id).property('firstName', firstName).property('age', age).property('userid', userid).property('pk', 'pk')", {
+ label:"person",
+ id:"thomas",
+ firstName:"Thomas",
+ age:44, userid: 1
+ }).then(function (result) {
+ console.log("Result: %s\n", JSON.stringify(result));
+ });
+ }
+ ```
+
+* Each function calls the `client.submit` method with a Gremlin query string parameter. Here is an example of how `g.V().count()` is executed:
+
+ ```javascript
+ function countVertices()
+ {
+ console.log('Running Count');
+ return client.submit("g.V().count()", { }).then(function (result) {
+ console.log("Result: %s\n", JSON.stringify(result));
+ });
+ }
+ ```
+
+* At the end of the file, all methods are then invoked. This will execute them one after the other:
+
+ ```javascript
+ client.open()
+ .then(dropGraph)
+ .then(addVertex1)
+ .then(addVertex2)
+ .then(addEdge)
+ .then(countVertices)
+ .catch((err) => {
+ console.error("Error running query...");
+ console.error(err)
+ }).then((res) => {
+ client.close();
+ finish();
+ }).catch((err) =>
+ console.error("Fatal error:", err)
+ );
+ ```
++
+## Update your connection string
+
+1. Open the *config.js* file.
+
+2. In *config.js*, fill in the `config.endpoint` key with the **Gremlin Endpoint** value from the **Overview** page of your Cosmos DB account in the Azure portal.
+
+ `config.endpoint = "https://<your_Gremlin_account_name>.gremlin.cosmosdb.azure.com:443/";`
+
+ :::image type="content" source="./media/create-graph-nodejs/gremlin-uri.png" alt-text="View and copy an access key in the Azure portal, Overview page":::
+
+3. In *config.js*, fill in the `config.primaryKey` value with the **Primary Key** value from the **Keys** page of your Cosmos DB account in the Azure portal.
+
+ `config.primaryKey = "PRIMARYKEY";`
+
+ :::image type="content" source="./media/create-graph-nodejs/keys.png" alt-text="Azure portal keys blade":::
+
+4. Enter the database name and graph (container) name for the values of `config.database` and `config.collection`.
+
+Here's an example of what your completed *config.js* file should look like:
+
+```javascript
+var config = {}
+
+// Note that this must include the protocol (HTTPS:// for .NET SDK URI or wss:// for Gremlin Endpoint) and the port number
+config.endpoint = "https://testgraphacct.gremlin.cosmosdb.azure.com:443/";
+config.primaryKey = "Pams6e7LEUS7LJ2Qk0fjZf3eGo65JdMWHmyn65i52w8ozPX2oxY3iP0yu05t9v1WymAHNcMwPIqNAEv3XDFsEg==";
+config.database = "graphdb"
+config.collection = "Persons"
+
+module.exports = config;
+```
+
+## Run the console app
+
+1. Open a terminal window and use the `cd` command to change to the folder that contains the project's *package.json* file.
+
+2. Run `npm install` to install the required npm modules, including `gremlin`.
+
+3. Run `node app.js` in a terminal to start your node application.
+
+## Browse with Data Explorer
+
+You can now go back to Data Explorer in the Azure portal to view, query, modify, and work with your new graph data.
+
+In Data Explorer, the new database appears in the **Graphs** pane. Expand the database, followed by the container, and then select **Graph**.
+
+The data generated by the sample app is displayed in the next pane within the **Graph** tab when you select **Apply Filter**.
+
+Try appending `.has('firstName', 'Thomas')` to `g.V()` to test the filter. Note that the value is case sensitive.
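+
+For example, assuming the sample data from *app.js* has been loaded, a filter like the following (shown only as an illustrative query) returns just the Thomas vertex:
+
+```java
+g.V().has('firstName', 'Thomas')
+```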
+
+## Review SLAs in the Azure portal
++
+## Clean up your resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph by using Data Explorer, and run a Node.js app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic by using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query by using Gremlin](tutorial-query-graph.md)
cosmos-db Create Graph Php https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-php.md
+
+ Title: 'Quickstart: Gremlin API with PHP - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB Gremlin API to create a console application with the Azure portal and PHP
++
+ms.devlang: php
+ Last updated : 01/05/2019++++
+# Quickstart: Create a graph database in Azure Cosmos DB using PHP and the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+This quickstart shows how to use PHP and the Azure Cosmos DB [Gremlin API](graph-introduction.md) to build a console app by cloning an example from GitHub. This quickstart also walks you through the creation of an Azure Cosmos DB account by using the web-based Azure portal.
+
+Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can quickly create and query document, table, key-value, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
+
+## Prerequisites
++
+In addition:
+* [PHP](https://php.net/) 5.6 or newer
+* [Composer](https://getcomposer.org/download/)
+
+## Create a database account
+
+Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ md "C:\git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
+
+ ```bash
+ cd "C:\git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-php-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the connect.php file in the C:\git-samples\azure-cosmos-db-graph-php-getting-started\ folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
+
+* The Gremlin `connection` is initialized at the beginning of the `connect.php` file using the `$db` object.
+
+ ```php
+ $db = new Connection([
+    'host' => '<your_server_address>.gremlin.cosmosdb.azure.com',
+ 'username' => '/dbs/<db>/colls/<coll>',
+ 'password' => 'your_primary_key'
+ ,'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+* A series of Gremlin steps are executed using the `$db->send($query);` method.
+
+ ```php
+ $query = "g.V().drop()";
+ ...
+ $result = $db->send($query);
+ $errors = array_filter($result);
+ ```
+
+## Update your connection information
+
+Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+
+1. In the [Azure portal](https://portal.azure.com/), click **Keys**.
+
+ Copy the first portion of the URI value.
+
+ :::image type="content" source="./media/create-graph-php/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+
+2. Open the `connect.php` file and in line 8 paste the URI value over `your_server_address`.
+
+ The connection object initialization should now look similar to the following code:
+
+ ```php
+ $db = new Connection([
+ 'host' => 'testgraphacct.gremlin.cosmosdb.azure.com',
+ 'username' => '/dbs/<db>/colls/<coll>',
+ 'password' => 'your_primary_key'
+ ,'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+3. Change the `username` parameter in the Connection object to use your database and graph name. If you used the recommended values of `sample-database` and `sample-graph`, it should look like the following code:
+
+ `'username' => '/dbs/sample-database/colls/sample-graph'`
+
+    The entire Connection object should now look like the following code snippet:
+
+ ```php
+ $db = new Connection([
+ 'host' => 'testgraphacct.gremlin.cosmosdb.azure.com',
+ 'username' => '/dbs/sample-database/colls/sample-graph',
+ 'password' => 'your_primary_key',
+ 'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+4. In the Azure portal, use the copy button to copy the PRIMARY KEY and paste it over `your_primary_key` in the password parameter.
+
+ The Connection object initialization should now look like the following code:
+
+ ```php
+ $db = new Connection([
+    'host' => 'testgraphacct.gremlin.cosmosdb.azure.com',
+ 'username' => '/dbs/sample-database/colls/sample-graph',
+ 'password' => '2Ggkr662ifxz2Mg==',
+ 'port' => '443'
+
+ // Required parameter
+ ,'ssl' => TRUE
+ ]);
+ ```
+
+5. Save the `connect.php` file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the azure-cosmos-db-graph-php-getting-started folder.
+
+    ```bash
+ cd "C:\git-samples\azure-cosmos-db-graph-php-getting-started"
+ ```
+
+2. In the git terminal window, use the following command to install the required PHP dependencies.
+
+ ```
+ composer install
+ ```
+
+3. In the git terminal window, use the following command to start the PHP application.
+
+ ```
+ php connect.php
+ ```
+
+ The terminal window displays the vertices being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+You can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
+
+1. Click **Data Explorer**, expand **sample-graph**, click **Graph**, and then click **Apply Filter**.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+
+2. In the **Results** list, notice the new users added to the graph. Select **ben** and notice that they're connected to robin. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+
+3. Let's add a few new users. Click the **New Vertex** button to add data to your graph.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+
+4. Enter a label of *person*.
+
+5. Click **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the **id** key is required.
+
+ Key | Value | Notes
+ -|-|-
+ **id** | ashley | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ **gender** | female |
+ **tech** | java |
+
+ > [!NOTE]
+ > In this quickstart you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+6. Click **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+
+7. Click **New Vertex** again and add an additional new user.
+
+8. Enter a label of *person*.
+
+9. Click **Add property** to add each of the following properties:
+
+ Key | Value | Notes
+ -|-|-
+ **id** | rakesh | The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ **gender** | male |
+ **school** | MIT |
+
+10. Click **OK**.
+
+11. Click the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and click **Apply Filter** to display all the results again.
+
+12. Now you can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then click the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
+
+13. In the **Target** box type *rakesh*, and in the **Edge label** box type *knows*, and then click the check.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection between ashley and rakesh in Data Explorer":::
+
+14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/create-graph-php/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer":::
+
+    That completes the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you've learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run an app. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query-graph.md)
+
cosmos-db Create Graph Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/create-graph-python.md
+
+ Title: 'Quickstart: Gremlin API with Python - Azure Cosmos DB'
+description: This quickstart shows how to use the Azure Cosmos DB Gremlin API to create a console application with the Azure portal and Python
++
+ms.devlang: python
+ Last updated : 03/29/2021+++++
+# Quickstart: Create a graph database in Azure Cosmos DB using Python and the Azure portal
+
+> [!div class="op_single_selector"]
+> * [Gremlin console](create-graph-console.md)
+> * [.NET](create-graph-dotnet.md)
+> * [Java](create-graph-java.md)
+> * [Node.js](create-graph-nodejs.md)
+> * [Python](create-graph-python.md)
+> * [PHP](create-graph-php.md)
+>
+
+In this quickstart, you create and manage an Azure Cosmos DB Gremlin (graph) API account from the Azure portal, and add data by using a Python app cloned from GitHub. Azure Cosmos DB is a multi-model database service that lets you quickly create and query document, table, key-value, and graph databases with global distribution and horizontal scale capabilities.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription.
+- [Python 3.6+](https://www.python.org/downloads/) including [pip](https://pip.pypa.io/en/stable/installing/) package installer.
+- [Python Driver for Gremlin](https://github.com/apache/tinkerpop/tree/master/gremlin-python).
+- [Git](https://git-scm.com/downloads).
+
+> [!NOTE]
+> This quickstart requires a graph database account created after December 20th, 2017. Existing accounts will support Python once they're migrated to general availability.
+
+## Create a database account
+
+Before you can create a graph database, you need to create a Gremlin (Graph) database account with Azure Cosmos DB.
++
+## Add a graph
++
+## Clone the sample application
+
+Now let's switch to working with code. Let's clone a Gremlin API app from GitHub, set the connection string, and run it. You'll see how easy it is to work with data programmatically.
+
+1. Open a command prompt, create a new folder named git-samples, then close the command prompt.
+
+ ```bash
+ mkdir "./git-samples"
+ ```
+
+2. Open a git terminal window, such as git bash, and use the `cd` command to change to a folder to install the sample app.
+
+ ```bash
+ cd "./git-samples"
+ ```
+
+3. Run the following command to clone the sample repository. This command creates a copy of the sample app on your computer.
+
+ ```bash
+ git clone https://github.com/Azure-Samples/azure-cosmos-db-graph-python-getting-started.git
+ ```
+
+## Review the code
+
+This step is optional. If you're interested in learning how the database resources are created in the code, you can review the following snippets. The snippets are all taken from the *connect.py* file in the *C:\git-samples\azure-cosmos-db-graph-python-getting-started\\* folder. Otherwise, you can skip ahead to [Update your connection string](#update-your-connection-information).
+
+* The Gremlin `client` is initialized in line 155 in *connect.py*. Make sure to replace `<YOUR_DATABASE>` and `<YOUR_CONTAINER_OR_GRAPH>` with the values of your account's database name and graph name:
+
+ ```python
+ ...
+ client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_CONTAINER_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ...
+ ```
+
+* A series of Gremlin steps are declared at the beginning of the *connect.py* file. They are then executed using the `client.submitAsync()` method:
+
+ ```python
+ client.submitAsync(_gremlin_cleanup_graph)
+ ```
+
+## Update your connection information
+
+Now go back to the Azure portal to get your connection information and copy it into the app. These settings enable your app to communicate with your hosted database.
+
+1. In your Azure Cosmos DB account in the [Azure portal](https://portal.azure.com/), select **Keys**.
+
+ Copy the first portion of the URI value.
+
+ :::image type="content" source="./media/create-graph-python/keys.png" alt-text="View and copy an access key in the Azure portal, Keys page":::
+
+2. Open the *connect.py* file and in line 155 paste the URI value over `<YOUR_ENDPOINT>` in here:
+
+ ```python
+ client = client.Client('wss://<YOUR_ENDPOINT>.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ```
+
+ The URI portion of the client object should now look similar to this code:
+
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/<YOUR_DATABASE>/colls/<YOUR_COLLECTION_OR_GRAPH>",
+ password="<YOUR_PASSWORD>")
+ ```
+
+3. Change the second parameter of the `client` object to replace the `<YOUR_DATABASE>` and `<YOUR_COLLECTION_OR_GRAPH>` strings. If you used the suggested values, the parameter should look like this code:
+
+ `username="/dbs/sample-database/colls/sample-graph"`
+
+ The entire `client` object should now look like this code:
+
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/sample-database/colls/sample-graph",
+ password="<YOUR_PASSWORD>")
+ ```
+
+4. On the **Keys** page, use the copy button to copy the PRIMARY KEY and paste it over `<YOUR_PASSWORD>` in the `password=<YOUR_PASSWORD>` parameter.
+
+ The entire `client` object definition should now look like this code:
+ ```python
+ client = client.Client('wss://test.gremlin.cosmosdb.azure.com:443/','g',
+ username="/dbs/sample-database/colls/sample-graph",
+ password="asdb13Fadsf14FASc22Ggkr662ifxz2Mg==")
+ ```
+
+5. Save the *connect.py* file.
+
+## Run the console app
+
+1. In the git terminal window, `cd` to the azure-cosmos-db-graph-python-getting-started folder.
+
+    ```bash
+    cd "./git-samples/azure-cosmos-db-graph-python-getting-started"
+    ```
+
+2. In the git terminal window, use the following command to install the required Python packages.
+
+ ```
+ pip install -r requirements.txt
+ ```
+
+3. In the git terminal window, use the following command to start the Python application.
+
+ ```
+ python connect.py
+ ```
+
+ The terminal window displays the vertices and edges being added to the graph.
+
+ If you experience timeout errors, check that you updated the connection information correctly in [Update your connection information](#update-your-connection-information), and also try running the last command again.
+
+ Once the program stops, press Enter, then switch back to the Azure portal in your internet browser.
+
+<a id="add-sample-data"></a>
+## Review and add sample data
+
+After the vertices and edges are inserted, you can now go back to Data Explorer and see the vertices added to the graph, and add additional data points.
+
+1. In your Azure Cosmos DB account in the Azure portal, select **Data Explorer**, expand **sample-graph**, select **Graph**, and then select **Apply Filter**.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-expanded.png" alt-text="Screenshot shows Graph selected from the A P I with the option to Apply Filter.":::
+
+2. In the **Results** list, notice three new users are added to the graph. You can move the vertices around by dragging and dropping, zoom in and out by scrolling the wheel of your mouse, and expand the size of the graph with the double-arrow.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-graph-explorer-new.png" alt-text="New vertices in the graph in Data Explorer in the Azure portal":::
+
+3. Let's add a few new users. Select the **New Vertex** button to add data to your graph.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-new-vertex.png" alt-text="Screenshot shows the New Vertex pane where you can enter values.":::
+
+4. Enter a label of *person*.
+
+5. Select **Add property** to add each of the following properties. Notice that you can create unique properties for each person in your graph. Only the id key is required.
+
+ key|value|Notes
+ -|-|-
+ pk|/pk|
+ id|ashley|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|female|
+ tech | java |
+
+ > [!NOTE]
+    > In this quickstart, you create a non-partitioned collection. However, if you create a partitioned collection by specifying a partition key during the collection creation, then you need to include the partition key as a key in each new vertex.
+
+6. Select **OK**. You may need to expand your screen to see **OK** on the bottom of the screen.
+
+7. Select **New Vertex** again and add an additional new user.
+
+8. Enter a label of *person*.
+
+9. Select **Add property** to add each of the following properties:
+
+ key|value|Notes
+ -|-|-
+ pk|/pk|
+ id|rakesh|The unique identifier for the vertex. If you don't specify an id, one is generated for you.
+ gender|male|
+ school|MIT|
+
+10. Select **OK**.
+
+11. Select the **Apply Filter** button with the default `g.V()` filter to display all the values in the graph. All of the users now show in the **Results** list.
+
+ As you add more data, you can use filters to limit your results. By default, Data Explorer uses `g.V()` to retrieve all vertices in a graph. You can change it to a different [graph query](tutorial-query-graph.md), such as `g.V().count()`, to return a count of all the vertices in the graph in JSON format. If you changed the filter, change the filter back to `g.V()` and select **Apply Filter** to display all the results again.
+
+12. Now we can connect rakesh and ashley. Ensure **ashley** is selected in the **Results** list, then select the edit button next to **Targets** on lower right side. You may need to widen your window to see the **Properties** area.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-edit-target.png" alt-text="Change the target of a vertex in a graph":::
+
+13. In the **Target** box type *rakesh*, and in the **Edge label** box type *knows*, and then select the check.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-data-explorer-set-target.png" alt-text="Add a connection between ashley and rakesh in Data Explorer":::
+
+14. Now select **rakesh** from the results list and see that ashley and rakesh are connected.
+
+ :::image type="content" source="./media/create-graph-python/azure-cosmosdb-graph-explorer.png" alt-text="Two vertices connected in Data Explorer":::
+
+That completes the resource creation part of this quickstart. You can continue to add vertices to your graph, modify the existing vertices, or change the queries. Now let's review the metrics Azure Cosmos DB provides, and then clean up the resources.
+
+## Review SLAs in the Azure portal
++
+## Clean up resources
++
+## Next steps
+
+In this quickstart, you learned how to create an Azure Cosmos DB account, create a graph using the Data Explorer, and run a Python app to add data to the graph. You can now build more complex queries and implement powerful graph traversal logic using Gremlin.
+
+> [!div class="nextstepaction"]
+> [Query using Gremlin](tutorial-query-graph.md)
+
cosmos-db Diagnostic Queries Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/diagnostic-queries-gremlin.md
+
+ Title: Troubleshoot issues with advanced diagnostics queries for Gremlin API
+
+description: Learn how to query diagnostics logs for troubleshooting data stored in Azure Cosmos DB for Gremlin API
++++ Last updated : 06/12/2021+++
+# Troubleshoot issues with advanced diagnostics queries for Gremlin API
++
+> [!div class="op_single_selector"]
+> * [SQL (Core) API](../cosmos-db-advanced-queries.md)
+> * [MongoDB API](../mongodb/diagnostic-queries-mongodb.md)
+> * [Cassandra API](../cassandr)
+> * [Gremlin API](diagnostic-queries-gremlin.md)
+>
+
+In this article, we'll cover how to write more advanced queries to help troubleshoot issues with your Azure Cosmos DB account using diagnostics logs sent to **AzureDiagnostics (legacy)** and **Resource-specific (preview)** tables.
+
+For Azure Diagnostics tables, all data is written into a single table, and users need to specify which category they'd like to query. If you'd like to view the full-text query of your request, [follow this article](../cosmosdb-monitor-resource-logs.md#full-text-query) to learn how to enable this feature.
+
+For [resource-specific tables](../cosmosdb-monitor-resource-logs.md#create-setting-portal), data is written into individual tables for each category of the resource. We recommend this mode since it makes it much easier to work with the data, provides better discoverability of the schemas, and improves performance across both ingestion latency and query times.
+
+## Common queries
+
+- Top 10 RU-consuming requests or queries in a given time frame
+
+# [Resource-specific](#tab/resource-specific)
+
+ ```Kusto
+ let topRequestsByRUcharge = CDBDataPlaneRequests
+ | where TimeGenerated > ago(24h)
+ | project RequestCharge , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner topRequestsByRUcharge on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , RequestCharge, TimeGenerated
+ | order by RequestCharge desc
+ | take 10
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+
+ ```Kusto
+ let topRequestsByRUcharge = AzureDiagnostics
+ | where Category == "DataPlaneRequests" and TimeGenerated > ago(1h)
+ | project requestCharge_s , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner topRequestsByRUcharge on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , requestCharge_s, TimeGenerated
+ | order by requestCharge_s desc
+ | take 10
+ ```
++
+- Requests throttled (statusCode = 429) in a given time window
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let throttledRequests = CDBDataPlaneRequests
+ | where StatusCode == "429"
+ | project OperationName , TimeGenerated, ActivityId;
+ CDBGremlinRequests
+ | project PIICommandText, ActivityId, DatabaseName , CollectionName
+ | join kind=inner throttledRequests on ActivityId
+ | project DatabaseName , CollectionName , PIICommandText , OperationName, TimeGenerated
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let throttledRequests = AzureDiagnostics
+ | where Category == "DataPlaneRequests"
+ | where statusCode_s == "429"
+ | project OperationName , TimeGenerated, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+ | project piiCommandText_s, activityId_g, databasename_s , collectionname_s
+ | join kind=inner throttledRequests on activityId_g
+ | project databasename_s , collectionname_s , piiCommandText_s , OperationName, TimeGenerated
+ ```
++
+- Queries with large response lengths (payload size of the server response)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ let operationsbyUserAgent = CDBDataPlaneRequests
+ | project OperationName, DurationMs, RequestCharge, ResponseLength, ActivityId;
+ CDBGremlinRequests
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on ActivityId
+ | summarize max(ResponseLength) by PIICommandText
+ | order by max_ResponseLength desc
+ ```
+
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ let operationsbyUserAgent = AzureDiagnostics
+ | where Category=="DataPlaneRequests"
+ | project OperationName, duration_s, requestCharge_s, responseLength_s, activityId_g;
+ AzureDiagnostics
+ | where Category == "GremlinRequests"
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ | join kind=inner operationsbyUserAgent on activityId_g
+ | summarize max(responseLength_s1) by piiCommandText_s
+ | order by max_responseLength_s1 desc
+ ```
++
+- RU Consumption by physical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by toint(PartitionKeyRangeId)
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by toint(partitionKeyRangeId_s)
+ | render columnchart
+ ```
++
+- RU Consumption by logical partition (across all replicas in the replica set)
+
+# [Resource-specific](#tab/resource-specific)
+ ```Kusto
+ CDBPartitionKeyRUConsumption
+ | where TimeGenerated >= now(-1d)
+ //specify collection and database
+ //| where DatabaseName == "DBNAME" and CollectionName == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(RequestCharge)) by PartitionKey, PartitionKeyRangeId
+ | render columnchart
+ ```
+# [Azure Diagnostics](#tab/azure-diagnostics)
+ ```Kusto
+ AzureDiagnostics
+ | where TimeGenerated >= now(-1d)
+ | where Category == 'PartitionKeyRUConsumption'
+ //specify collection and database
+    //| where databasename_s == "DBNAME" and collectionname_s == "COLLECTIONNAME"
+ // filter by operation type
+ //| where operationType_s == 'Create'
+ | summarize sum(todouble(requestCharge_s)) by partitionKey_s, partitionKeyRangeId_s
+ | render columnchart
+ ```
++
+## Next steps
+* For more information on how to create diagnostic settings for Cosmos DB, see the [Creating diagnostic settings](../cosmosdb-monitor-resource-logs.md) article.
+
+* For detailed information about how to create a diagnostic setting by using the Azure portal, CLI, or PowerShell, see the [Create diagnostic settings to collect platform logs and metrics in Azure](../../azure-monitor/essentials/diagnostic-settings.md) article.
cosmos-db Find Request Unit Charge Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/find-request-unit-charge-gremlin.md
+
+ Title: Find request unit (RU) charge for Gremlin API queries in Azure Cosmos DB
+description: Learn how to find the request unit (RU) charge for Gremlin queries executed against an Azure Cosmos container. You can use the Azure portal, .NET, Java drivers to find the RU charge.
+++ Last updated : 10/14/2020++++
+# Find the request unit charge for operations executed in Azure Cosmos DB Gremlin API
+
+Azure Cosmos DB supports many APIs, such as SQL, MongoDB, Cassandra, Gremlin, and Table. Each API has its own set of database operations. These operations range from simple point reads and writes to complex queries. Each database operation consumes system resources based on the complexity of the operation.
+
+The cost of all database operations is normalized by Azure Cosmos DB and is expressed by Request Units (or RUs, for short). Request charge is the request units consumed by all your database operations. You can think of RUs as a performance currency abstracting the system resources such as CPU, IOPS, and memory that are required to perform the database operations supported by Azure Cosmos DB. No matter which API you use to interact with your Azure Cosmos container, and whether the database operation is a write, point read, or query, costs are always measured in RUs. To learn more, see the [request units and their considerations](../request-units.md) article.
+
+This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB Gremlin API. If you're using a different API, see the [API for MongoDB](../mongodb/find-request-unit-charge-mongodb.md) or [Cassandra API](../cassandr) articles to find the request unit charge.
+
+Headers returned by the Gremlin API are mapped to custom status attributes, which currently are surfaced by the Gremlin .NET and Java SDK. The request charge is available under the `x-ms-request-charge` key. When you use the Gremlin API, you have multiple options for finding the RU consumption for an operation against an Azure Cosmos container.
+
+## Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. [Create a new Azure Cosmos account](create-graph-console.md#create-a-database-account) and feed it with data, or select an existing account that already contains data.
+
+1. Go to the **Data Explorer** pane, and then select the container you want to work on.
+
+1. Enter a valid query, and then select **Execute Gremlin Query**.
+
+1. Select **Query Stats** to display the actual request charge for the request you executed.
++
+## Use the .NET SDK driver
+
+When you use the [Gremlin.NET SDK](https://www.nuget.org/packages/Gremlin.Net/), status attributes are available under the `StatusAttributes` property of the `ResultSet<>` object:
+
+```csharp
+ResultSet<dynamic> results = client.SubmitAsync<dynamic>("g.V().count()").Result;
+double requestCharge = (double)results.StatusAttributes["x-ms-request-charge"];
+```
+
+For more information, see [Quickstart: Build a .NET Framework or Core application by using an Azure Cosmos DB Gremlin API account](create-graph-dotnet.md).
+
+## Use the Java SDK driver
+
+When you use the [Gremlin Java SDK](https://mvnrepository.com/artifact/org.apache.tinkerpop/gremlin-driver), you can retrieve status attributes by calling the `statusAttributes()` method on the `ResultSet` object:
+
+```java
+ResultSet results = client.submit("g.V().count()");
+Double requestCharge = (Double)results.statusAttributes().get().get("x-ms-request-charge");
+```
+
+For more information, see [Quickstart: Create a graph database in Azure Cosmos DB by using the Java SDK](create-graph-java.md).
+
+## Next steps
+
+To learn about optimizing your RU consumption, see these articles:
+
+* [Request units and throughput in Azure Cosmos DB](../request-units.md)
+* [Optimize provisioned throughput cost in Azure Cosmos DB](../optimize-cost-throughput.md)
+* [Optimize query cost in Azure Cosmos DB](../optimize-cost-reads-writes.md)
cosmos-db Graph Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-introduction.md
+
+ Title: 'Introduction to Azure Cosmos DB Gremlin API'
+description: Learn how you can use Azure Cosmos DB to store, query, and traverse massive graphs with low latency by using the Gremlin graph query language of Apache TinkerPop.
+++ Last updated : 07/26/2021+++
+# Introduction to Gremlin API in Azure Cosmos DB
+
+[Azure Cosmos DB](../introduction.md) is Microsoft's globally distributed, multi-model database service for mission-critical applications. It supports document, key-value, graph, and column-family data models, and it provides a graph database service via the Gremlin API on a fully managed database service designed for any scale.
++
+This article provides an overview of the Azure Cosmos DB Gremlin API and explains how to use it to store massive graphs with billions of vertices and edges. You can query the graphs with millisecond latency and evolve the graph structure easily. Azure Cosmos DB's Gremlin API is built based on [Apache TinkerPop](https://tinkerpop.apache.org), a graph computing framework. The Gremlin API in Azure Cosmos DB uses the Gremlin query language.
+
+Azure Cosmos DB's Gremlin API combines the power of graph database algorithms with highly scalable, managed infrastructure to provide a unique, flexible solution to common data problems where purely relational approaches lack the required flexibility.
+
+> [!NOTE]
+> The Azure Cosmos DB graph engine closely follows the Apache TinkerPop specification. However, there are some differences in the implementation details that are specific to Azure Cosmos DB. Some features supported by Apache TinkerPop are not available in Azure Cosmos DB. To learn more about the unsupported features, see the [compatibility with Apache TinkerPop](gremlin-support.md) article.
+
+## Features of Azure Cosmos DB's Gremlin API
+
+Azure Cosmos DB is a fully managed graph database that offers global distribution, elastic scaling of storage and throughput, automatic indexing and query, tunable consistency levels, and support for the TinkerPop standard.
+
+> [!NOTE]
+> The [serverless capacity mode](../serverless.md) is now available on Azure Cosmos DB's Gremlin API.
+
+The following are the differentiated features that Azure Cosmos DB Gremlin API offers:
+
+* **Elastically scalable throughput and storage**
+
+ Graphs in the real world need to scale beyond the capacity of a single server. Azure Cosmos DB supports horizontally scalable graph databases that can have a virtually unlimited size in terms of storage and provisioned throughput. As the graph database scale grows, the data will be automatically distributed using [graph partitioning](./graph-partitioning.md).
+
+* **Multi-region replication**
+
+ Azure Cosmos DB can automatically replicate your graph data to any Azure region worldwide. Global replication simplifies the development of applications that require global access to data. In addition to minimizing read and write latency anywhere around the world, Azure Cosmos DB provides automatic regional failover mechanism that can ensure the continuity of your application in the rare case of a service interruption in a region.
+
+* **Fast queries and traversals with the most widely adopted graph query standard**
+
+ Store heterogeneous vertices and edges and query them through a familiar Gremlin syntax. Gremlin is an imperative, functional query language that provides a rich interface to implement common graph algorithms.
+
+ Azure Cosmos DB enables rich real-time queries and traversals without the need to specify schema hints, secondary indexes, or views. Learn more in [Query graphs by using Gremlin](gremlin-support.md).
+
+* **Fully managed graph database**
+
+    Azure Cosmos DB eliminates the need to manage database and machine resources. Most existing graph database platforms are bound to the limitations of their infrastructure and often require a high degree of maintenance to ensure their operation.
+
+ As a fully managed service, Cosmos DB removes the need to manage virtual machines, update runtime software, manage sharding or replication, or deal with complex data-tier upgrades. Every graph is automatically backed up and protected against regional failures. This allows developers to focus on delivering application value instead of operating and managing their graph databases.
+
+* **Automatic indexing**
+
+    By default, Azure Cosmos DB automatically indexes all the properties within nodes (also called vertices) and edges in the graph, and doesn't expect or require any schema or the creation of secondary indexes. Learn more about [indexing in Azure Cosmos DB](../index-overview.md).
+
+* **Compatibility with Apache TinkerPop**
+
+    Azure Cosmos DB supports the [open-source Apache TinkerPop standard](https://tinkerpop.apache.org/). The TinkerPop standard has a rich ecosystem of applications and libraries that can be easily integrated with Azure Cosmos DB's Gremlin API.
+
+* **Tunable consistency levels**
+
+ Azure Cosmos DB provides five well-defined consistency levels to achieve the right tradeoff between consistency and performance for your application. For queries and read operations, Azure Cosmos DB offers five distinct consistency levels: strong, bounded-staleness, session, consistent prefix, and eventual. These granular, well-defined consistency levels allow you to make sound tradeoffs among consistency, availability, and latency. Learn more in [Tunable data consistency levels in Azure Cosmos DB](../consistency-levels.md).
+
+## Scenarios that use Gremlin API
+
+Here are some scenarios where graph support of Azure Cosmos DB can be useful:
+
+* **Social networks/Customer 365**
+
+ By combining data about your customers and their interactions with other people, you can develop personalized experiences, predict customer behavior, or connect people with others with similar interests. Azure Cosmos DB can be used to manage social networks and track customer preferences and data.
+
+* **Recommendation engines**
+
+    This scenario is commonly used in the retail industry. By combining information about products, users, and user interactions, like purchasing, browsing, or rating an item, you can build customized recommendations. The low latency, elastic scale, and native graph support of Azure Cosmos DB are ideal for these scenarios.
+
+* **Geospatial**
+
+ Many applications in telecommunications, logistics, and travel planning need to find a location of interest within an area or locate the shortest/optimal route between two locations. Azure Cosmos DB is a natural fit for these problems.
+
+* **Internet of Things**
+
+ With the network and connections between IoT devices modeled as a graph, you can build a better understanding of the state of your devices and assets. You also can learn how changes in one part of the network can potentially affect another part.
+
+## Introduction to graph databases
+
+Data as it appears in the real world is naturally connected. Traditional data modeling focuses on defining entities separately and computing their relationships at runtime. While this model has its advantages, highly connected data can be challenging to manage under its constraints.
+
+A graph database approach relies on persisting relationships in the storage layer instead, which leads to highly efficient graph retrieval operations. Azure Cosmos DB's Gremlin API supports the [property graph model](https://tinkerpop.apache.org/docs/current/reference/#intro).
+
+### Property graph objects
+
+A property [graph](http://mathworld.wolfram.com/Graph.html) is a structure that's composed of [vertices](http://mathworld.wolfram.com/GraphVertex.html) and [edges](http://mathworld.wolfram.com/GraphEdge.html). Both objects can have an arbitrary number of key-value pairs as properties.
+
+* **Vertices/nodes** - Vertices denote discrete entities, such as a person, a place, or an event.
+
+* **Edges/relationships** - Edges denote relationships between vertices. For example, a person might know another person, be involved in an event, and recently been at a location.
+
+* **Properties** - Properties express information about the vertices and edges. There can be any number of properties in either vertices or edges, and they can be used to describe and filter the objects in a query. For example, a vertex might have a name and an age, and an edge might have a time stamp and/or a weight.
+
+* **Label** - A label is a name or the identifier of a vertex or an edge. Labels can group multiple vertices or edges such that all the vertices/edges in a group have a certain label. For example, a graph can have multiple vertices of label type "person".
+
+Graph databases are often included within the NoSQL or non-relational database category, since there is no dependency on a schema or constrained data model. This lack of schema allows for modeling and storing connected structures naturally and efficiently.
+
+### Graph database by example
+
+Let's use a sample graph to understand how queries can be expressed in Gremlin. The following figure shows a business application that manages data about users, interests, and devices in the form of a graph.
++
+This graph has the following *vertex* types (also called *labels* in Gremlin):
+
+* **People**: The graph has three people, Robin, Thomas, and Ben
+* **Interests**: Their interests, in this example, the game of Football
+* **Devices**: The devices that people use
+* **Operating Systems**: The operating systems that the devices run on
+* **Place**: The places from which the devices are accessed
+
+We represent the relationships between these entities via the following *edge* types:
+
+* **Knows**: For example, "Thomas knows Robin"
+* **Interested**: To represent the interests of the people in our graph, for example, "Ben is interested in Football"
+* **RunsOS**: Laptop runs the Windows OS
+* **Uses**: To represent which device a person uses. For example, Robin uses a Motorola phone with serial number 77
+* **Located**: To represent the location from which the devices are accessed
+
+The Gremlin Console is an interactive terminal offered by Apache TinkerPop that you can use to interact with graph data. To learn more, see the quickstart on [how to use the Gremlin console](create-graph-console.md). You can also perform these operations using Gremlin drivers in the platform of your choice (Java, Node.js, Python, or .NET). The following examples show how to run queries against this graph data using the Gremlin Console.
+
+First let's look at CRUD. The following Gremlin statement inserts the "Thomas" vertex into the graph:
+
+```java
+:> g.addV('person').property('id', 'thomas.1').property('firstName', 'Thomas').property('lastName', 'Andersen').property('age', 44)
+```
+
+Next, the following Gremlin statement inserts a "knows" edge between Thomas and Robin.
+
+```java
+:> g.V('thomas.1').addE('knows').to(g.V('robin.1'))
+```
+
+The following query returns the "person" vertices in descending order of their first names:
+
+```java
+:> g.V().hasLabel('person').order().by('firstName', decr)
+```
+
+Where graphs shine is when you need to answer questions like "What operating systems do friends of Thomas use?". You can run this Gremlin traversal to get that information from the graph:
+
+```java
+:> g.V('thomas.1').out('knows').out('uses').out('runsos').group().by('name').by(count())
+```
+
+## Next steps
+
+To learn more about graph support in Azure Cosmos DB, see:
+
+* Get started with the [Azure Cosmos DB graph tutorial](create-graph-dotnet.md).
+* Learn about how to [query graphs in Azure Cosmos DB by using Gremlin](gremlin-support.md).
cosmos-db Graph Modeling Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-modeling-tools.md
+
+ Title: Third-party data modeling tools for Azure Cosmos DB graph data
+description: This article describes various tools to design the Graph data model.
++++ Last updated : 05/25/2021+++
+# Third-party data modeling tools for Azure Cosmos DB graph data
++
+Designing a graph data model is important, and maintaining it over time is just as important. The following third-party visual design tools can help you design and maintain your graph data model.
+
+> [!IMPORTANT]
+> The solutions mentioned in this article are for informational purposes only; ownership lies with the individual solution owners. We recommend that you evaluate them thoroughly and select the one that best suits your needs.
+
+## Hackolade
+
+Hackolade is a data modeling and schema design tool for NoSQL databases. Its data modeling studio helps you manage schemas for data at rest and data in motion.
+
+### How it works
+The tool models vertices, edges, and their respective properties. It supports several use cases, including:
+- Start from a blank page and graphically work through different options to build your Cosmos DB Gremlin model. Then forward-engineer the model to your Azure instance to evaluate the result and continue evolving it, all without writing a single line of code.
+- Reverse-engineer an existing graph on Azure to clearly understand its structure so you can query your graph more effectively. Then enrich the data model with descriptions, metadata, and constraints to produce documentation in HTML, Markdown, or PDF format, and feed it to corporate data governance or dictionary systems.
+- Migrate from relational database to NoSQL through the de-normalization of data structures.
+- Integrate with a CI/CD pipeline via a Command-Line Interface
+- Collaboration and versioning using Git
+- And much more…
+
+### Sample
+
+The animation in Figure-2 demonstrates reverse engineering: entities are extracted from an RDBMS, Hackolade discovers the relationships from foreign keys, and the model can then be modified.
+
+Sample DDL for a SQL Server source is available [here](https://github.com/Azure-Samples/northwind-ddl-sample/nw.sql).
++
+**Figure-1:** Graph diagram (the extracted graph data model)
+
+After the data model is modified, the tool can generate the Gremlin script, which may include a custom Cosmos DB index script to ensure optimal indexes are created. Refer to Figure-2 for the full flow.
+
+The following image demonstrates reverse engineering from RDBMS & Hackolade in action:
+
+**Figure-2:** Hackolade in action (demonstrating SQL-to-Gremlin data model conversion)
+
+### Useful links
+- [Download a 14-day free trial](https://hackolade.com/download.html)
+- [Schedule a demo](https://c.x.ai/pdesmarets)
+- [Get more data models](https://hackolade.com/samplemodels.html#cosmosdb).
+- [Documentation of Hackolade](https://hackolade.com/help/CosmosDBGremlin.html)
+
+## Next steps
+- [Visualizing the data](/graph-visualization)
cosmos-db Graph Modeling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-modeling.md
+
+ Title: 'Graph data modeling for Azure Cosmos DB Gremlin API'
+description: Learn how to model a graph database by using Azure Cosmos DB Gremlin API. This article describes when to use a graph database and best practices to model entities and relationships.
+++ Last updated : 12/02/2019++++
+# Graph data modeling for Azure Cosmos DB Gremlin API
+
+This document provides graph data modeling recommendations. This step is vital to ensure the scalability and performance of a graph database system as the data evolves. An efficient data model is especially important with large-scale graphs.
+
+## Requirements
+
+The process outlined in this guide is based on the following assumptions:
+ * The **entities** in the problem-space are identified. These entities are meant to be consumed _atomically_ for each request. In other words, the database system isn't designed to retrieve a single entity's data in multiple query requests.
+ * There is an understanding of **read and write requirements** for the database system. These requirements will guide the optimizations needed for the graph data model.
+ * The principles of the [Apache Tinkerpop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) are well understood.
+
+## When do I need a graph database?
+
+A graph database solution can be optimally applied if the entities and relationships in a data domain have any of the following characteristics:
+
+* The entities are **highly connected** through descriptive relationships. The benefit in this scenario is the fact that the relationships are persisted in storage.
+* There are **cyclic relationships** or **self-referenced entities**. This pattern is often a challenge when using relational or document databases.
+* There are **dynamically evolving relationships** between entities. This pattern is especially applicable to hierarchical or tree-structured data with many levels.
+* There are **many-to-many relationships** between entities.
+* There are **write and read requirements on both entities and relationships**.
+
+If the above criteria are satisfied, it's likely that a graph database approach will provide advantages for **query complexity**, **data model scalability**, and **query performance**.
+
+The next step is to determine whether the graph is going to be used for analytic or transactional purposes. If the graph is intended for heavy computation and data processing workloads, it's worth exploring the [Cosmos DB Spark connector](../create-sql-api-spark.md) and the [GraphX library](https://spark.apache.org/graphx/).
+
+## How to use graph objects
+
+The [Apache TinkerPop property graph standard](https://tinkerpop.apache.org/docs/current/reference/#graph-computing) defines two types of objects: **vertices** and **edges**.
+
+The following are the best practices for the properties in the graph objects:
+
+| Object | Property | Type | Notes |
+| | | | |
+| Vertex | ID | String | Uniquely enforced per partition. If a value isn't supplied upon insertion, an auto-generated GUID will be stored. |
+| Vertex | label | String | This property is used to define the type of entity that the vertex represents. If a value isn't supplied, a default value "vertex" will be used. |
+| Vertex | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each vertex. |
+| Vertex | partition key | String, Boolean, Numeric | This property defines where the vertex and its outgoing edges will be stored. Read more about [graph partitioning](graph-partitioning.md). |
+| Edge | ID | String | Uniquely enforced per partition. Auto-generated by default. Edges usually don't have the need to be uniquely retrieved by an ID. |
+| Edge | label | String | This property is used to define the type of relationship that two vertices have. |
+| Edge | properties | String, Boolean, Numeric | A list of separate properties stored as key-value pairs in each edge. |
+
+> [!NOTE]
+> Edges don't require a partition key value, because their value is automatically assigned based on the source vertex. Learn more in the [graph partitioning](graph-partitioning.md) article.
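+
+As an illustrative sketch only (the property names and the `pk` partition key property are hypothetical, not mandated by the service), a vertex that follows these conventions could be created like this:
+
+```java
+// Supplies an explicit id, a label, a partition key value, and one embedded property.
+g.addV('person').property('id', 'ashley.1').property('pk', 'ashley.1').property('firstName', 'Ashley')
+```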
+
+## Entity and relationship modeling guidelines
+
+The following are a set of guidelines to approach data modeling for an Azure Cosmos DB Gremlin API graph database. These guidelines assume that there's an existing definition of a data domain and queries for it.
+
+> [!NOTE]
+> The steps outlined below are presented as recommendations. The final model should be evaluated and tested before its consideration as production-ready. Additionally, the recommendations below are specific to Azure Cosmos DB's Gremlin API implementation.
+
+### Modeling vertices and properties
+
+The first step for a graph data model is to map every identified entity to a **vertex object**. A one-to-one mapping of all entities to vertices is a good initial step and is subject to change.
+
+One common pitfall is to map properties of a single entity as separate vertices. Consider the example below, where the same entity is represented in two different ways:
+
+* **Vertex-based properties**: In this approach, the entity uses three separate vertices and two edges to describe its properties. While this approach might reduce redundancy, it increases model complexity. An increase in model complexity can result in added latency, query complexity, and computation cost. This model can also present challenges in partitioning.
++
+* **Property-embedded vertices**: This approach takes advantage of the key-value pair list to represent all the properties of the entity inside a vertex. This approach provides reduced model complexity, which will lead to simpler queries and more cost-efficient traversals.
++
+> [!NOTE]
+> The above examples show a simplified graph model, intended only to compare the two ways of dividing entity properties.
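+
+As a sketch of that comparison, the same hypothetical entity could be written in the two ways shown below (the labels and property names are illustrative only):
+
+```java
+// Vertex-based properties: three vertices and two edges describe a single entity.
+g.addV('device').property('id', 'device-1').property('partitionKey', 'device-1')
+g.addV('serialNumber').property('id', 'sn-1').property('partitionKey', 'device-1').property('value', 'SN-123')
+g.addV('firmware').property('id', 'fw-1').property('partitionKey', 'device-1').property('value', '1.4.2')
+g.V('device-1').addE('hasSerialNumber').to(g.V('sn-1'))
+g.V('device-1').addE('hasFirmware').to(g.V('fw-1'))
+
+// Property-embedded vertex: the same entity expressed as one vertex with key-value properties.
+g.addV('device').property('id', 'device-2').property('partitionKey', 'device-2').property('serialNumber', 'SN-124').property('firmware', '1.4.2')
+```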
+
+The **property-embedded vertices** pattern generally provides a more performant and scalable approach. The default approach to a new graph data model should gravitate towards this pattern.
+
+However, there are scenarios where referencing a property might provide advantages, for example, when the referenced property is updated frequently. Using a separate vertex to represent a constantly changing property minimizes the amount of write operations that each update requires.
+
+### Relationship modeling with edge directions
+
+After the vertices are modeled, the edges can be added to denote the relationships between them. The first aspect that needs to be evaluated is the **direction of the relationship**.
+
+Edge objects have a default direction that is followed by a traversal when using the `out()` or `outE()` function. Using this natural direction results in an efficient operation, since all vertices are stored with their outgoing edges.
+
+However, traversing in the opposite direction of an edge, using the `in()` function, will always result in a cross-partition query. Learn more about [graph partitioning](graph-partitioning.md). If there's a need to constantly traverse using the `in()` function, it's recommended to add edges in both directions.
+
+You can set the edge direction by using the `.to()` or `.from()` modulators with the `.addE()` Gremlin step, or by using the [bulk executor library for Gremlin API](bulk-executor-graph-dotnet.md).
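+
+The following sketch (with hypothetical vertex IDs and labels) shows how the direction can be set explicitly, and how a reverse edge can be persisted when `in()`-style questions need to be answered frequently:
+
+```java
+// Natural (outgoing) direction: employee -> company.
+g.V('employee-1').addE('worksAt').to(g.V('company-1'))
+
+// The same edge expressed with .from() instead of .to().
+g.V('company-1').addE('worksAt').from(g.V('employee-1'))
+
+// Optionally persist the reverse relationship as a separate edge, so that
+// "which employees work at this company?" is answered with an efficient out() traversal.
+g.V('company-1').addE('employs').to(g.V('employee-1'))
+```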
+
+> [!NOTE]
+> Edge objects have a direction by default.
+
+### Relationship labeling
+
+Using descriptive relationship labels can improve the efficiency of edge resolution operations. This pattern can be applied in the following ways:
+* Use non-generic terms to label a relationship.
+* Associate the label of the source vertex to the label of the target vertex with the relationship name.
++
+The more specific the label that the traverser will use to filter the edges, the better. This decision can have a significant impact on query cost as well. You can evaluate the query cost at any time [using the executionProfile step](graph-execution-profile.md).
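+
+For example (the labels below are hypothetical), a descriptive label lets the traverser resolve only the relevant edges:
+
+```java
+// Generic label: every relationship uses 'related', so the filter doesn't narrow anything down.
+g.V('employee-1').outE('related').inV()
+
+// Descriptive label that associates the source and target entity types:
+// only the matching edges are resolved.
+g.V('employee-1').outE('employee_works_at_company').inV()
+```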
++
+## Next steps
+* Check out the list of supported [Gremlin steps](gremlin-support.md).
+* Learn about [graph database partitioning](graph-partitioning.md) to deal with large-scale graphs.
+* Evaluate your Gremlin queries using the [Execution Profile step](graph-execution-profile.md).
+* Explore third-party [graph data modeling tools](graph-modeling-tools.md).
cosmos-db Graph Partitioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-partitioning.md
+
+ Title: Data partitioning in Azure Cosmos DB Gremlin API
+description: Learn how you can use a partitioned graph in Azure Cosmos DB. This article also describes the requirements and best practices for a partitioned graph.
+++++ Last updated : 06/24/2019++
+# Using a partitioned graph in Azure Cosmos DB
+
+One of the key features of the Gremlin API in Azure Cosmos DB is the ability to handle large-scale graphs through horizontal scaling. The containers can scale independently in terms of storage and throughput. You can create containers in Azure Cosmos DB that automatically scale to store graph data. The data is automatically balanced based on the specified **partition key**.
+
+Partitioning is done internally if the container is expected to store more than 20 GB of data or if you want to allocate more than 10,000 request units per second (RU/s). Data is automatically partitioned based on the partition key you specify. A partition key is required if you create graph containers from the Azure portal or with the 3.x or later versions of the Gremlin drivers. A partition key isn't required with the 2.x or earlier versions of the Gremlin drivers.
+
+The same general principles from the [Azure Cosmos DB partitioning mechanism](../partitioning-overview.md) apply with a few graph-specific optimizations described below.
++
+## Graph partitioning mechanism
+
+The following guidelines describe how the partitioning strategy in Azure Cosmos DB operates:
+
+- **Both vertices and edges are stored as JSON documents**.
+
+- **Vertices require a partition key**. This key will determine in which partition the vertex will be stored through a hashing algorithm. The partition key property name is defined when creating a new container and it has a format: `/partitioning-key-name`.
+
+- **Edges will be stored with their source vertex**. In other words, each vertex's partition key defines where the vertex and its outgoing edges are stored. This optimization avoids cross-partition queries when using the `out()` direction in graph queries. A write sketch that follows this pattern is shown after this list.
+
+- **Edges contain references to the vertices they point to**. All edges are stored with the partition keys and IDs of the vertices that they are pointing to. This computation makes all `out()` direction queries always be a scoped partitioned query, and not a blind cross-partition query.
+
+- **Graph queries need to specify a partition key**. To take full advantage of the horizontal partitioning in Azure Cosmos DB, the partition key should be specified when a single vertex is selected, whenever it's possible. The following are queries for selecting one or multiple vertices in a partitioned graph:
+
+ - `/id` and `/label` are not supported as partition keys for a container in Gremlin API.
++
+ - Selecting a vertex by ID, then **using the `.has()` step to specify the partition key property**:
+
+ ```java
+ g.V('vertex_id').has('partitionKey', 'partitionKey_value')
+ ```
+
+ - Selecting a vertex by **specifying a tuple including partition key value and ID**:
+
+ ```java
+ g.V(['partitionKey_value', 'vertex_id'])
+ ```
+
+ - Specifying an **array of tuples of partition key values and IDs**:
+
+ ```java
+ g.V(['partitionKey_value0', 'vertex_id0'], ['partitionKey_value1', 'vertex_id1'], ...)
+ ```
+
+ - Selecting a set of vertices with their IDs and **specifying a list of partition key values**:
+
+ ```java
+ g.V('vertex_id0', 'vertex_id1', 'vertex_id2', …).has('partitionKey', within('partitionKey_value0', 'partitionKey_value1', 'partitionKey_value2', …))
+ ```
+
+ - Using the **Partition strategy** at the beginning of a query and specifying a partition for the scope of the rest of the Gremlin query:
+
+ ```java
+ g.withStrategies(PartitionStrategy.build().partitionKey('partitionKey').readPartitions('partitionKey_value').create()).V()
+ ```
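+
+The read queries above assume the data was written with partition key values. As a complementary sketch (the labels and property names are hypothetical, and `/partitionKey` is assumed to be the container's partition key path), a vertex and its outgoing edge can be written like this:
+
+```java
+// The vertex is routed to a partition based on the value of its 'partitionKey' property.
+g.addV('person').property('id', 'vertex_id').property('partitionKey', 'partitionKey_value').property('name', 'Thomas')
+
+// The edge doesn't take a partition key value of its own; it's stored with its source vertex.
+g.V(['partitionKey_value', 'vertex_id']).addE('knows').to(g.V('vertex_id2').has('partitionKey', 'partitionKey_value2'))
+```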
+
+## Best practices when using a partitioned graph
+
+Use the following guidelines to ensure performance and scalability when using partitioned graphs with unlimited containers:
+
+- **Always specify the partition key value when querying a vertex**. Getting a vertex from a known partition is the most efficient way to read it. All subsequent adjacency operations will always be scoped to a partition, since edges contain the ID and partition key of their target vertices.
+
+- **Use the outgoing direction when querying edges whenever it's possible**. As mentioned above, edges are stored with their source vertices in the outgoing direction. So the chances of resorting to cross-partition queries are minimized when the data and queries are designed with this pattern in mind. In contrast, the `in()` query will always be an expensive fan-out query.
+
+- **Choose a partition key that will evenly distribute data across partitions**. This decision heavily depends on the data model of the solution. Read more about creating an appropriate partition key in [Partitioning and scale in Azure Cosmos DB](../partitioning-overview.md).
+
+- **Optimize queries to obtain data within the boundaries of a partition**. An optimal partitioning strategy would be aligned to the querying patterns. Queries that obtain data from a single partition provide the best possible performance.
+
+## Next steps
+
+Next you can proceed to read the following articles:
+
+* Learn about [Partition and scale in Azure Cosmos DB](../partitioning-overview.md).
+* Learn about the [Gremlin support in Gremlin API](gremlin-support.md).
+* Learn about [Introduction to Gremlin API](graph-introduction.md).
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/graph-visualization-partners.md
+
+ Title: Visualize Azure Cosmos DB Gremlin API data using partner solutions
+description: Learn how to integrate Azure Cosmos DB graph data with different third-party visualization solutions.
+++++ Last updated : 07/22/2021++
+# Visualize graph data stored in Azure Cosmos DB Gremlin API with data visualization solutions
+
+You can visualize data stored in Azure Cosmos DB Gremlin API by using various data visualization solutions.
+
+> [!IMPORTANT]
+> Solutions mentioned in this article are for informational purposes only; the ownership lies with the individual solution owners. We recommend that you evaluate them thoroughly and select the solution most suitable for you.
+
+## Linkurious Enterprise
+
+[Linkurious Enterprise](https://linkurio.us/product/) uses graph technology and data visualization to turn complex datasets into interactive visual networks. The platform connects to your data sources and enables investigators to seamlessly navigate across billions of entities and relationships. The result is a new ability to detect suspicious relationships without juggling with queries or tables.
+
+The interactive interface of Linkurious Enterprise offers an easy way to investigate complex data. You can search for specific entities, expand connections to uncover hidden relationships, and apply layouts of your choice to untangle complex networks. Linkurious Enterprise is now compatible with Azure Cosmos DB Gremlin API. It's suitable for end-to-end graph visualization scenarios and supports read and write capabilities from the user interface. You can request a [demo of Linkurious with Azure Cosmos DB](https://linkurio.us/contact/).
++
+<b>Figure:</b> Linkurious Enterprise visualization flow
+### Useful links
+
+* [Product details](https://linkurio.us/product/)
+* [Documentation](https://doc.linkurio.us/)
+* [Demo](https://resources.linkurio.us/demo)
+* [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/linkurious.linkurious001?tab=overview)
+
+## Cambridge Intelligence
+
+[Cambridge Intelligence's](https://cambridge-intelligence.com/products/) graph visualization toolkits support Azure Cosmos DB. The following two visualization toolkits are supported by Azure Cosmos DB:
+
+* [KeyLines for JavaScript developers](https://cambridge-intelligence.com/keylines/)
+
+* [Re-Graph for React developers](https://cambridge-intelligence.com/regraph/)
++
+<b>Figure:</b> KeyLines visualization example at various levels of detail.
+
+These toolkits let you design high-performance graph visualization and analysis applications. They harness powerful Web Graphics Library (WebGL) rendering and carefully crafted code to give users a fast and insightful visualization experience. These tools are compatible with any browser, device, server, or database, and come with step-by-step tutorials, fully documented APIs, and interactive demos.
++
+<b>Figure:</b> Re-Graph visualization example at various levels of detail
+### Useful links
+
+* [Try the toolkits](https://cambridge-intelligence.com/try/)
+* [KeyLines technology overview](https://cambridge-intelligence.com/keylines/technology/)
+* [Re-Graph technology overview](https://cambridge-intelligence.com/regraph/technology/)
+* [Graph visualization use cases](https://cambridge-intelligence.com/use-cases/)
+
+## Tom Sawyer
+
+[Tom Sawyer Perspectives](https://www.tomsawyer.com/perspectives/) is a robust platform for building enterprise-grade graph data visualization and analysis applications. It is a low-code graph and data visualization development platform, which includes integrated design, a preview interface, and extensive API libraries. The platform integrates enterprise data sources with powerful graph visualization, layout, and analysis technology to solve big data problems.
+
+Perspectives enables developers to quickly develop production-quality, data-oriented visualization applications. Two graphic modules, the "Designer" and the "Previewer" are used to build applications to visualize and analyze the specific data that drives each project. When used together, the Designer and Previewer provide an efficient round-trip process that dramatically speeds up application development. To visualize Azure Cosmos DB Gremlin API data using this platform, request a [free 60-day evaluation](https://www.tomsawyer.com/get-started) of this tool.
++
+<b>Figure:</b> Tom Sawyer Perspectives in action
+
+[Tom Sawyer Graph Database Browser](https://www.tomsawyer.com/graph-database-browser/) makes it easy to visualize and analyze data in Azure Cosmos DB Gremlin API. The Graph Database Browser helps you see and understand connections in your data without extensive knowledge of the query language or the schema. You can manually define the schema for your project or use schema extraction to create it. So, even less technical users can interact with the data by loading the neighbors of selected nodes and building the visualization in whatever direction they need. Advanced users can execute queries using Gremlin, Cypher, or SPARQL to gain other insights. After you define the schema, you can load the Azure Cosmos DB data into the Perspectives model. With the help of the integrator definition, you can specify the location and configuration for the Gremlin endpoint. Later you can bind elements from the Azure Cosmos DB data source to elements in the Perspectives model and visualize your data.
+
+Users of all skill levels can take advantage of five unique graph layouts to display the graph in a way that provides the most meaning. And there are built-in centrality, clustering, and path-finding analyses to reveal previously unseen patterns. Using these techniques, organizations can identify critical patterns in areas like fraud detection, customer intelligence, and cybersecurity. Pattern recognition is very important for network analysts in areas such as general IT and network management, logistics, legacy system migration, and business transformation. Try a live demo of Tom Sawyer Graph Database Browser.
++
+<b>Figure:</b> Tom Sawyer Database Browser's visualization capabilities
+### Useful links
+
+* [Documentation](https://www.tomsawyer.com/graph-database-browser/)
+
+* [Trial for Tom Sawyer Perspectives](https://www.tomsawyer.com/get-started)
+
+* [Live Demo for Tom Sawyer Databrowser](https://support.tomsawyer.com/demonstrations/graph.database.browser.demo/)
+
+* [Deploy on Azure](https://www.tomsawyer.com/cs/c/?cta_guid=b85cf3fc-2978-426d-afb3-c1f858f38e73&signature=AAH58kGNc5criGRMHSUptSOwyD0Znf3lFw&pageId=41375082967&placement_guid=d6cb1de7-6d51-4a89-a012-5a167870a715&click=7bc863ee-3c45-4509-9334-ac7674b7e75e&hsutk=4fa7e492076c5cecf5f03faad22b4a19&canon=https%3A%2F%2Fwww.tomsawyer.com%2Fgraph-database-browser&utm_referrer=https%3A%2F%2Fwww.tomsawyer.com%2F&portal_id=8313502&redirect_url=APefjpF0sV6YjeRqi4bQCt0-ubf_cmTi_nSs28RvMy55Vk01NIf6jtTaTj3GUMJ9D9z5DvIwvPnfSw89Wj9JCS_7cNss_HxsDmlT7wmeJh7BUyuPNEGYGnhucgeUZUzWGqrEeWmReCZByeMdklbMuikFnwasX6046Op7hKKiuQJx84RGd4fe1Rvq7mRLaaySZxdvLlpMg13N_4xo_GzrHRl4P2_VGZGPRUgkS3EvsvLzfJzH36u2HHDSG6AuU9ZRNgiJiH2wMLAgGQT-vDzkSTnYRb0ljRFHCq9kPjsbVDw1bTn0G9R5ZmTbdskypc49-Ob_49MdHif1ufRA9BMLU3Ks6t9TCVJ6fo4R5255u5FK2_v3Jk10yd7y_EhLqzrAv2ov-TzxDd6b&__hstc=169273150.4fa7e492076c5cecf5f03faad22b4a19.1608290688565.1626359177649.1626364757376.11&__hssc=169273150.1.1626364757376&__hsfp=3487988390&contentType=standard-page)
+
+## Graphistry
+
+Graphistry automatically transforms your data into interactive, visual investigation maps built for the needs of analysts. It can quickly surface relationships between events and entities without having to write queries or wrangle data. You can harness your data without worrying about scale. From security, fraud, and IT investigations to 360° views of customers and supply chains, Graphistry turns the potential of your data into human insight and value.
++
+<b>Figure:</b> Graphistry Visualization snapshot
+
+With Graphistry's GPU client/cloud technology, you can do interactive visualization. By using a standard browser and the cloud, you can use all the data you want and still remain fast, responsive, and interactive. If you want to run it on your own hardware, it's as easy as installing Docker. That way you get the analytical power of GPUs without having to think about GPUs.
++
+<b>Figure:</b> Graphistry in action
+
+### Useful links
+
+* [Documentation](https://www.graphistry.com/docs)
+
+* [Video Guides](https://www.graphistry.com/videos)
+
+* [Deploy on Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/graphistry.graphistry-core-2-24-9)
+
+## Graphlytic
+
+Graphlytic is a highly customizable web application for graph visualization and analysis. Users can interactively explore the graph, look for patterns with the Gremlin language, or use filters to find answers to any graph question. Graph rendering is done with the 'Cytoscape.js' library, which allows Graphlytic to render tens of thousands of nodes and hundreds of thousands of relationships at once.
+
+Graphlytic is compatible with Azure Cosmos DB and can be deployed to Azure in minutes. Graphlytic's UI can be customized and extended in many ways, for instance the default [visualization configuration](https://graphlytic.biz/doc/latest/Visualization_settings.html), [data schema](https://graphlytic.biz/doc/latest/Data_schema.html), [style mappings](https://graphlytic.biz/doc/latest/Style_mappers.html), [virtual properties](https://graphlytic.biz/doc/latest/Virtual_properties.html) in the visualization, or custom implemented [widgets](https://graphlytic.biz/doc/latest/Widgets.html) that can enhance the visualization features with bespoke reports or integrations.
+
+The following are two example scenarios:
+
+* **IT Management use case**
+Companies that run IT operations on their own infrastructure, telco operators, and IP providers all need solid network documentation and functional configuration management. Impact analyses describing interdependencies among network elements (active and passive) are developed to overcome blackouts, which cause significant financial losses, or even single outages causing low or no availability of service. Bottlenecks and single points of failure are determined and solved. Endpoint and route redundancies are implemented.
+Graphlytic property graph visualization is a perfect enabler for all the points mentioned above: network documentation, network configuration management, impact analysis, and asset management. It stores and depicts all relevant network configuration information in one place, bringing completely new added value to IT managers and field technicians.
+
+ :::image type="content" source="./media/graph-visualization-partners/graphlytic/it-management.gif" alt-text="Graphlytic IT Management use case demo" :::
+
+<b>Figure:</b> Graphlytic IT management use case
+
+* **Anti-fraud use case**
+Fraud pattern is a well-known term to every single insurance company, bank or e-commerce enterprise. Modern fraudsters build sophisticated fraud rings and schemes that are hard to unveil with traditional tools. It can cause serious losses if not detected properly and on time. On the other hand, traditional red flag systems with too strict criteria must be adjusted to eliminate false positive indicators, as it would lead to overwhelming fraud indications. Great amounts of time are spent trying to detect complex fraud, paralyzing investigators in their daily tasks.
+The basic idea behind Graphlytic is the fact that the human eye can distinguish and find patterns in a graphical form much more easily than in any table or data set. It means that the anti-fraud analyst can capture fraud schemes within a graph visualization more easily, faster, and smarter than with traditional tools alone.
+
+ :::image type="content" source="./media/graph-visualization-partners/graphlytic/antifraud.gif" alt-text="Graphlytic Fraud detection use case demo":::
+
+<b>Figure:</b> Graphlytic Fraud detection use case demo
+
+### Useful links
+
+* [Documentation](https://graphlytic.biz/doc/)
+* [Free Online Demo](https://graphlytic.biz/demo)
+* [Blog](https://graphlytic.biz/blog)
+* [REST API documentation](https://graphlytic.biz/doc/latest/REST_API.html)
+* [ETL job drivers & examples](https://graphlytic.biz/doc/latest/ETL_jobs.html)
+* [SMTP Email Server Integration](https://graphlytic.biz/doc/latest/SMTP_Email_Server_Connection.html)
+* [Geo Map Server Integration](https://graphlytic.biz/doc/latest/Geo_Map_Server_Integration.html)
+* [Single Sign-on Configuration](https://graphlytic.biz/doc/latest/Single_sign-on.html)
+
+## yWorks
+
+yWorks specializes in the development of professional software solutions that enable the clear visualization of graphs, diagrams, and networks. yWorks has brought together efficient data structures, complex algorithms, and advanced techniques that provide excellent user interaction on a multitude of target platforms. This allows the user to experience highly versatile and sophisticated diagram visualization in applications across many diverse areas.
+
+Azure Cosmos DB can be queried for data using Gremlin, an efficient graph traversal language. The user can query the database for the stored entities and use the relationships to traverse the connected neighborhood. This approach requires in-depth technical knowledge of the database itself and of the Gremlin query language to explore the stored data. With yWorks visualization, in contrast, you can visually explore the Azure Cosmos DB data, identify significant structures, and get a better understanding of relationships. Besides the visual exploration, you can also interactively edit the stored data by modifying the diagram, without any knowledge of the associated query language like Gremlin. This way, it provides a high-quality visualization and can analyze large data sets from Azure Cosmos DB. You can use yFiles to add visualization capabilities to your own applications, dashboards, and reports, or to create new, white-label apps and tools for both in-house and customer-facing products.
++
+<b>Figure:</b> yWorks visualization snapshot
+
+With yWorks, you can create meaningful visualizations that help users gain insights into the data quickly and easily. Build interactive user-interfaces that match your company's corporate design and easily connect to existing infrastructure and services. Use highly sophisticated automatic graph layouts to generate clear visualizations of the data hidden in your Azure Cosmos DB account. Efficient implementations of the most important graph analysis algorithms enable the creation of responsive user interfaces that highlight the information the user is interested in or needs to be aware of. Use yFiles to create interactive apps that work on desktops, and mobile devices alike.
+
+Typical use-cases and data models include:
+
+* Social networks, money laundering data, and cash-flow networks, where similar entities are connected to each other
+* Process data where entities are being processed and move from one state to another
+* Organizational charts and networks, showing team hierarchies, but also majority ownership dependencies and relationships between companies or customers
+* Data lineage information and compliance data that can be visualized, reviewed, and audited
+* Computer network logs, website logs, customer journey logs
+* Knowledge graphs, stored as triplets and in other formats
+* Product Lifecycle Management data
+* Bill of Material lists and Supply Chain data
+
+### Useful links
+
+* [Pricing](https://www.yworks.com/products/yfiles-for-html/pricing)
+* [Visualizing a Microsoft Azure Cosmos DB](https://www.yworks.com/pages/visualizing-a-microsoft-azure-cosmos-db)
+* [yFiles - the diagramming library](https://www.yworks.com/yfiles-overview)
+* [yWorks - Demos](https://www.yworks.com/products/yfiles/demos)
+
+## Next steps
+
+* [Cosmos DB - Gremlin API Pricing](../how-pricing-works.md)
cosmos-db Gremlin Headers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/gremlin-headers.md
+
+ Title: Azure Cosmos DB Gremlin response headers
+description: Reference documentation for server response metadata that enables additional troubleshooting
+++ Last updated : 09/03/2019++++
+# Azure Cosmos DB Gremlin server response headers
+
+This article covers the headers that the Cosmos DB Gremlin server returns to the caller upon request execution. These headers are useful for troubleshooting request performance, building applications that integrate natively with the Cosmos DB service, and simplifying customer support.
+
+Keep in mind that by taking a dependency on these headers, you limit the portability of your application to other Gremlin implementations. In return, you gain tighter integration with Cosmos DB Gremlin. These headers are not a TinkerPop standard.
+
+## Headers
+
+| Header | Type | Sample Value | When Included | Explanation |
+| | | | | |
+| **x-ms-request-charge** | double | 11.3243 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for a partial response message. This header is present in every continuation for requests that have multiple chunks. It reflects the charge of a particular response chunk. Only for requests that consist of a single response chunk does this header match the total cost of the traversal. For the majority of complex traversals, this value represents a partial cost. |
+| **x-ms-total-request-charge** | double | 423.987 | Success and Failure | Amount of collection or database throughput consumed in [request units (RU/s or RUs)](../request-units.md) for entire request. This header is present in every continuation for requests that have multiple chunks. It indicates cumulative charge since the beginning of request. Value of this header in the last chunk indicates complete request charge. |
+| **x-ms-server-time-ms** | double | 13.75 | Success and Failure | This header is included for latency troubleshooting purposes. It indicates the amount of time, in milliseconds, that Cosmos DB Gremlin server took to execute and produce a partial response message. Using value of this header and comparing it to overall request latency applications can calculate network latency overhead. |
+| **x-ms-total-server-time-ms** | double | 130.512 | Success and Failure | Total time, in milliseconds, that Cosmos DB Gremlin server took to execute entire traversal. This header is included in every partial response. It represents cumulative execution time since the start of request. The last response indicates total execution time. This header is useful to differentiate between client and server as a source of latency. You can compare traversal execution time on the client to the value of this header. |
+| **x-ms-status-code** | long | 200 | Success and Failure | Header indicates internal reason for request completion or termination. Application is advised to look at the value of this header and take corrective action. |
+| **x-ms-substatus-code** | long | 1003 | Failure Only | Cosmos DB is a multi-model database that is built on top of unified storage layer. This header contains additional insights about the failure reason when failure occurs within lower layers of high availability stack. Application is advised to store this header and use it when contacting Cosmos DB customer support. Value of this header is useful for Cosmos DB engineer for quick troubleshooting. |
+| **x-ms-retry-after-ms** | string (TimeSpan) | "00:00:03.9500000" | Failure Only | This header is a string representation of a .NET [TimeSpan](/dotnet/api/system.timespan) type. This value is only included in requests that failed due to provisioned throughput exhaustion. The application should resubmit the traversal after the instructed period of time. |
+| **x-ms-activity-id** | string (Guid) | "A9218E01-3A3A-4716-9636-5BD86B056613" | Success and Failure | Header contains a unique server-side identifier of a request. Each request is assigned a unique identifier by the server for tracking purposes. Applications should log activity identifiers returned by the server for requests that customers may want to contact customer support about. Cosmos DB support personnel can find specific requests by these identifiers in Cosmos DB service telemetry. |
+
+## Status codes
+
+The most common codes returned by the server for the `x-ms-status-code` status attribute are listed below.
+
+| Status | Explanation |
+| | |
+| **401** | Error message `"Unauthorized: Invalid credentials provided"` is returned when authentication password doesn't match Cosmos DB account key. Navigate to your Cosmos DB Gremlin account in the Azure portal and confirm that the key is correct.|
+| **404** | Concurrent operations that attempt to delete and update the same edge or vertex simultaneously. Error message `"Owner resource does not exist"` indicates that specified database or collection is incorrect in connection parameters in `/dbs/<database name>/colls/<collection or graph name>` format.|
+| **409** | `"Conflicting request to resource has been attempted. Retry to avoid conflicts."` This usually happens when vertex or an edge with an identifier already exists in the graph.|
+| **412** | Status code is complemented with error message `"PreconditionFailedException": One of the specified pre-condition is not met`. This error is indicative of an optimistic concurrency control violation between reading an edge or vertex and writing it back to the store after modification. The most common situation in which this error occurs is property modification, for example `g.V('identifier').property('name','value')`. The Gremlin engine would read the vertex, modify it, and write it back. If there is another traversal running in parallel trying to write the same vertex or edge, one of them will receive this error. The application should submit the traversal to the server again.|
+| **429** | Request was throttled and should be retried after value in **x-ms-retry-after-ms**|
+| **500** | Error message that contains `"NotFoundException: Entity with the specified id does not exist in the system."` indicates that a database and/or collection was re-created with the same name. This error will disappear within 5 minutes as change propagates and invalidates caches in different Cosmos DB components. To avoid this issue, use unique database and collection names every time.|
+| **1000** | This status code is returned when server successfully parsed a message but wasn't able to execute. It usually indicates a problem with the query.|
+| **1001** | This code is returned when server completes traversal execution but fails to serialize response back to the client. This error can happen when traversal generates complex result, that is too large or does not conform to TinkerPop protocol specification. Application should simplify the traversal when it encounters this error. |
+| **1003** | `"Query exceeded memory limit. Bytes Consumed: XXX, Max: YYY"` is returned when traversal exceeds allowed memory limit. Memory limit is **2 GB** per traversal.|
+| **1004** | This status code indicates malformed graph request. Request can be malformed when it fails deserialization, non-value type is being deserialized as value type or unsupported gremlin operation requested. Application should not retry the request because it will not be successful. |
+| **1007** | Usually this status code is returned with error message `"Could not process request. Underlying connection has been closed."`. This situation can happen if the client driver attempts to use a connection that is being closed by the server. The application should retry the traversal on a different connection. |
+| **1008** | Cosmos DB Gremlin server can terminate connections to rebalance traffic in the cluster. Client drivers should handle this situation and use only live connections to send requests to the server. Occasionally client drivers may not detect that a connection was closed. When the application encounters an error, `"Connection is too busy. Please retry after sometime or open more connections."` it should retry the traversal on a different connection. |
+| **1009** | The operation did not complete in the allotted time and was canceled by the server. Optimize your traversals to run quickly by filtering vertices or edges on every hop of traversal to narrow search scope. Request timeout default is **60 seconds**. |
+
+## Samples
+
+A sample client application based on Gremlin.Net that reads the status code and total request charge attributes:
+
+```csharp
+// Following example reads a status code and total request charge from server response attributes.
+// Variable "server" is assumed to be assigned to an instance of a GremlinServer that is connected to Cosmos DB account.
+using (GremlinClient client = new GremlinClient(server, new GraphSON2Reader(), new GraphSON2Writer(), GremlinClient.GraphSON2MimeType))
+{
+ ResultSet<dynamic> responseResultSet = await GremlinClientExtensions.SubmitAsync<dynamic>(client, requestScript: "g.V().count()");
+ long statusCode = (long)responseResultSet.StatusAttributes["x-ms-status-code"];
+ double totalRequestCharge = (double)responseResultSet.StatusAttributes["x-ms-total-request-charge"];
+
+ // Status code and request charge are logged into application telemetry.
+}
+```
+
+An example that demonstrates how to read status attributes from the Gremlin Java client:
+
+```java
+// Required imports (assumed):
+// import java.time.Duration;
+// import java.time.LocalTime;
+// import java.util.List;
+// import java.util.Map;
+// import org.apache.tinkerpop.gremlin.driver.Result;
+// import org.apache.tinkerpop.gremlin.driver.ResultSet;
+// import org.apache.tinkerpop.gremlin.driver.exception.ResponseException;
+
+try {
+    ResultSet resultSet = this.client.submit("g.addV().property('id', '13')");
+    List<Result> results = resultSet.all().get();
+
+    // Process and consume results
+
+} catch (ResponseException re) {
+    // Check for known errors that need to be retried or skipped
+    if (re.getStatusAttributes().isPresent()) {
+        Map<String, Object> attributes = re.getStatusAttributes().get();
+        int statusCode = (int) attributes.getOrDefault("x-ms-status-code", -1);
+
+        // Now we can check for specific conditions
+        if (statusCode == 409) {
+            // Handle conflicting writes
+        }
+
+        // Check if we need to delay the retry
+        if (attributes.containsKey("x-ms-retry-after-ms")) {
+            // Read the value of the attribute as is
+            String retryAfterTimeSpan = (String) attributes.get("x-ms-retry-after-ms");
+
+            // Convert the value into an actionable duration
+            LocalTime localTime = LocalTime.parse(retryAfterTimeSpan);
+            Duration duration = Duration.between(LocalTime.MIN, localTime);
+
+            // Perform a retry after "duration" interval of time has elapsed
+        }
+    }
+}
+```
+
+## Next steps
+* [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb)
+* [Common Azure Cosmos DB REST response headers](/rest/api/cosmos-db/common-cosmosdb-rest-response-headers)
+* [TinkerPop Graph Driver Provider Requirements]( http://tinkerpop.apache.org/docs/current/dev/provider/#_graph_driver_provider_requirements)
cosmos-db Gremlin Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/gremlin-limits.md
+
+ Title: Limits of Azure Cosmos DB Gremlin
+description: Reference documentation for runtime limitations of Graph engine
+++ Last updated : 10/04/2019++++
+# Azure Cosmos DB Gremlin limits
+
+This article describes the limits of the Azure Cosmos DB Gremlin engine and explains how they may impact customer traversals.
+
+Cosmos DB Gremlin is built on top of Cosmos DB infrastructure. Due to this, all limits explained in [Azure Cosmos DB service limits](../concepts-limits.md) still apply.
+
+## Limits
+
+When a Gremlin limit is reached, the traversal is canceled with an **x-ms-status-code** of 429, indicating a throttling error. See [Gremlin server response headers](gremlin-headers.md) for more information.
+
+**Resource** | **Default limit** | **Explanation**
+ | |
+*Script length* | **64 KB** | Maximum length of a Gremlin traversal script per request.
+*Operator depth* | **400** | Total number of unique steps in a traversal. For example, ```g.V().out()``` has an operator count of 2: V() and out(), ```g.V('label').repeat(out()).times(100)``` has operator depth of 3: V(), repeat(), and out() because ```.times(100)``` is a parameter to ```.repeat()``` operator.
+*Degree of parallelism* | **32** | Maximum number of storage partitions queried in a single request to storage layer. Graphs with hundreds of partitions will be impacted by this limit.
+*Repeat limit* | **32** | Maximum number of iterations a ```.repeat()``` operator can execute. Each iteration of ```.repeat()``` step in most cases runs breadth-first traversal, which means that any traversal is limited to at most 32 hops between vertices.
+*Traversal timeout* | **30 seconds** | Traversal will be canceled when it exceeds this time. Cosmos DB Graph is an OLTP database with vast majority of traversals completing within milliseconds. To run OLAP queries on Cosmos DB Graph, use [Apache Spark](https://azure.microsoft.com/services/cosmos-db/) with [Graph Data Frames](https://spark.apache.org/docs/latest/sql-programming-guide.html#datasets-and-dataframes) and [Cosmos DB Spark Connector](https://github.com/Azure/azure-cosmosdb-spark).
+*Idle connection timeout* | **1 hour** | Amount of time the Gremlin service will keep idle websocket connections open. TCP keep-alive packets or HTTP keep-alive requests don't extend connection lifespan beyond this limit. Cosmos DB Graph engine considers websocket connections to be idle if there are no active Gremlin requests running on it.
+*Resource token per hour* | **100** | Number of unique resource tokens used by Gremlin clients to connect to Gremlin account in a region. When the application exceeds hourly unique token limit, `"Exceeded allowed resource token limit of 100 that can be used concurrently"` will be returned on the next authentication request.
+
+## Next steps
+* [Azure Cosmos DB Gremlin response headers](gremlin-headers.md)
+* [Azure Cosmos DB Resource Tokens with Gremlin](how-to-use-resource-tokens-gremlin.md)
cosmos-db Gremlin Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/graph/gremlin-support.md
+
+ Title: Azure Cosmos DB Gremlin support and compatibility with TinkerPop features
+description: Learn about the Gremlin language from Apache TinkerPop. Learn which features and steps are available in Azure Cosmos DB and the TinkerPop Graph engine compatibility differences.
+++ Last updated : 07/06/2021++++
+# Azure Cosmos DB Gremlin graph support and compatibility with TinkerPop features
+
+Azure Cosmos DB supports [Apache Tinkerpop's](https://tinkerpop.apache.org) graph traversal language, known as [Gremlin](https://tinkerpop.apache.org/docs/3.3.2/reference/#graph-traversal-steps). You can use the Gremlin language to create graph entities (vertices and edges), modify properties within those entities, perform queries and traversals, and delete entities.
+
+The Azure Cosmos DB Graph engine closely follows the [Apache TinkerPop](https://tinkerpop.apache.org/docs/current/reference/#graph-traversal-steps) traversal steps specification, but there are differences in the implementation that are specific to Azure Cosmos DB. In this article, we provide a quick walkthrough of Gremlin and enumerate the Gremlin features that are supported by the Gremlin API.
+
+## Compatible client libraries
+
+The following table shows popular Gremlin drivers that you can use against Azure Cosmos DB:
+
+| Download | Source | Getting Started | Supported connector version |
+| | | | |
+| [.NET](https://tinkerpop.apache.org/docs/3.4.6/reference/#gremlin-DotNet) | [Gremlin.NET on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-dotnet) | [Create Graph using .NET](create-graph-dotnet.md) | 3.4.6 |
+| [Java](https://mvnrepository.com/artifact/com.tinkerpop.gremlin/gremlin-java) | [Gremlin JavaDoc](https://tinkerpop.apache.org/javadocs/current/full/) | [Create Graph using Java](create-graph-java.md) | 3.2.0+ |
+| [Node.js](https://www.npmjs.com/package/gremlin) | [Gremlin-JavaScript on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-javascript) | [Create Graph using Node.js](create-graph-nodejs.md) | 3.3.4+ |
+| [Python](https://tinkerpop.apache.org/docs/3.3.1/reference/#gremlin-python) | [Gremlin-Python on GitHub](https://github.com/apache/tinkerpop/tree/master/gremlin-python) | [Create Graph using Python](create-graph-python.md) | 3.2.7 |
+| [PHP](https://packagist.org/packages/brightzone/gremlin-php) | [Gremlin-PHP on GitHub](https://github.com/PommeVerte/gremlin-php) | [Create Graph using PHP](create-graph-php.md) | 3.1.0 |
+| [Go Lang](https://github.com/supplyon/gremcos/) | [Go Lang](https://github.com/supplyon/gremcos/) | | This library is built by external contributors. The Azure Cosmos DB team doesn't offer any support or maintain the library. |
+| [Gremlin console](https://tinkerpop.apache.org/downloads.html) | [TinkerPop docs](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console) | [Create Graph using Gremlin Console](create-graph-console.md) | 3.2.0 + |
+
+## Supported Graph Objects
+
+TinkerPop is a standard that covers a wide range of graph technologies. Therefore, it has standard terminology to describe what features are provided by a graph provider. Azure Cosmos DB provides a persistent, high concurrency, writeable graph database that can be partitioned across multiple servers or clusters.
+
+The following table lists the TinkerPop features that are implemented by Azure Cosmos DB:
+
+| Category | Azure Cosmos DB implementation | Notes |
+| | | |
+| Graph features | Provides Persistence and ConcurrentAccess. Designed to support Transactions | Computer methods can be implemented via the Spark connector. |
+| Variable features | Supports Boolean, Byte, Double, Float, Integer, Long, String | Supports primitive types, is compatible with complex types via data model |
+| Vertex features | Supports RemoveVertices, MetaProperties, AddVertices, MultiProperties, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting vertices |
+| Vertex property features | StringIds, UserSuppliedIds, AddProperty, RemoveProperty, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting vertex properties |
+| Edge features | AddEdges, RemoveEdges, StringIds, UserSuppliedIds, AddProperty, RemoveProperty | Supports creating, modifying, and deleting edges |
+| Edge property features | Properties, BooleanValues, ByteValues, DoubleValues, FloatValues, IntegerValues, LongValues, StringValues | Supports creating, modifying, and deleting edge properties |
+
+## Gremlin wire format
+
+Azure Cosmos DB uses the JSON format when returning results from Gremlin operations. For example, the following snippet shows a JSON representation of a vertex *returned to the client* from Azure Cosmos DB:
+
+```json
+ {
+ "id": "a7111ba7-0ea1-43c9-b6b2-efc5e3aea4c0",
+ "label": "person",
+ "type": "vertex",
+ "outE": {
+ "knows": [
+ {
+ "id": "3ee53a60-c561-4c5e-9a9f-9c7924bc9aef",
+ "inV": "04779300-1c8e-489d-9493-50fd1325a658"
+ },
+ {
+ "id": "21984248-ee9e-43a8-a7f6-30642bc14609",
+ "inV": "a8e3e741-2ef7-4c01-b7c8-199f8e43e3bc"
+ }
+ ]
+ },
+ "properties": {
+ "firstName": [
+ {
+ "value": "Thomas"
+ }
+ ],
+ "lastName": [
+ {
+ "value": "Andersen"
+ }
+ ],
+ "age": [
+ {
+ "value": 45
+ }
+ ]
+ }
+ }
+```
+
+The properties used by the JSON format for vertices are described below:
+
+| Property | Description |
+| | |
+| `id` | The ID for the vertex. Must be unique (in combination with the value of `_partition` if applicable). If no value is provided, it will be automatically supplied with a GUID |
+| `label` | The label of the vertex. This property is used to describe the entity type. |
+| `type` | Used to distinguish vertices from non-graph documents |
+| `properties` | Bag of user-defined properties associated with the vertex. Each property can have multiple values. |
+| `_partition` | The partition key of the vertex. Used for [graph partitioning](graph-partitioning.md). |
+| `outE` | This property contains a list of out edges from a vertex. Storing the adjacency information with vertex allows for fast execution of traversals. Edges are grouped based on their labels. |
+
+Each property can store multiple values within an array.
+
+| Property | Description |
+| | |
+| `value` | The value of the property |
+
+And the edge contains the following information to help with navigation to other parts of the graph.
+
+| Property | Description |
+| | |
+| `id` | The ID for the edge. Must be unique (in combination with the value of `_partition` if applicable) |
+| `label` | The label of the edge. This property is optional, and used to describe the relationship type. |
+| `inV` | This property contains a list of in vertices for an edge. Storing the adjacency information with the edge allows for fast execution of traversals. Vertices are grouped based on their labels. |
+| `properties` | Bag of user-defined properties associated with the edge. |
+
+## Gremlin steps
+
+Now let's look at the Gremlin steps supported by Azure Cosmos DB. For a complete reference on Gremlin, see [TinkerPop reference](https://tinkerpop.apache.org/docs/3.3.2/reference).
+
+| Step | Description | TinkerPop Documentation |
+| | | |
+| `addE` | Adds an edge between two vertices | [addE step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addedge-step) |
+| `addV` | Adds a vertex to the graph | [addV step](https://tinkerpop.apache.org/docs/3.3.2/reference/#addvertex-step) |
+| `and` | Ensures that all the traversals return a value | [and step](https://tinkerpop.apache.org/docs/3.3.2/reference/#and-step) |
+| `as` | A step modulator to assign a variable to the output of a step | [as step](https://tinkerpop.apache.org/docs/3.3.2/reference/#as-step) |
+| `by` | A step modulator used with `group` and `order` | [by step](https://tinkerpop.apache.org/docs/3.3.2/reference/#by-step) |
+| `coalesce` | Returns the first traversal that returns a result | [coalesce step](https://tinkerpop.apache.org/docs/3.3.2/reference/#coalesce-step) |
+| `constant` | Returns a constant value. Used with `coalesce`| [constant step](https://tinkerpop.apache.org/docs/3.3.2/reference/#constant-step) |
+| `count` | Returns the count from the traversal | [count step](https://tinkerpop.apache.org/docs/3.3.2/reference/#count-step) |
+| `dedup` | Returns the values with the duplicates removed | [dedup step](https://tinkerpop.apache.org/docs/3.3.2/reference/#dedup-step) |
+| `drop` | Drops the values (vertex/edge) | [drop step](https://tinkerpop.apache.org/docs/3.3.2/reference/#drop-step) |
+| `executionProfile` | Creates a description of all operations generated by the executed Gremlin step | [executionProfile step](graph-execution-profile.md) |
+| `fold` | Acts as a barrier that computes the aggregate of results| [fold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#fold-step) |
+| `group` | Groups the values based on the labels specified| [group step](https://tinkerpop.apache.org/docs/3.3.2/reference/#group-step) |
+| `has` | Used to filter properties, vertices, and edges. Supports `hasLabel`, `hasId`, `hasNot`, and `has` variants. | [has step](https://tinkerpop.apache.org/docs/3.3.2/reference/#has-step) |
+| `inject` | Inject values into a stream| [inject step](https://tinkerpop.apache.org/docs/3.3.2/reference/#inject-step) |
+| `is` | Used to perform a filter using a boolean expression | [is step](https://tinkerpop.apache.org/docs/3.3.2/reference/#is-step) |
+| `limit` | Used to limit number of items in the traversal| [limit step](https://tinkerpop.apache.org/docs/3.3.2/reference/#limit-step) |
+| `local` | Local wraps a section of a traversal, similar to a subquery | [local step](https://tinkerpop.apache.org/docs/3.3.2/reference/#local-step) |
+| `not` | Used to produce the negation of a filter | [not step](https://tinkerpop.apache.org/docs/3.3.2/reference/#not-step) |
+| `optional` | Returns the result of the specified traversal if it yields a result else it returns the calling element | [optional step](https://tinkerpop.apache.org/docs/3.3.2/reference/#optional-step) |
+| `or` | Ensures at least one of the traversals returns a value | [or step](https://tinkerpop.apache.org/docs/3.3.2/reference/#or-step) |
+| `order` | Returns results in the specified sort order | [order step](https://tinkerpop.apache.org/docs/3.3.2/reference/#order-step) |
+| `path` | Returns the full path of the traversal | [path step](https://tinkerpop.apache.org/docs/3.3.2/reference/#path-step) |
+| `project` | Projects the properties as a Map | [project step](https://tinkerpop.apache.org/docs/3.3.2/reference/#project-step) |
+| `properties` | Returns the properties for the specified labels | [properties step](https://tinkerpop.apache.org/docs/3.3.2/reference/#_properties_step) |
+| `range` | Filters to the specified range of values| [range step](https://tinkerpop.apache.org/docs/3.3.2/reference/#range-step) |
+| `repeat` | Repeats the step for the specified number of times. Used for looping | [repeat step](https://tinkerpop.apache.org/docs/3.3.2/reference/#repeat-step) |
+| `sample` | Used to sample results from the traversal | [sample step](https://tinkerpop.apache.org/docs/3.3.2/reference/#sample-step) |
+| `select` | Used to project results from the traversal | [select step](https://tinkerpop.apache.org/docs/3.3.2/reference/#select-step) |
+| `store` | Used for non-blocking aggregates from the traversal | [store step](https://tinkerpop.apache.org/docs/3.3.2/reference/#store-step) |
+| `TextP.startingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the beginning of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.endingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the ending of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.containing(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property with the contents of a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notStartingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't start with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notEndingWith(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't end with a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `TextP.notContaining(string)` | String filtering function. This function is used as a predicate for the `has()` step to match a property that doesn't contain a given string | [TextP predicates](https://tinkerpop.apache.org/docs/3.4.0/reference/#a-note-on-predicates) |
+| `tree` | Aggregate paths from a vertex into a tree | [tree step](https://tinkerpop.apache.org/docs/3.3.2/reference/#tree-step) |
+| `unfold` | Unroll an iterator as a step| [unfold step](https://tinkerpop.apache.org/docs/3.3.2/reference/#unfold-step) |
+| `union` | Merge results from multiple traversals| [union step](https://tinkerpop.apache.org/docs/3.3.2/reference/#union-step) |
+| `V` | Includes the steps necessary for traversals between vertices and edges: `V`, `E`, `out`, `in`, `both`, `outE`, `inE`, `bothE`, `outV`, `inV`, `bothV`, and `otherV` | [vertex steps](https://tinkerpop.apache.org/docs/3.3.2/reference/#vertex-steps) |
+| `where` | Used to filter results from the traversal. Supports `eq`, `neq`, `lt`, `lte`, `gt`, `gte`, and `between` operators | [where step](https://tinkerpop.apache.org/docs/3.3.2/reference/#where-step) |
+
+The write-optimized engine provided by Azure Cosmos DB supports automatic indexing of all properties within vertices and edges by default. Therefore, queries with filters, range queries, sorting, or aggregates on any property are processed from the index, and served efficiently. For more information on how indexing works in Azure Cosmos DB, see our paper on [schema-agnostic indexing](https://www.vldb.org/pvldb/vol8/p1668-shukla.pdf).
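+
+For example, a query that combines a filter, a sort, and a range on arbitrary properties is served from the index (the label and property names below are hypothetical):
+
+```java
+// Filter on one property, sort by another, and return the first ten results.
+g.V().has('category', 'A').order().by('age').range(0, 10)
+```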
+
+## Behavior differences
+
+* Azure Cosmos DB Graph engine runs ***breadth-first*** traversal while TinkerPop Gremlin is depth-first. This behavior achieves better performance in a horizontally scalable system like Cosmos DB.
+
+## Unsupported features
+
+* ***[Gremlin Bytecode](https://tinkerpop.apache.org/docs/current/tutorials/gremlin-language-variants/)*** is a programming language agnostic specification for graph traversals. Cosmos DB Graph doesn't support it yet. Use `GremlinClient.SubmitAsync()` and pass traversal as a text string.
+
+* ***`property(set, 'xyz', 1)`*** set cardinality isn't supported today. Use `property(list, 'xyz', 1)` instead. To learn more, see [Vertex properties with TinkerPop](http://tinkerpop.apache.org/docs/current/reference/#vertex-properties).
+
+* The ***`match()` step*** isn't currently available. This step provides declarative querying capabilities.
+
+* ***Objects as properties*** on vertices or edges aren't supported. Properties can only be primitive types or arrays.
+
+* ***Sorting by array properties*** `order().by(<array property>)` isn't supported. Sorting is supported only by primitive types.
+
+* ***Non-primitive JSON types*** aren't supported. Use `string`, `number`, or `true`/`false` types. `null` values aren't supported.
+
+* ***GraphSONv3*** serializer isn't currently supported. Use `GraphSONv2` Serializer, Reader, and Writer classes in the connection configuration. The results returned by the Azure Cosmos DB Gremlin API don't have the same format as the GraphSON format.
+
+* **Lambda expressions and functions** aren't currently supported. This includes the `.map{<expression>}`, the `.by{<expression>}`, and the `.filter{<expression>}` functions. To learn more, and to learn how to rewrite them using Gremlin steps, see [A Note on Lambdas](http://tinkerpop.apache.org/docs/current/reference/#a-note-on-lambdas).
+
+* ***Transactions*** aren't supported because of the distributed nature of the system. Configure an appropriate consistency model on the Gremlin account to "read your own writes", and use optimistic concurrency to resolve conflicting writes.
+
+## Known limitations
+
+* **Index utilization for Gremlin queries with mid-traversal `.V()` steps**: Currently, only the first `.V()` call of a traversal will make use of the index to resolve any filters or predicates attached to it. Subsequent calls will not consult the index, which might increase the latency and cost of the query.
+
+Assuming default indexing, a typical read Gremlin query that starts with the `.V()` step would use parameters in its attached filtering steps, such as `.has()` or `.where()` to optimize the cost and performance of the query. For example:
+
+```java
+g.V().has('category', 'A')
+```
+
+However, when more than one `.V()` step is included in the Gremlin query, the resolution of the data for the query might not be optimal. Take the following query as an example:
+
+```java
+g.V().has('category', 'A').as('a').V().has('category', 'B').as('b').select('a', 'b')
+```
+
+This query will return two groups of vertices based on their property called `category`. In this case, only the first call, `g.V().has('category', 'A')` will make use of the index to resolve the vertices based on the values of their properties.
+
+A workaround for this query is to use subtraversal steps such as `.map()` and `union()`. This is exemplified below:
+
+```java
+// Query workaround using .map()
+g.V().has('category', 'A').as('a').map(__.V().has('category', 'B')).as('b').select('a','b')
+
+// Query workaround using .union()
+g.V().has('category', 'A').fold().union(unfold(), __.V().has('category', 'B'))
+```
+
+You can review the performance of the queries by using the [Gremlin `executionProfile()` step](graph-execution-profile.md).
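+
+For example, appending the step to one of the queries above returns a breakdown of the operations executed and their cost:
+
+```java
+g.V().has('category', 'A').executionProfile()
+```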
+
+## Next steps
+
+* Get started building a graph application [using our SDKs](create-graph-dotnet.md)
+* Learn more about [graph support](graph-introduction.md) in Azure Cosmos DB
cosmos-db How To Create Container Gremlin