Updates from: 06/16/2023 01:10:22
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
In addition, users that are enabled for SMS sign-in cannot be synchronized throu
Provisioning manager attributes isn't supported.
-### Universal people search
-
-It's possible for synchronized users to appear in the global address list (GAL) of the target tenant for people search scenarios, but it isn't enabled by default. In attribute mappings for a configuration, you must update the value for the **showInAddressList** attribute. Set the mapping type as constant with a default value of `True`. For any newly created B2B collaboration users, the showInAddressList attribute will be set to true and they'll appear in people search scenarios. For more information, see [Configure cross-tenant synchronization](../multi-tenant-organizations/cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings).
+### Updating the showInAddressList property fails
For existing B2B collaboration users, the showInAddressList attribute will be updated as long as the B2B collaboration user doesn't have a mailbox enabled in the target tenant. If the mailbox is enabled in the target tenant, use the [Set-MailUser](/powershell/module/exchange/set-mailuser) PowerShell cmdlet to set the HiddenFromAddressListsEnabled property to a value of $false.
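The article points at `Set-MailUser` for the mailbox-enabled case. A minimal sketch of that workaround, assuming the ExchangeOnlineManagement module is installed and using `user@contoso.com` as a placeholder identity:

```powershell
# Connect to Exchange Online first (assumes the ExchangeOnlineManagement module is installed)
Connect-ExchangeOnline

# Check the current value for the B2B collaboration user ('user@contoso.com' is a placeholder)
Get-MailUser -Identity "user@contoso.com" | Select-Object HiddenFromAddressListsEnabled

# Unhide the user so they appear in the GAL of the target tenant
Set-MailUser -Identity "user@contoso.com" -HiddenFromAddressListsEnabled $false
```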
active-directory How To User Flow Sign Up Sign In Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-sign-up-sign-in-customers.md
Previously updated : 06/06/2023 Last updated : 06/14/2023
Follow these steps to create a user flow a customer can use to sign in or sign u
- **Email one-time passcode**: Allows new users to sign up and sign in using an email address as the sign-in name and email one-time passcode as their first-factor authentication method.

  > [!NOTE]
- > Other identity providers will be listed here only after you set up federation with them. For example, if you set up federation with [Google](how-to-google-federation-customers.md) or [Facebook](how-to-facebook-federation-customers.md), you'll be able to select them here ([learn more](concept-authentication-methods-customers.md)).
+ > The **Azure Active Directory Sign up** option is unavailable because although customers can sign up for a local account using an email from another Azure AD organization, Azure AD federation isn't used to authenticate them. **[Google](how-to-google-federation-customers.md)** and **[Facebook](how-to-facebook-federation-customers.md)** become available only after you set up federation with them. [Learn more about authentication methods and identity providers](concept-authentication-methods-customers.md).
:::image type="content" source="media/how-to-user-flow-sign-up-sign-in-customers/create-user-flow-identity-providers.png" alt-text="Screenshot of Identity provider options on the Create a user flow page.":::
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about these updates, see [Filter audit logs](../reports-mon
-## June 2019
-### New riskDetections API for Microsoft Graph (Public preview)
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-We're pleased to announce the new riskDetections API for Microsoft Graph is now in public preview. You can use this new API to view a list of your organization's Identity Protection-related user and sign-in risk detections. You can also use this API to more efficiently query your risk detections, including details about the detection type, status, level, and more.
-
-For more information, see the [Risk detection API reference documentation](/graph/api/resources/riskdetection).
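As a hedged illustration of how you might query this API: the endpoint path and OData query options below follow standard Microsoft Graph conventions, but the `risk_detections_url` helper is purely illustrative, not part of any SDK.

```python
# Illustrative sketch: build a Microsoft Graph request URL for the riskDetections API.
# You would GET this URL with an "Authorization: Bearer <token>" header obtained via OAuth2.
from typing import Optional
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def risk_detections_url(filter_expr: Optional[str] = None, top: Optional[int] = None) -> str:
    """Build a riskDetections request URL with optional $filter/$top query options."""
    params = {}
    if filter_expr:
        params["$filter"] = filter_expr
    if top:
        params["$top"] = str(top)
    query = "?" + urlencode(params) if params else ""
    return f"{GRAPH_BASE}/identityProtection/riskDetections{query}"

# Example: request only the high-risk detections
url = risk_detections_url(filter_expr="riskLevel eq 'high'", top=10)
print(url)
```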
---
-### New Federated Apps available in Azure AD app gallery - June 2019
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In June 2019, we've added these 22 new apps with Federation support to the app gallery:
-
-[Azure AD SAML Toolkit](../saas-apps/saml-toolkit-tutorial.md), [Otsuka Shokai (大塚商会)](../saas-apps/otsuka-shokai-tutorial.md), [ANAQUA](../saas-apps/anaqua-tutorial.md), [Azure VPN Client](https://portal.azure.com/), [ExpenseIn](../saas-apps/expensein-tutorial.md), [Helper Helper](../saas-apps/helper-helper-tutorial.md), [Costpoint](../saas-apps/costpoint-tutorial.md), [GlobalOne](../saas-apps/globalone-tutorial.md), [Mercedes-Benz In-Car Office](https://me.secure.mercedes-benz.com/), [Skore](https://app.justskore.it/), [Oracle Cloud Infrastructure Console](../saas-apps/oracle-cloud-tutorial.md), [CyberArk SAML Authentication](../saas-apps/cyberark-saml-authentication-tutorial.md), [Scrible Edu](https://www.scrible.com/sign-in/#/create-account), [PandaDoc](../saas-apps/pandadoc-tutorial.md), [Vtiger CRM (SAML)](../saas-apps/vtiger-crm-saml-tutorial.md), Oracle Access Manager for Oracle Retail Merchandising, Oracle Access Manager for Oracle E-Business Suite, Oracle IDCS for E-Business Suite, Oracle IDCS for PeopleSoft, Oracle IDCS for JD Edwards
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Automate user account provisioning for these newly supported SaaS apps
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Monitoring & Reporting
-
-You can now automate creating, updating, and deleting user accounts for these newly integrated apps:
-
-- [Zoom](../saas-apps/zoom-provisioning-tutorial.md)
-- [Envoy](../saas-apps/envoy-provisioning-tutorial.md)
-- [Proxyclick](../saas-apps/proxyclick-provisioning-tutorial.md)
-- [4me](../saas-apps/4me-provisioning-tutorial.md)
-
-For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md)
---
-### View the real-time progress of the Azure AD provisioning service
-
-**Type:** Changed feature
-**Service category:** App Provisioning
-**Product capability:** Identity Lifecycle Management
-
-We've updated the Azure AD provisioning experience to include a new progress bar that shows you how far you are in the user provisioning process. This updated experience also provides information about the number of users provisioned during the current cycle, as well as how many users have been provisioned to date.
-
-For more information, see [Check the status of user provisioning](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md).
---
-### Company branding now appears on sign out and error screens
-
-**Type:** Changed feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-We've updated Azure AD so that your company branding now appears on the sign out and error screens, as well as the sign-in page. You don't have to do anything to turn on this feature; Azure AD simply uses the assets you've already set up in the **Company branding** area of the Azure portal.
-
-For more information about setting up your company branding, see [Add branding to your organization's Azure Active Directory pages](./customize-branding.md).
---
-### Azure Active Directory Multi-Factor Authentication (MFA) Server is no longer available for new deployments
-
-**Type:** Deprecated
-**Service category:** MFA
-**Product capability:** Identity Security & Protection
-
-As of July 1, 2019, Microsoft will no longer offer multifactor authentication (MFA) Server for new deployments. New customers who want to require multifactor authentication in their organization must now use cloud-based Azure AD Multi-Factor Authentication. Customers who activated multifactor authentication (MFA) Server prior to July 1 won't see a change. You'll still be able to download the latest version, get future updates, and generate activation credentials.
-
-For more information, see [Getting started with the Azure Active Directory Multi-Factor Authentication Server](../authentication/howto-mfaserver-deploy.md). For more information about cloud-based Azure AD Multi-Factor Authentication, see [Planning a cloud-based Azure AD Multi-Factor Authentication deployment](../authentication/howto-mfa-getstarted.md).
---
-## May 2019
-
-### Service change: Future support for only TLS 1.2 protocols on the Application Proxy service
-
-**Type:** Plan for change
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-To help provide best-in-class encryption for our customers, we're limiting access to only TLS 1.2 protocols on the Application Proxy service. This change is gradually being rolled out to customers who are already only using TLS 1.2 protocols, so you shouldn't see any changes.
-
-Deprecation of TLS 1.0 and TLS 1.1 happens on August 31, 2019, but we'll provide additional advanced notice, so you'll have time to prepare for this change. To prepare for this change make sure your client-server and browser-server combinations, including any clients your users use to access apps published through Application Proxy, are updated to use the TLS 1.2 protocol to maintain the connection to the Application Proxy service. For more information, see [Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md#prerequisites).
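As a quick local sanity check (illustrative, not from the article), you can confirm that the TLS stack your Python clients use supports TLS 1.2 and pin it as the minimum version:

```python
# Confirm the local TLS stack supports TLS 1.2, which Application Proxy will require
# once TLS 1.0/1.1 are deprecated, and refuse anything older in new connections.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older than TLS 1.2

print(ssl.OPENSSL_VERSION)  # the library that actually negotiates TLS
print(ssl.HAS_TLSv1_2)      # True when TLS 1.2 is available
```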
---
-### Use the usage and insights report to view your app-related sign-in data
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Monitoring & Reporting
-
-You can now use the usage and insights report, located in the **Enterprise applications** area of the Azure portal, to get an application-centric view of your sign-in data, including info about:
-
-- Top used apps for your organization
-- Apps with the most failed sign-ins
-- Top sign-in errors for each app
-
-For more information about this feature, see [Usage and insights report in the Azure portal](../reports-monitoring/concept-usage-insights-report.md)
---
-### Automate your user provisioning to cloud apps using Azure AD
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** Monitoring & Reporting
-
-Follow these new tutorials to use the Azure AD Provisioning Service to automate the creation, deletion, and updating of user accounts for the following cloud-based apps:
-
-- [Comeet](../saas-apps/comeet-recruiting-software-provisioning-tutorial.md)
-- [DynamicSignal](../saas-apps/dynamic-signal-provisioning-tutorial.md)
-- [KeeperSecurity](../saas-apps/keeper-password-manager-digitalvault-provisioning-tutorial.md)
-
-You can also follow this new [Dropbox tutorial](../saas-apps/dropboxforbusiness-provisioning-tutorial.md), which provides info about how to provision group objects.
-
-For more information about how to better secure your organization through automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
--
-### Identity secure score is now available in Azure AD (General availability)
-**Type:** New feature
-**Service category:** N/A
-**Product capability:** Identity Security & Protection
-
-You can now monitor and improve your identity security posture by using the identity secure score feature in Azure AD. The identity secure score feature uses a single dashboard to help you:
-
-- Objectively measure your identity security posture, based on a score between 1 and 223.
-- Plan for your identity security improvements
-- Review the success of your security improvements
-
-For more information about the identity security score feature, see [What is the identity secure score in Azure Active Directory?](./identity-secure-score.md).
---
-### New App registrations experience is now available (General availability)
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** Developer Experience
-
-The new [App registrations](https://aka.ms/appregistrations) experience is now in general availability. This new experience includes all the key features you're familiar with from the Azure portal and the Application Registration portal and improves upon them through:
-
-- **Better app management.** Instead of seeing your apps across different portals, you can now see all your apps in one location.
-- **Simplified app registration.** From the improved navigation experience to the revamped permission selection experience, it's now easier to register and manage your apps.
-- **More detailed information.** You can find more details about your app, including quickstart guides and more.
-
-For more information, see [Microsoft identity platform](../develop/index.yml) and the [App registrations experience is now generally available!](https://developer.microsoft.com/identity/blogs/new-app-registrations-experience-is-now-generally-available/) blog announcement.
---
-### New capabilities available in the Risky Users API for Identity Protection
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-We're pleased to announce that you can now use the Risky Users API to retrieve users' risk history, dismiss risky users, and confirm users as compromised. This change helps you to more efficiently update the risk status of your users and understand their risk history.
-
-For more information, see the [Risky Users API reference documentation](/graph/api/resources/riskyuser).
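As a hedged illustration of one of these actions: the sketch below shapes a request for the Risky Users API's `confirmCompromised` action. The endpoint path follows Microsoft Graph conventions; the user ID is a placeholder, and you would POST the payload with a bearer token obtained separately.

```python
# Illustrative sketch: payload for confirming a user as compromised via the Risky Users API.
import json

endpoint = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/confirmCompromised"
payload = json.dumps({"userIds": ["00000000-0000-0000-0000-000000000001"]})

# POST `payload` to `endpoint` with header: Authorization: Bearer <token>
print(endpoint)
print(payload)
```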
---
-### New Federated Apps available in Azure AD app gallery - May 2019
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-In May 2019, we've added these 21 new apps with Federation support to the app gallery:
-[Freedcamp](../saas-apps/freedcamp-tutorial.md), [Real Links](../saas-apps/real-links-tutorial.md), [Kianda](https://app.kianda.com/sso/OpenID/AzureAD/), [Simple Sign](../saas-apps/simple-sign-tutorial.md), [Braze](../saas-apps/braze-tutorial.md), [Displayr](../saas-apps/displayr-tutorial.md), [Templafy](../saas-apps/templafy-tutorial.md), [Marketo Sales Engage](https://toutapp.com/login), [ACLP](../saas-apps/aclp-tutorial.md), [OutSystems](../saas-apps/outsystems-tutorial.md), [Meta4 Global HR](../saas-apps/meta4-global-hr-tutorial.md), [Quantum Workplace](../saas-apps/quantum-workplace-tutorial.md), [Cobalt](../saas-apps/cobalt-tutorial.md), [webMethods API Cloud](../saas-apps/webmethods-integration-cloud-tutorial.md), [RedFlag](https://pocketstop.com/redflag/), [Whatfix](../saas-apps/whatfix-tutorial.md), [Control](../saas-apps/control-tutorial.md), [JOBHUB](../saas-apps/jobhub-tutorial.md), [NEOGOV](../saas-apps/neogov-tutorial.md), [Foodee](../saas-apps/foodee-tutorial.md), [MyVR](../saas-apps/myvr-tutorial.md)
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### Improved groups creation and management experiences in the Azure portal
-
-**Type:** New feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-We've made improvements to the groups-related experiences in the Azure portal. These improvements allow administrators to better manage group lists and member lists, and provide additional creation options.
-Improvements include:
-- Basic filtering by membership type and group type.
-- Addition of new columns, such as Source and Email address.
-- Ability to multi-select groups, members, and owner lists for easy deletion.
-- Ability to choose an email address and add owners during group creation.
-For more information, see [Create a basic group and add members using Azure Active Directory](./active-directory-groups-create-azure-portal.md).
-
-### Configure a naming policy for Office 365 groups in Azure portal (General availability)
-**Type:** Changed feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-Administrators can now configure a naming policy for Office 365 groups, using the Azure portal. This change helps to enforce consistent naming conventions for Office 365 groups created or edited by users in your organization.
-You can configure naming policy for Office 365 groups in two different ways:
-- Define prefixes or suffixes, which are automatically added to a group name.
-- Upload a customized set of blocked words for your organization, which aren't allowed in group names (for example, "CEO, Payroll, HR").
-
-For more information, see [Enforce a Naming Policy for Office 365 groups](../enterprise-users/groups-naming-policy.md).
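To make the two mechanisms concrete, here is an illustrative model only (not the service implementation) of how a naming policy combines a fixed prefix/suffix with a blocked-words check; the real check compares blocked words against the proposed name, which the `split()` below approximates:

```python
# Illustrative model of an Office 365 groups naming policy:
# reject names containing a blocked word, then decorate with prefix/suffix.
BLOCKED_WORDS = {"ceo", "payroll", "hr"}  # example blocked words from the article

def apply_naming_policy(name: str, prefix: str = "GRP-", suffix: str = "-Contoso") -> str:
    """Reject names containing a blocked word, then add the configured prefix and suffix."""
    for word in name.lower().split():
        if word in BLOCKED_WORDS:
            raise ValueError(f"Group name contains blocked word: {word!r}")
    return f"{prefix}{name}{suffix}"

print(apply_naming_policy("Marketing Leads"))  # GRP-Marketing Leads-Contoso
```

The `GRP-`/`-Contoso` decorations are placeholder values; in the portal, prefixes and suffixes can also reference user attributes such as department.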
---
-### Microsoft Graph API endpoints are now available for Azure AD activity logs (General availability)
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-We're happy to announce general availability of Microsoft Graph API endpoints support for Azure AD activity logs. With this release, you can now use Version 1.0 of both the Azure AD audit logs, as well as the sign-in logs APIs.
-
-For more information, see [Azure AD audit log API overview](/graph/api/resources/azure-ad-auditlog-overview).
---
-### Administrators can now use Conditional Access for the combined registration process (Public preview)
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-Administrators can now create Conditional Access policies for use by the combined registration page. This includes applying policies to allow registration if:
-
-- Users are on a trusted network.
-- Users are at low sign-in risk.
-- Users are on a managed device.
-- Users agree to the organization's terms of use (TOU).
-
-For more information about Conditional Access and password reset, you can see the [Conditional Access for the Azure AD combined MFA and password reset registration experience blog post](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Conditional-access-for-the-Azure-AD-combined-MFA-and-password/ba-p/566348). For more information about Conditional Access policies for the combined registration process, see [Conditional Access policies for combined registration](../authentication/howto-registration-mfa-sspr-combined.md#conditional-access-policies-for-combined-registration). For more information about the Azure AD terms of use feature, see [Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md).
---
-## April 2019
-
-### New Azure AD threat intelligence detection is now available as part of Azure AD Identity Protection
-
-**Type:** New feature
-**Service category:** Azure AD Identity Protection
-**Product capability:** Identity Security & Protection
-
-Azure AD threat intelligence detection is now available as part of the updated Azure AD Identity Protection feature. This new functionality helps to indicate unusual user activity for a specific user or activity that's consistent with known attack patterns based on Microsoft's internal and external threat intelligence sources.
-
-For more information about the refreshed version of Azure AD Identity Protection, see the [Four major Azure AD Identity Protection enhancements are now in public preview](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Four-major-Azure-AD-Identity-Protection-enhancements-are-now-in/ba-p/326935) blog and the [What is Azure Active Directory Identity Protection (refreshed)?](../identity-protection/overview-identity-protection.md) article. For more information about Azure AD threat intelligence detection, see the [Azure Active Directory Identity Protection risk detections](../identity-protection/concept-identity-protection-risks.md) article.
---
-### Azure AD entitlement management is now available (Public preview)
-
-**Type:** New feature
-**Service category:** Identity Governance
-**Product capability:** Identity Governance
-
-Azure AD entitlement management, now in public preview, helps customers to delegate management of access packages, which defines how employees and business partners can request access, who must approve, and how long they have access. Access packages can manage membership in Azure AD and Office 365 groups, role assignments in enterprise applications, and role assignments for SharePoint Online sites. Read more about entitlement management at the [overview of Azure AD entitlement management](../governance/entitlement-management-overview.md). To learn more about the breadth of Azure AD Identity Governance features, including Privileged Identity Management, access reviews and terms of use, see [What is Azure AD Identity Governance?](../governance/identity-governance-overview.md).
---
-### Configure a naming policy for Office 365 groups in Azure portal (Public preview)
-
-**Type:** New feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-Administrators can now configure a naming policy for Office 365 groups, using the Azure portal. This change helps to enforce consistent naming conventions for Office 365 groups created or edited by users in your organization.
-
-You can configure naming policy for Office 365 groups in two different ways:
-
-- Define prefixes or suffixes, which are automatically added to a group name.
-- Upload a customized set of blocked words for your organization, which are not allowed in group names (for example, "CEO, Payroll, HR").
-
-For more information, see [Enforce a Naming Policy for Office 365 groups](../enterprise-users/groups-naming-policy.md).
---
-### Azure AD Activity logs are now available in Azure Monitor (General availability)
-
-**Type:** New feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-To help address your feedback about visualizations with the Azure AD Activity logs, we're introducing a new Insights feature in Log Analytics. This feature helps you gain insights about your Azure AD resources by using our interactive templates, called Workbooks. These pre-built Workbooks can provide details for apps or users, and include:
-
-- **Sign-ins.** Provides details for apps and users, including sign-in location, the in-use operating system or browser client and version, and the number of successful or failed sign-ins.
-- **Legacy authentication and Conditional Access.** Provides details for apps and users using legacy authentication, including multifactor authentication usage triggered by Conditional Access policies, apps using Conditional Access policies, and so on.
-- **Sign-in failure analysis.** Helps you to determine if your sign-in errors are occurring due to a user action, policy issues, or your infrastructure.
-- **Custom reports.** You can create new, or edit existing Workbooks to help customize the Insights feature for your organization.
-
-For more information, see [How to use Azure Monitor workbooks for Azure Active Directory reports](../reports-monitoring/howto-use-azure-monitor-workbooks.md).
---
-### New Federated Apps available in Azure AD app gallery - April 2019
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In April 2019, we've added these 21 new apps with Federation support to the app gallery:
-
-[SAP Fiori](../saas-apps/sap-fiori-tutorial.md), [HRworks Single Sign-On](../saas-apps/hrworks-single-sign-on-tutorial.md), [Percolate](../saas-apps/percolate-tutorial.md), [MobiControl](../saas-apps/mobicontrol-tutorial.md), [Citrix NetScaler](../saas-apps/citrix-netscaler-tutorial.md), [Shibumi](../saas-apps/shibumi-tutorial.md), [Benchling](../saas-apps/benchling-tutorial.md), [MileIQ](https://mileiq.onelink.me/991934284/7e980085), [PageDNA](../saas-apps/pagedna-tutorial.md), [EduBrite LMS](../saas-apps/edubrite-lms-tutorial.md), [RStudio Connect](../saas-apps/rstudio-connect-tutorial.md), [AMMS](../saas-apps/amms-tutorial.md), [Mitel Connect](../saas-apps/mitel-connect-tutorial.md), [Alibaba Cloud (Role-based SSO)](../saas-apps/alibaba-cloud-service-role-based-sso-tutorial.md), [Certent Equity Management](../saas-apps/certent-equity-management-tutorial.md), [Sectigo Certificate Manager](../saas-apps/sectigo-certificate-manager-tutorial.md), [GreenOrbit](../saas-apps/greenorbit-tutorial.md), [Workgrid](../saas-apps/workgrid-tutorial.md), [monday.com](../saas-apps/mondaycom-tutorial.md), [SurveyMonkey Enterprise](../saas-apps/surveymonkey-enterprise-tutorial.md), [Indiggo](https://indiggolead.com/)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### New access reviews frequency option and multiple role selection
-
-**Type:** New feature
-**Service category:** Access Reviews
-**Product capability:** Identity Governance
-
-New updates in Azure AD access reviews allow you to:
-
-- Change the frequency of your access reviews to **semi-annually**, in addition to the previously existing options of weekly, monthly, quarterly, and annually.
-- Select multiple Azure AD and Azure resource roles when creating a single access review. In this situation, all roles are set up with the same settings and all reviewers are notified at the same time.
-
-For more information about how to create an access review, see [Create an access review of groups or applications in Azure AD access reviews](../governance/create-access-review.md).
---
-### Azure AD Connect email alert system(s) are transitioning, sending new email sender information for some customers
-
-**Type:** Changed feature
-**Service category:** AD Sync
-**Product capability:** Platform
-
-Azure AD Connect is in the process of transitioning our email alert system(s), potentially showing some customers a new email sender. To address this, you must add `azure-noreply@microsoft.com` to your organization's allowlist or you won't be able to continue receiving important alerts from your Office 365, Azure, or your Sync services.
---
-### UPN suffix changes are now successful between Federated domains in Azure AD Connect
-
-**Type:** Fixed
-**Service category:** AD Sync
-**Product capability:** Platform
-
-You can now successfully change a user's UPN suffix from one Federated domain to another Federated domain in Azure AD Connect. This fix means you should no longer experience the FederatedDomainChangeError error message during the synchronization cycle or receive a notification email stating, "Unable to update this object in Azure Active Directory, because the attribute [FederatedUser.UserPrincipalName], is not valid. Update the value in your local directory services".
----
-### Increased security using the app protection-based Conditional Access policy in Azure AD (Public preview)
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-App protection-based Conditional Access is now available by using the **Require app protection** policy. This new policy helps to increase your organization's security by helping to prevent:
-
-- Users gaining access to apps without a Microsoft Intune license.
-- Users being unable to get a Microsoft Intune app protection policy.
-- Users gaining access to apps without a configured Microsoft Intune app protection policy.
-
-For more information, see [How to Require app protection policy for cloud app access with Conditional Access](../conditional-access/app-protection-based-conditional-access.md).
---
-### New support for Azure AD single sign-on and Conditional Access in Microsoft Edge (Public preview)
-
-**Type:** New feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-We've enhanced our Azure AD support for Microsoft Edge, including providing new support for Azure AD single sign-on and Conditional Access. If you've previously used Microsoft Intune Managed Browser, you can now use Microsoft Edge instead.
-
-For more information about setting up and managing your devices and apps using Conditional Access, see [Require managed devices for cloud app access with Conditional Access](../conditional-access/require-managed-devices.md) and [Require approved client apps for cloud app access with Conditional Access](../conditional-access/app-based-conditional-access.md). For more information about how to manage access using Microsoft Edge with Microsoft Intune policies, see [Manage Internet access using a Microsoft Intune policy-protected browser](/intune/app-configuration-managed-browser).
---
-## March 2019
-
-### Identity Experience Framework and custom policy support in Azure Active Directory B2C is now available (GA)
-
-**Type:** New feature
-**Service category:** B2C - Consumer Identity Management
-**Product capability:** B2B/B2C
-
-You can now create custom policies in Azure AD B2C, including the following tasks, which are supported at scale and under our Azure SLA:
-
-- Create and upload custom authentication user journeys by using custom policies.
-- Describe user journeys step-by-step as exchanges between claims providers.
-- Define conditional branching in user journeys.
-- Transform and map claims for use in real-time decisions and communications.
-- Use REST API-enabled services in your custom authentication user journeys. For example, with email providers, CRMs, and proprietary authorization systems.
-- Federate with identity providers who are compliant with the OpenIDConnect protocol. For example, with multi-tenant Azure AD, social account providers, or two-factor verification providers.
-
-For more information about creating custom policies, see [Developer notes for custom policies in Azure Active Directory B2C](../../active-directory-b2c/custom-policy-developer-notes.md) and read [Alex Simon's blog post, including case studies](https://techcommunity.microsoft.com/t5/Azure-Active-Directory-Identity/Azure-AD-B2C-custom-policies-to-build-your-own-identity-journeys/ba-p/382791).
---
-### New Federated Apps available in Azure AD app gallery - March 2019
-
-**Type:** New feature
-**Service category:** Enterprise Apps
-**Product capability:** 3rd Party Integration
-
-In March 2019, we've added these 14 new apps with Federation support to the app gallery:
-
-[ISEC7 Mobile Exchange Delegate](https://www.isec7.com/english/), [MediusFlow](https://office365.cloudapp.mediusflow.com/), [ePlatform](../saas-apps/eplatform-tutorial.md), [Fulcrum](../saas-apps/fulcrum-tutorial.md), [ExcelityGlobal](../saas-apps/excelityglobal-tutorial.md), [Explanation-Based Auditing System](../saas-apps/explanation-based-auditing-system-tutorial.md), [Lean](../saas-apps/lean-tutorial.md), [Powerschool Performance Matters](../saas-apps/powerschool-performance-matters-tutorial.md), [Cinode](https://cinode.com/), [Iris Intranet](../saas-apps/iris-intranet-tutorial.md), [Empactis](../saas-apps/empactis-tutorial.md), [SmartDraw](../saas-apps/smartdraw-tutorial.md), [Confirmit Horizons](../saas-apps/confirmit-horizons-tutorial.md), [TAS](../saas-apps/tas-tutorial.md)
-
-For more information about the apps, see [SaaS application integration with Azure Active Directory](../saas-apps/tutorial-list.md). For more information about listing your application in the Azure AD app gallery, see [List your application in the Azure Active Directory application gallery](../manage-apps/v2-howto-app-gallery-listing.md).
---
-### New Zscaler and Atlassian provisioning connectors in the Azure AD gallery - March 2019
-
-**Type:** New feature
-**Service category:** App Provisioning
-**Product capability:** 3rd Party Integration
-
-Automate creating, updating, and deleting user accounts for the following apps:
-
-[Zscaler](../saas-apps/zscaler-provisioning-tutorial.md), [Zscaler Beta](../saas-apps/zscaler-beta-provisioning-tutorial.md), [Zscaler One](../saas-apps/zscaler-one-provisioning-tutorial.md), [Zscaler Two](../saas-apps/zscaler-two-provisioning-tutorial.md), [Zscaler Three](../saas-apps/zscaler-three-provisioning-tutorial.md), [Zscaler ZSCloud](../saas-apps/zscaler-zscloud-provisioning-tutorial.md), [Atlassian Cloud](../saas-apps/atlassian-cloud-provisioning-tutorial.md)
-
-For more information about how to better secure your organization through automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md).
---
-### Restore and manage your deleted Office 365 groups in the Azure portal
-
-**Type:** New feature
-**Service category:** Group Management
-**Product capability:** Collaboration
-
-You can now view and manage your deleted Office 365 groups from the Azure portal. This change helps you to see which groups are available to restore, along with letting you permanently delete any groups that aren't needed by your organization.
-
-For more information, see [Restore expired or deleted groups](../enterprise-users/groups-restore-deleted.md#view-and-manage-the-deleted-microsoft-365-groups-that-are-available-to-restore).
---
-### Single sign-on is now available for Azure AD SAML-secured on-premises apps through Application Proxy (public preview)
-
-**Type:** New feature
-**Service category:** App Proxy
-**Product capability:** Access Control
-
-You can now provide a single sign-on (SSO) experience for on-premises, SAML-authenticated apps, along with remote access to these apps through Application Proxy. For more information about how to set up SAML SSO with your on-premises apps, see [SAML single sign-on for on-premises applications with Application Proxy (Preview)](../app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md).
---
-### Client apps in request loops will be interrupted to improve reliability and user experience
-
-**Type:** New feature
-**Service category:** Authentications (Logins)
-**Product capability:** User Authentication
-
-Client apps can incorrectly issue hundreds of the same login requests over a short period of time. These requests, whether they're successful or not, all contribute to a poor user experience and heightened workloads for the IDP, increasing latency for all users and reducing the availability of the IDP.
-
-This update sends an `invalid_grant` error: `AADSTS50196: The server terminated an operation because it encountered a loop while processing a request` to client apps that issue duplicate requests multiple times over a short period of time, beyond the scope of normal operation. Client apps that encounter this issue should show an interactive prompt, requiring the user to sign in again. For more information about this change and about how to fix your app if it encounters this error, see [What's new for authentication?](../develop/reference-breaking-changes.md#looping-clients-will-be-interrupted).
---
-### New Audit Logs user experience now available
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-We've created a new Azure AD **Audit logs** page to help improve both readability and how you search for your information. To see the new **Audit logs** page, select **Audit logs** in the **Activity** section of Azure AD.
-
-![New Audit logs page, with sample info](media/whats-new/audit-logs-page.png)
-
-For more information about the new **Audit logs** page, see [Audit activity reports in the Azure portal](../reports-monitoring/concept-audit-logs.md).
---
-### New warnings and guidance to help prevent accidental administrator lockout from misconfigured Conditional Access policies
-
-**Type:** Changed feature
-**Service category:** Conditional Access
-**Product capability:** Identity Security & Protection
-
-To help prevent administrators from accidentally locking themselves out of their own tenants through misconfigured Conditional Access policies, we've created new warnings and updated guidance in the Azure portal. For more information about the new guidance, see [What are service dependencies in Azure Active Directory Conditional Access](../conditional-access/service-dependencies.md).
---
-### Improved end-user terms of use experiences on mobile devices
-
-**Type:** Changed feature
-**Service category:** Terms of use
-**Product capability:** Governance
-
-We've updated our existing terms of use experiences to help improve how you review and consent to terms of use on a mobile device. You can now zoom in and out, go back, download the information, and select hyperlinks. For more information about the updated terms of use, see [Azure Active Directory terms of use feature](../conditional-access/terms-of-use.md#what-terms-of-use-looks-like-for-users).
---
-### New Azure AD Activity logs download experience available
-
-**Type:** Changed feature
-**Service category:** Reporting
-**Product capability:** Monitoring & Reporting
-
-You can now download large amounts of activity logs directly from the Azure portal. This update lets you:
-
-- Download up to 250,000 rows.
-- Get notified after the download completes.
-- Customize your file name.
-- Determine your output format, either JSON or CSV.
-
-For more information about this feature, see [Quickstart: Download an audit report using the Azure portal](../reports-monitoring/howto-download-logs.md)
---
-### Breaking change: Updates to condition evaluation by Exchange ActiveSync (EAS)
-
-**Type:** Plan for change
-**Service category:** Conditional Access
-**Product capability:** Access Control
-
-We're in the process of updating how Exchange ActiveSync (EAS) evaluates the following conditions:
-
-- User location, based on country/region or IP address
-- Sign-in risk
-- Device platform
-
-If you've previously used these conditions in your Conditional Access policies, be aware that the condition behavior might change. For example, if you previously used the user location condition in a policy, you might find the policy now being skipped based on the location of your user.
--
active-directory Exchange Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/exchange-hybrid.md
+
+ Title: 'Exchange hybrid writeback with cloud sync'
+description: This article describes how to enable exchange hybrid writeback scenarios.
+
+documentationcenter: ''
++
+editor: ''
++
+ na
 Last updated : 06/15/2023
++++++++
+# Exchange hybrid writeback with cloud sync (Public Preview)
+
+An Exchange hybrid deployment offers organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. A hybrid deployment provides the seamless look and feel of a single Exchange organization between an on-premises Exchange organization and Exchange Online.
+
+ :::image type="content" source="media/exchange-hybrid/exchange-hybrid.png" alt-text="Conceptual image of exchange hybrid scenario." lightbox="media/exchange-hybrid/exchange-hybrid.png":::
+
+This scenario is now supported in cloud sync. Cloud sync detects the on-premises Exchange schema attributes and then writes back the Exchange Online attributes to your on-premises Active Directory environment.
+
+For more information on Exchange hybrid deployments, see [Exchange Hybrid](/exchange/exchange-hybrid).
+
+## Prerequisites
+Before deploying Exchange hybrid writeback with cloud sync, you must meet the following prerequisites.
+
+ - The [provisioning agent](what-is-provisioning-agent.md) must be version 1.1.1107.0 or later.
+ - Your on-premises Active Directory must be extended to contain the Exchange schema.
+ - To extend your schema for Exchange, see [Prepare Active Directory and domains for Exchange Server](/exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019&preserve-view=true)
+ >[!NOTE]
+ >If your schema has been extended after you have installed the provisioning agent, you will need to restart it in order to pick up the schema changes.
+
+## How to enable
+Exchange hybrid writeback is disabled by default.
+
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+ 4. Select an existing configuration.
+ 5. At the top, select **Properties**. You should see that Exchange hybrid writeback is disabled.
+ 6. Select the pencil next to **Basic**.
+ :::image type="content" source="media/exchange-hybrid/exchange-hybrid-1.png" alt-text="Screenshot of the basic properties." lightbox="media/exchange-hybrid/exchange-hybrid-1.png":::
+
+ 7. On the right, place a check in **Exchange hybrid writeback** and click **Apply**.
+ :::image type="content" source="media/exchange-hybrid/exchange-hybrid-2.png" alt-text="Screenshot of enabling Exchange writeback." lightbox="media/exchange-hybrid/exchange-hybrid-2.png":::
+
+ >[!NOTE]
+ >If the checkbox for **Exchange hybrid writeback** is disabled, it means that the schema has not been detected. Verify that the prerequisites are met and that you have restarted the provisioning agent.
+
+## Attributes synchronized
+Cloud sync writes Exchange Online attributes back to users in order to enable Exchange hybrid scenarios. The following table lists the attributes and their mappings.
+
+|Azure AD attribute|AD attribute|Object Class|Mapping Type|
+|--|--|--|--|
+|cloudAnchor|msDS-ExternalDirectoryObjectId|User, InetOrgPerson|Direct|
+|cloudLegacyExchangeDN|proxyAddresses|User, Contact, InetOrgPerson|Expression|
+|cloudMSExchArchiveStatus|msExchArchiveStatus|User, InetOrgPerson|Direct|
+|cloudMSExchBlockedSendersHash|msExchBlockedSendersHash|User, InetOrgPerson|Expression|
+|cloudMSExchSafeRecipientsHash|msExchSafeRecipientsHash|User, InetOrgPerson|Expression|
+|cloudMSExchSafeSendersHash|msExchSafeSendersHash|User, InetOrgPerson|Expression|
+|cloudMSExchUCVoiceMailSettings|msExchUCVoiceMailSettings|User, InetOrgPerson|Expression|
+|cloudMSExchUserHoldPolicies|msExchUserHoldPolicies|User, InetOrgPerson|Expression|
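To illustrate what an Expression mapping such as the proxyAddresses row above typically accomplishes, here's a hedged sketch: hybrid deployments commonly surface the cloud legacy Exchange DN as an additional `x500:` proxy address on the on-premises user. The exact expression cloud sync evaluates isn't reproduced here; the helper below is an assumption for illustration only.

```python
# Illustration only (not the documented cloud sync expression): represent a
# cloud legacyExchangeDN as an additional x500 proxy address on the
# on-premises user, without duplicating an existing entry.
def add_x500_proxy(proxy_addresses, cloud_legacy_dn):
    entry = f"x500:{cloud_legacy_dn}"
    existing = {p.lower() for p in proxy_addresses}
    if entry.lower() in existing:
        return list(proxy_addresses)
    return list(proxy_addresses) + [entry]

current = ["SMTP:megan@contoso.com"]
updated = add_x500_proxy(
    current,
    "/o=ExchangeLabs/ou=Exchange Administrative Group/cn=Recipients/cn=megan",
)
print(updated[-1].startswith("x500:"))  # → True
```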
++
+## Provisioning on-demand
+Provisioning on-demand with Exchange hybrid writeback requires two steps. First, you provision or create the user, and Exchange Online populates the necessary attributes on the user. Cloud sync can then write back these attributes to the user. The steps are:
+
+- Provision and sync the initial user - this brings the user into the cloud and allows Exchange Online attributes to be populated.
+- Write back Exchange attributes to Active Directory - this writes the Exchange Online attributes to the user on-premises.
+
+To provision on demand with Exchange hybrid writeback, use the following steps:
++
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Provision on demand**.
+ 6. Enter the distinguished name of a user and select the **Provision** button.
+ 7. A success screen appears with four green check marks.
+ :::image type="content" source="media/exchange-hybrid/exchange-hybrid-3.png" alt-text="Screenshot of the initial Exchange writeback." lightbox="media/exchange-hybrid/exchange-hybrid-3.png":::
+
+ 8. Click **Next**. On the **Writeback exchange attributes to Active Directory** tab, the synchronization starts.
+ 9. You should see the success details.
+ :::image type="content" source="media/exchange-hybrid/exchange-hybrid-4.png" alt-text="Screenshot of Exchange attributes being written back." lightbox="media/exchange-hybrid/exchange-hybrid-4.png":::
+
+ >[!NOTE]
+ >This final step may take up to 2 minutes to complete.
+
+## Exchange hybrid writeback using MS Graph
+You can use the MS Graph API to enable Exchange hybrid writeback. For more information, see [Exchange hybrid writeback with MS Graph](how-to-inbound-synch-ms-graph.md#exchange-hybrid-writeback-public-preview).
+
+## Next steps
+
+- [What is provisioning?](../what-is-provisioning.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory How To Inbound Synch Ms Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md
This process consists of the following steps:
- [Create sync job](#create-sync-job)
- [Update targeted domain](#update-targeted-domain)
- [Enable Sync password hashes on configuration blade](#enable-sync-password-hashes-on-configuration-blade)
+ - [Exchange hybrid writeback](#exchange-hybrid-writeback-public-preview)
- [Accidental deletes](#accidental-deletes)
  - [Enabling and setting the threshold](#enabling-and-setting-the-threshold)
  - [Allowing deletes](#allowing-deletes)
The output of the above command returns the objectId of the service principal th
Documentation for creating a sync job can be found [here](/graph/api/synchronization-synchronizationjob-post?tabs=http&view=graph-rest-beta&preserve-view=true).
-If you did not record the ID above, you can find the service principal by running the following MS Graph call. You'll need Directory.Read.All permissions to make that call:
+If you didn't record the ID above, you can find the service principal by running the following MS Graph call. You'll need Directory.Read.All permissions to make that call:
`GET https://graph.microsoft.com/beta/servicePrincipals`
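If the tenant has many service principals, you can also match the one you need client-side. A minimal sketch, assuming a response body shaped like Graph's standard `value` array (the IDs and display name below are made-up placeholders, not real values):

```python
# Minimal sketch: locate a service principal by displayName in the JSON body
# returned by GET /servicePrincipals. All IDs and names here are placeholders.
def find_service_principal(response_body, display_name):
    """Return the first matching service principal, or None."""
    for sp in response_body.get("value", []):
        if sp.get("displayName") == display_name:
            return sp
    return None

sample = {
    "value": [
        {"id": "00000000-0000-0000-0000-000000000001", "displayName": "Some other app"},
        {"id": "00000000-0000-0000-0000-000000000002", "displayName": "Contoso provisioning app"},
    ]
}

print(find_service_principal(sample, "Contoso provisioning app")["id"])
```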
Add the following key/value pair in the below value array based on what you're t
- Enable both PHS and sync tenant flags

  { key: "AppKey", value: "{"appKeyScenario":"AD2AADPasswordHash"}" }

-- Enable only sync tenant flag (do not turn on PHS)
+- Enable only sync tenant flag (don't turn on PHS)
  { key: "AppKey", value: "{"appKeyScenario":"AD2AADProvisioning"}" }
```
Here, the highlighted "Domain" value is the name of the on-premises Active Direc
Copy/paste the mapping from the **Create AD2AADProvisioning and AD2AADPasswordHash jobs** step above into the attributeMappings array.
- Order of elements in this array doesn't matter (the backend sorts for you). Be careful about adding this attribute mapping if the name exists already in the array (e.g. if there's already an item in attributeMappings that has the targetAttributeName CredentialData) - you may get conflict errors, or the pre-existing and new mappings may be combined together (usually not desired outcome). Backend does not dedupe for you.
+ Order of elements in this array doesn't matter (the backend sorts for you). Be careful about adding this attribute mapping if the name exists already in the array (e.g. if there's already an item in attributeMappings that has the targetAttributeName CredentialData) - you may get conflict errors, or the pre-existing and new mappings may be combined together (usually not desired outcome). Backend doesn't dedupe for you.
Remember to do this for both Users and inetOrgpersons.
Here, the highlighted "Domain" value is the name of the on-premises Active Direc
Add the Schema in the request body.
+## Exchange hybrid writeback (Public Preview)
+
+This section covers how to enable/disable and use [Exchange hybrid writeback](exchange-hybrid.md) programmatically.
+
+Enabling Exchange hybrid writeback programmatically requires two steps.
+
+ 1. Schema verification
+ 2. Create the Exchange hybrid writeback job
+
+### Schema verification
+Prior to enabling and using Exchange hybrid writeback, cloud sync needs to determine whether the on-premises Active Directory has been extended to include the Exchange schema.
+
+You can use the [directoryDefinition:discover](/graph/api/directorydefinition-discover?view=graph-rest-beta&tabs=http&preserve-view=true) API to initiate schema discovery.
+
+```
+POST https://graph.microsoft.com/beta/servicePrincipals/[SERVICE_PRINCIPAL_ID]/synchronization/jobs/[AD2AADProvisioningJobId]/schema/directories/[ADDirectoryID]/discover
+```
+The expected response is `HTTP 200 OK`.
+
+The response should look similar to the following:
+
+```
+HTTP/1.1 200 OK
+Content-type: application/json
+{
+ "objects": [
+ {
+ "name": "user",
+ "attributes": [
+ {
+ "name": "mailNickName",
+ "type": "String"
+ },
+ ...
+ ]
+ },
+ ...
+ ]
+}
+```
+
+Now check to see if the **mailNickName** attribute is present. If it is, then your schema is verified and contains the Exchange attributes. If not, review the [prerequisites](exchange-hybrid.md#prerequisites) for Exchange hybrid writeback.
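If you script this verification, the presence check can be sketched as follows (a minimal example over the JSON shape shown above; this is not an official client):

```python
# Sketch: inspect the body returned by the discover call and report whether
# the Exchange schema marker attribute (mailNickName on the user object) is
# present.
def exchange_schema_detected(discover_response):
    for obj in discover_response.get("objects", []):
        if obj.get("name") == "user":
            return any(attr.get("name") == "mailNickName"
                       for attr in obj.get("attributes", []))
    return False

sample = {
    "objects": [
        {"name": "user", "attributes": [{"name": "mailNickName", "type": "String"}]}
    ]
}

print(exchange_schema_detected(sample))  # → True
```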
+++
+### Create the Exchange hybrid writeback job
+Once you have verified the schema, you can create the job.
+
+```
+POST https://graph.microsoft.com/beta/servicePrincipals/[SERVICE_PRINCIPAL_ID]/synchronization/jobs
+Content-type: application/json
+{
+"templateId":"AAD2ADExchangeHybridWriteback"
+}
+```
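If you drive both steps from a script, the two requests can be sketched as plain request descriptions (nothing is sent over the network here; the service principal, job, and directory IDs are placeholders you must supply):

```python
# Sketch of the discover and job-creation requests from the steps above.
# The IDs passed in are placeholders.
GRAPH = "https://graph.microsoft.com/beta"

def discover_request(sp_id, job_id, directory_id):
    url = (f"{GRAPH}/servicePrincipals/{sp_id}/synchronization/jobs/"
           f"{job_id}/schema/directories/{directory_id}/discover")
    return ("POST", url, None)

def create_writeback_job_request(sp_id):
    url = f"{GRAPH}/servicePrincipals/{sp_id}/synchronization/jobs"
    return ("POST", url, {"templateId": "AAD2ADExchangeHybridWriteback"})

method, url, body = create_writeback_job_request("SERVICE_PRINCIPAL_ID")
print(method, body["templateId"])  # → POST AAD2ADExchangeHybridWriteback
```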
+++
## Accidental deletes

This section covers how to enable/disable and use [accidental deletes](how-to-accidental-deletes.md) programmatically.
Request body -
The "Enabled" setting in the example is for enabling/disabling notification emails when the job is quarantined.
-Currently, we do not support PATCH requests for secrets, so you need to add all the values in the body of the PUT request(like in the example) in order to preserve the other values.
+Currently, we don't support PATCH requests for secrets, so you need to add all the values in the body of the PUT request (like in the example) in order to preserve the other values.
The existing values for all the secrets can be retrieved by using:
Request Body:
## Start sync job
-The job can be retrieved again via the following command:
+The jobs can be retrieved again via the following command:
`GET https://graph.microsoft.com/beta/servicePrincipals/[SERVICE_PRINCIPAL_ID]/synchronization/jobs/` Documentation for retrieving jobs can be found [here](/graph/api/synchronization-synchronizationjob-list?tabs=http&view=graph-rest-beta&preserve-view=true).
-To start the job, issue this request, using the objectId of the service principal created in the first step, and the job identifier returned from the request that created the job.
+To start the jobs, issue this request, using the objectId of the service principal created in the first step, and the job identifiers returned from the request that created the job.
Documentation for how to start a job can be found [here](/graph/api/synchronization-synchronizationjob-start?tabs=http&view=graph-rest-beta&preserve-view=true).
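Following the same pattern, the start call for each job can be sketched as a small URL builder (placeholders only; see the linked documentation for the authoritative request shape):

```python
# Sketch: build the URL for starting a synchronization job, matching the
# documented POST .../synchronization/jobs/{jobId}/start pattern.
# Both arguments are placeholders you must replace with real values.
GRAPH = "https://graph.microsoft.com/beta"

def start_job_url(sp_object_id, job_id):
    return f"{GRAPH}/servicePrincipals/{sp_object_id}/synchronization/jobs/{job_id}/start"

for job_id in ["PROVISIONING-JOB-ID", "WRITEBACK-JOB-ID"]:
    print(start_job_url("SERVICE_PRINCIPAL_ID", job_id))
```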
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/what-is-cloud-sync.md
To determine if cloud sync is right for your organization, use the link below.
The following table provides a comparison between Azure AD Connect and Azure AD Connect cloud sync:
-| Feature | Azure Active Directory Connect sync| Azure Active Directory Connect cloud sync |
+| Feature | Connect sync| Cloud sync |
|:--- |:---:|:---:|
|Connect to single on-premises AD forest|● |● |
| Connect to multiple on-premises AD forests |● |● |
The following table provides a comparison between Azure AD Connect and Azure AD
| Support for group writeback|● | |
| Support for merging user attributes from multiple domains|● | |
| Azure AD Domain Services support|● | |
-| [Exchange hybrid writeback](../connect/reference-connect-sync-attributes-synchronized.md#exchange-hybrid-writeback) |● | |
+| [Exchange hybrid writeback](exchange-hybrid.md) |● |● |
| Unlimited number of objects per AD domain |● | |
| Support for up to 150,000 objects per AD domain |● |● |
| Groups with up to 50,000 members |● |● |
active-directory Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/common-scenarios.md
The following table outlines the most common and supported sync scenarios.
|High availability - latency (I need high availability)|●|N/A|●|N/A|
|Migration from connect sync to cloud sync|●|●|N/A|N/A|
|Hybrid Azure AD Join|N/A|●|N/A|N/A|
-|Exchange hybrid|N/A|●|N/A|N/A|
+|Exchange hybrid|●|●|N/A|N/A|
|User accounts in one forest / mailboxes in resource forest|N/A|●|N/A|N/A|
|Sync large domains with more than 250K objects|N/A|●|●|N/A|
|Filter directory objects based on attribute values|N/A|●|●|N/A|
active-directory Exchange Hybrid Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/exchange-hybrid-writeback.md
+
+ Title: 'Exchange hybrid writeback with sync'
+description: This article describes the Exchange hybrid writeback feature with sync clients.
+
+documentationcenter: ''
++
+editor: ''
++
+ na
+ Last updated : 06/15/2023+++++
+# Exchange hybrid writeback
+A hybrid deployment offers organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. A hybrid deployment provides the seamless look and feel of a single Exchange organization between an on-premises Exchange organization and Exchange Online.
+
+To accomplish this scenario and allow your on-premises users to take full advantage of Exchange Online, attributes from the cloud must be written back to your on-premises users. Either cloud sync or connect sync can write back the attributes.
+
+ :::image type="content" source="cloud-sync/media/exchange-hybrid/exchange-hybrid.png" alt-text="Conceptual image of exchange hybrid scenario." lightbox="cloud-sync/media/exchange-hybrid/exchange-hybrid.png":::
+
+## Cloud sync (Public Preview)
+You can enable this scenario using cloud sync by ensuring you're using the latest provisioning agent and following the documentation. For more information, see [Exchange hybrid writeback with cloud sync](cloud-sync/exchange-hybrid.md).
+
+## Connect sync
+You can enable the connect sync scenario through the installer. By selecting custom install, you can choose Exchange hybrid writeback. For more information, see [custom install for connect sync](connect/how-to-connect-install-custom.md).
++
+## Next steps
+- [Exchange hybrid writeback with cloud sync](cloud-sync/exchange-hybrid.md)
+- [Common scenarios](common-scenarios.md)
+- [Tools for synchronization](sync-tools.md)
+- [Choosing the right sync tool](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad)
+- [Prerequisites](prerequisites.md)
active-directory Configure Admin Consent Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-admin-consent-workflow.md
Previously updated : 09/02/2022 Last updated : 06/14/2023
# Configure the admin consent workflow
-In this article, you'll learn how to configure the admin consent workflow to enable users to request access to applications that require admin consent. You enable the ability to make requests by using an admin consent workflow. For more information on consenting to applications, see [Azure Active Directory consent framework](../develop/consent-framework.md).
+In this article, you learn how to configure the admin consent workflow to enable users to request access to applications that require admin consent. You enable the ability to make requests by using an admin consent workflow. For more information on consenting to applications, see [User and admin consent](user-admin-consent-overview.md).
The admin consent workflow gives admins a secure way to grant access to applications that require admin approval. When a user tries to access an application but is unable to provide consent, they can send a request for admin approval. The request is sent via email to admins who have been designated as reviewers. A reviewer takes action on the request, and the user is notified of the action.
To enable the admin consent workflow and choose reviewers:
1. Configure the following settings:
- - **Select users, groups, or roles that will be designated as reviewers for admin consent requests** - Reviewers can view, block, or deny admin consent requests, but only global administrators can approve admin consent requests. People designated as reviewers can view incoming requests in the **My Pending** tab after they have been set as reviewers. Any new reviewers won't be able to act on existing or expired admin consent requests.
+ - **Who can review admin consent requests** - Select users, groups, or roles that are designated as reviewers for admin consent requests. Reviewers can view, block, or deny admin consent requests, but only global administrators can approve admin consent requests. People designated as reviewers can view incoming requests in the **My Pending** tab after they have been set as reviewers. Any new reviewers aren't able to act on existing or expired admin consent requests.
- **Selected users will receive email notifications for requests** - Enable or disable email notifications to the reviewers when a request is made.
- - **Selected users will receive request expiration reminders** - Enable or disable reminder email notifications to the reviewers when a request is about to expire. The first about-to-expire reminder email is likely sent out in the middle of the configured "Consent request expires after (days)." For example, if consent is configured to expire in three days, the first reminder email is usually sent out on the second day, and the last expiration email is almost immediately out once the consent is expired.
 - **Selected users will receive request expiration reminders** - Enable or disable reminder email notifications to the reviewers when a request is about to expire. The first about-to-expire reminder email is likely sent out in the middle of the configured "Consent request expires after (days)." For example, if you configure the consent request to expire in three days, the first reminder email is sent out on the second day, and the last expiration email is sent out almost immediately after the consent request expires.
 - **Consent request expires after (days)** - Specify how long requests stay valid.

1. Select **Save**. It can take up to an hour for the workflow to become enabled.

> [!NOTE]
-> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer and will receive expiration reminder emails for those requests after they're removed from the reviewers list. Additionally, new reviewers will not be assigned to requests that were created before they were set as a reviewer.
+> You can add or remove reviewers for this workflow by modifying the **Who can review admin consent requests** list. A current limitation of this feature is that a reviewer retains the ability to review requests that were made while they were designated as a reviewer and will receive expiration reminder emails for those requests after they're removed from the reviewers list. Additionally, new reviewers won't be assigned to requests that were created before they were set as a reviewer.
## Configure the admin consent workflow using Microsoft Graph
active-directory Grant Admin Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-admin-consent.md
To grant tenant-wide admin consent, you need:
## Grant tenant-wide admin consent in Enterprise apps
-You can grant tenant-wide admin consent through *Enterprise applications* if the application has already been provisioned in your tenant. For example, an app could be provisioned in your tenant if at least one user has already consented to the application. For more information, see [How and why applications are added to Azure Active Directory](../develop/how-applications-are-added.md).
+You can grant tenant-wide admin consent through the **Enterprise applications** panel if the application has already been provisioned in your tenant. For example, an app could be provisioned in your tenant if at least one user has already consented to the application. For more information, see [How and why applications are added to Azure Active Directory](../develop/how-applications-are-added.md).
:::zone pivot="portal"
active-directory Myapps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/myapps-overview.md
An icon is added to the right of the address bar, which enables sign in and cust
### Permissions
-Permissions that have been granted to an application can be reviewed by looking at the permissions tab for it. To access the permissions tab, select the upper right corner of the tile that represents the application and then select **Manage your application**.
+Permissions that have been granted to an application can be reviewed by selecting the upper right corner of the tile that represents the application and then selecting **Manage your application**.
The permissions that are shown have been consented to by an administrator or have been consented to by the user. Permissions consented to by the user can be revoked by the user.
-The following image shows the `email` permission for Microsoft Graph consented to the application by the administrator of the tenant.
--

### Self-service access

Access can be granted on a tenant level, assigned to specific users, or from self-service access. Before users can self-discover applications from the My Apps portal, enable self-service application access in the Azure portal. This feature is available for applications when added using these methods:
active-directory Review Admin Consent Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/review-admin-consent-requests.md
Previously updated : 07/21/2022 Last updated : 06/14/2023
In this article, you learn how to review and take action on admin consent reques
To review and take action on admin consent requests, you need:

- An Azure account. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- A designated reviewer with the appropriate role to [review admin consent requests](grant-admin-consent.md#prerequisites).
## Review and take action on admin consent requests

To review the admin consent requests and take action:

1. Sign in to the [Azure portal](https://portal.azure.com) as one of the registered reviewers of the admin consent workflow.
-1. Select **All services** at the top of the left-hand navigation menu.
-1. In the filter search box, type and select **Azure Active Directory**.
+1. Search for and select **Azure Active Directory**.
1. From the navigation menu, select **Enterprise applications**.
1. Under **Activity**, select **Admin consent requests**.
1. Select the **My Pending** tab to view and act on the pending requests.
1. Select the application that is being requested from the list.
1. Review details about the request:
+ - To see what permissions are being requested by the application, select **Review permissions and consent**.
- To view the application details, select the **App details** tab. - To see who is requesting access and why, select the **Requested by** tab.
- - To see what permissions are being requested by the application, select **Review permissions and consent**.
 :::image type="content" source="media/configure-admin-consent-workflow/review-consent-requests.png" alt-text="Screenshot of the admin consent requests in the portal.":::
1. Evaluate the request and take the appropriate action:
 - **Approve the request**. To approve a request, grant admin consent to the application. Once a request is approved, all requestors are notified that they have been granted access. Approving a request allows all users in your tenant to access the application unless otherwise restricted with user assignment.
- - **Deny the request**. To deny a request, you must provide a justification that will be provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the application again in the future.
- - **Block the request**. To block a request, you must provide a justification that will be provided to all requestors. Once a request is blocked, all requestors are notified they've been denied access to the application. Blocking a request creates a service principal object for the application in your tenant in a disabled state. Users won't be able to request admin consent to the application in the future.
+ - **Deny the request**. To deny a request, you must provide a justification that is provided to all requestors. Once a request is denied, all requestors are notified that they have been denied access to the application. Denying a request won't prevent users from requesting admin consent to the application again in the future.
+ - **Block the request**. To block a request, you must provide a justification that is provided to all requestors. Once a request is blocked, all requestors are notified they've been denied access to the application. Blocking a request creates a service principal object for the application in your tenant in a disabled state. Users won't be able to request admin consent to the application in the future.
## Review admin consent requests using Microsoft Graph
-To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/appconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You cannot approve or deny consent requests using Microsoft Graph.
+To review the admin consent requests programmatically, use the [appConsentRequest resource type](/graph/api/resources/appconsentrequest) and [userConsentRequest resource type](/graph/api/resources/userconsentrequest) and their associated methods in Microsoft Graph. You can't approve or deny consent requests using Microsoft Graph.
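The listing itself is a single Graph call; for example, pending requests can be retrieved from the `appConsentRequests` collection (a sketch based on the resource types linked above; an access token with an appropriate permission such as `ConsentRequest.Read.All` is assumed):

```http
GET https://graph.microsoft.com/v1.0/identityGovernance/appConsent/appConsentRequests
```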
## Next steps

- [Review permissions granted to apps](manage-application-permissions.md)
active-directory Lusid Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lusid-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
-* A user account in LUSID with Admin permissions.
+* A LUSID license for SCIM (contact LUSID support).
+* A user account in your LUSID domain with the **lusid-administrator** role.
## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and LUSID](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure LUSID to support provisioning with Azure AD
-Contact LUSID support to configure LUSID to support provisioning with Azure AD.
+After generating [an access token](https://support.lusid.com/knowledgebase/article/KA-01654/), make a request to LUSID's [AddScim](https://www.lusid.com/identity/swagger/index.html) endpoint:
+
+```bash
+curl --request PUT 'https://<your-lusid-domain>.lusid.com/identity/api/identityprovider/scim' \
+--header 'Authorization: Bearer <your-API-access-token>'
+```
+
+The response will include the `baseUrl` (**Tenant URL** in Azure AD) and `apiToken` (**Secret Token** in Azure AD) to be entered into the LUSID Azure AD app later.
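The response body might look like the following (illustrative values only; the field names match the description above):

```json
{
  "baseUrl": "https://<your-lusid-domain>.lusid.com/identity/scim",
  "apiToken": "<secret-token>"
}
```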
## Step 3. Add LUSID from the Azure AD application gallery
This section guides you through the steps to configure the Azure AD provisioning
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your LUSID Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to LUSID. If the connection fails, ensure your LUSID account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your LUSID Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to LUSID.
![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
This section guides you through the steps to configure the Azure AD provisioning
|name.familyName|String||&check;
|externalId|String||&check;
-1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to LUSID**.
+1. If you'd like to synchronize Azure AD groups to LUSID, then under the **Mappings** section, select **Synchronize Azure Active Directory Groups to LUSID**.
1. Review the group attributes that are synchronized from Azure AD to LUSID in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LUSID for update operations. Select the **Save** button to commit any changes.
active-directory Wiggledesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wiggledesk-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure WiggleDesk for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to WiggleDesk.
++
+writer: twimmers
+
+ms.assetid: 6fb0b9d7-649b-404f-9627-68bfcf5a845f
++++ Last updated : 06/15/2023+++
+# Tutorial: Configure WiggleDesk for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both WiggleDesk and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [WiggleDesk](https://wiggledesk.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in WiggleDesk.
+> * Remove users in WiggleDesk when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and WiggleDesk.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to WiggleDesk (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A user account in WiggleDesk with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and WiggleDesk](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure WiggleDesk to support provisioning with Azure AD
+Contact WiggleDesk support to configure WiggleDesk to support provisioning with Azure AD.
+
+## Step 3. Add WiggleDesk from the Azure AD application gallery
+
+Add WiggleDesk from the Azure AD application gallery to start managing provisioning to WiggleDesk. If you have previously set up WiggleDesk for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to WiggleDesk
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in WiggleDesk based on user assignments in Azure AD.
+
+### To configure automatic user provisioning for WiggleDesk in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **WiggleDesk**.
+
+ ![Screenshot of the WiggleDesk link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your WiggleDesk Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to WiggleDesk. If the connection fails, ensure your WiggleDesk account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to WiggleDesk**.
+
+1. Review the user attributes that are synchronized from Azure AD to WiggleDesk in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in WiggleDesk for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the WiggleDesk API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by WiggleDesk|
+ |||||
+ |userName|String|&check;|&check;
+ |externalId|String|&check;|&check;
+ |active|Boolean||&check;
+ |emails[type eq "work"].value|String||&check;
+
+1. To configure scoping filters, refer to the instructions in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for WiggleDesk, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to WiggleDesk by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Nist Authenticator Assurance Level 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/nist-authenticator-assurance-level-2.md
The following table has authenticator types permitted for AAL2:
| Hardware protected certificate (smartcard/security key/TPM) <br> FIDO 2 security key <br> Windows Hello for Business with hardware TPM | Multi-factor crypto hardware |
| Microsoft Authenticator app (Passwordless) | Multi-factor out-of-band |
| **Additional methods** | |
-| Password <br> **AND** <br>- Microsoft Authenticator app (Push Notification) <br>- **OR** <br>- Phone (SMS) | Memorized secret <br>**AND**<br> Single-factor out-of-band |
-| Password <br> **AND** <br>- OATH hardware tokens (preview) <br>- **OR**<br>- Microsoft Authenticator app (OTP)<br>- **OR**<br>- OATH software tokens | Memorized secret <br>**AND** <br>Single-factor OTP|
+| Password <br> **AND** <br>- Microsoft Authenticator app (Push Notification) <br>- **OR** <br>- Microsoft Authenticator Lite (Push Notification) <br>- **OR** <br>- Phone (SMS) | Memorized secret <br>**AND**<br> Single-factor out-of-band |
+| Password <br> **AND** <br>- OATH hardware tokens (preview) <br>- **OR**<br>- Microsoft Authenticator app (OTP)<br>- **OR**<br>- Microsoft Authenticator Lite (OTP)<br>- **OR** <br>- OATH software tokens | Memorized secret <br>**AND** <br>Single-factor OTP|
| Password <br>**AND** <br>- Single-factor software certificate <br>- **OR**<br>- Azure AD joined with software TPM <br>- **OR**<br>- Hybrid Azure AD joined with software TPM <br>- **OR**<br>- Compliant mobile device | Memorized secret <br>**AND**<br> Single-factor crypto software |
| Password <br>**AND**<br>- Azure AD joined with hardware TPM <br>- **OR**<br>- Hybrid Azure AD joined with hardware TPM| Memorized secret <br>**AND**<br>Single-factor crypto hardware |
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
+
+ Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster by using Terraform'
+description: In this article, you learn how to quickly create a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS)
+ Last updated : 06/13/2023+
+#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
++
+# Quickstart: Create an Azure Kubernetes Service (AKS) cluster by using Terraform
+
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
+
+* Deploy an AKS cluster using Terraform. The sample code is fully encapsulated such that it automatically creates a service principal and SSH key pair (using the [AzAPI provider](/azure/developer/terraform/overview-azapi-provider)).
+* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+> * Access the configuration of the AzureRM provider to get the Azure Object ID using [azurerm_client_config](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/client_config).
+> * Create a Kubernetes cluster using [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster).
+> * Create an AzAPI resource [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource).
+> * Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action).
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+- **Kubernetes command-line tool (kubectl):** [Download kubectl](https://kubernetes.io/releases/download/).
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-tf-and-aks). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/201-k8s-cluster-with-tf-and-aks/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/providers.tf)]
+
+1. Create a file named `ssh.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/ssh.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Run [az aks list](/cli/azure/aks#az-aks-list) to display the name of the new Kubernetes cluster.
+
+ ```azurecli
+ az aks list \
+ --resource-group $resource_group_name \
+ --query "[].{\"K8s cluster name\":name}" \
+ --output table
+ ```
+
+1. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read.
+
+ ```console
+ echo "$(terraform output kube_config)" > ./azurek8s
+ ```
+
+1. Verify the previous command didn't add an ASCII EOT character.
+
+ ```console
+ cat ./azurek8s
+ ```
+
+ **Key points:**
+
+ - If you see `<< EOT` at the beginning and `EOT` at the end, remove these characters from the file. Otherwise, you could receive the following error message: `error: error loading config file "./azurek8s": yaml: line 2: mapping values are not allowed in this context`
+
+1. Set an environment variable so that kubectl picks up the correct config.
+
+ ```console
+ export KUBECONFIG=./azurek8s
+ ```
+
+1. Verify the health of the cluster.
+
+ ```console
+ kubectl get nodes
+ ```
+
+ ![Screenshot showing how the kubectl tool allows you to verify the health of your Kubernetes cluster.](./media/quick-kubernetes-deploy-terraform/kubectl-get-nodes.png)
+
+**Key points:**
+
+- When the AKS cluster was created, monitoring was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. For more information on container health monitoring, see [Monitor Azure Kubernetes Service health](/azure/azure-monitor/insights/container-insights-overview).
+- Several key values were output when you applied the Terraform execution plan. For example, the host address, AKS cluster user name, and AKS cluster password are output.
+
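If the `azurek8s` file does contain the heredoc wrapper mentioned in the verification step, the cleanup can be scripted with a small filter (a sketch using standard `sed`):

```shell
# strip_eot: drop the "<< EOT" and "EOT" lines that `terraform output`
# can wrap around a multi-line value such as the kubeconfig.
strip_eot() { sed '/^<<[[:space:]]*EOT$/d; /^EOT$/d'; }

# In practice you'd filter the real file, for example:
#   strip_eot < ./azurek8s > ./azurek8s.clean && mv ./azurek8s.clean ./azurek8s
# Inline demonstration:
printf '<< EOT\napiVersion: v1\nEOT\n' | strip_eot   # prints "apiVersion: v1"
```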
+## Deploy the application
+
+A [Kubernetes manifest file](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run.
+
+In this quickstart, you use a manifest to create all the objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis.git). This manifest includes two [Kubernetes deployments](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests):
+
+* The sample Azure Vote Python applications.
+* A Redis instance.
+
+Two [Kubernetes Services](/azure/aks/concepts-network#services) are created:
+
+* An internal service for the Redis instance.
+* An external service to access the Azure Vote application from the internet.
+
+1. Create a file named `azure-vote.yaml` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/azure-vote.yaml)]
+
+ **Key points:**
+
+ - For more information about YAML manifest files, see [Deployments and YAML manifests](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests).
+
+1. Run [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to deploy the application.
+
+ ```console
+ kubectl apply -f azure-vote.yaml
+ ```
+
+### Test the application
+
+1. When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Run [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) with the `--watch` argument to monitor progress.
+
+ ```console
+ kubectl get service azure-vote-front --watch
+ ```
+
+1. The **EXTERNAL-IP** output for the `azure-vote-front` service initially shows as *pending*. Once an IP address displays for **EXTERNAL-IP**, use `CTRL-C` to stop the `kubectl` watch process.
+
+1. To see the **Azure Vote** app in action, open a web browser to the external IP address of your service.
+
+ :::image type="content" source="media/quick-kubernetes-deploy-terraform/azure-voting-application.png" alt-text="Screenshot of Azure Vote sample application.":::
+
+## Clean up resources
+
+### Delete AKS resources
++
+### Delete service principal
+
+1. Get the service principal ID.
+
+ ```azurecli
+ sp=$(terraform output -raw sp)
+ ```
+
+1. Run [az ad sp delete](/cli/azure/ad/sp#az-ad-sp-delete) to delete the service principal.
+
+ ```azurecli
+ az ad sp delete --id $sp
+ ```
+
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn more about using AKS](/azure/aks)
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
api-center Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md
+
+ Title: Azure API Center (preview) - Key concepts
+description: Key concepts of Azure API Center. API Center enables tracking APIs in a centralized location for discovery, reuse, and governance.
+
+editor: ''
+
++ Last updated : 06/05/2023++++
+# Azure API Center (preview) - key concepts
+
+This article goes into more detail about key concepts of [Azure API Center](overview.md). API Center enables tracking APIs in a centralized location for discovery, reuse, and governance.
++
+## API
+
+A top-level logical entity in API Center that represents any real-world API. API Center supports APIs of any type, including REST, GraphQL, gRPC, SOAP, WebSocket, and Webhook.
+
+An API can be managed by any API management solution (such as Azure [API Management](../api-management/api-management-key-concepts.md) or solutions from other providers), or unmanaged.
+
+## API version
+
+APIs typically have multiple versions across lifecycle stages. In API Center, associate one or more versions with each API, aligned with specific API changes. Some versions may introduce major or breaking changes, while others add minor improvements. An API version can be at any lifecycle stage, from design to preview, production, or deprecated.
+
+Each API version may be defined with a specification file, such as an OpenAPI definition for a REST API. API Center allows any specification file formatted as text (YAML, JSON, markdown, and so on). You can upload OpenAPI, gRPC, GraphQL, AsyncAPI, WSDL, and WADL specifications, among others.
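For example, a minimal OpenAPI definition that could be uploaded for a design-stage REST API version might look like this (illustrative only; the API name is a placeholder):

```yaml
openapi: 3.0.3
info:
  title: Inventory API   # placeholder name
  version: 1.0.0
paths: {}                # no operations defined yet
```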
+
+## Environment
+
+Use API Center to maintain information about your APIs' environments. An environment represents a location where an API runtime could be deployed, typically an API management platform, API gateway, or compute service. Each environment has a type (such as production or staging) and may include information about developer portal or management interfaces.
+
+## Deployment
+
+In API Center, a deployment identifies a specific environment used for the runtime of an API version. For example, an API version could have two deployments: a deployment in a staging Azure API Management service and a deployment in a production Azure API Management service.
+
+## Metadata and metadata schema
+
+In API Center, you organize your APIs and other assets by setting values of metadata properties, which can be used for searching and filtering and to enforce governance standards. API Center provides several common built-in properties such as "API type" and "Lifecycle". An API Center owner can augment the built-in properties by defining custom properties in a metadata schema to organize their APIs, deployments, and environments according to their organization's requirements. For example, create a *Line of business* property to identify the business unit that owns an API.
+
+API Center supports properties of type array, boolean, number, object, predefined choices, and string.
+
+API Center's metadata schema is compatible with JSON and YAML schema specifications, to allow for schema validation in developer tooling and automated pipelines.
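As an illustration, a custom *Line of business* property of the predefined-choices kind could be sketched in a JSON-schema-compatible form like the following (hypothetical; the exact schema shape API Center expects may differ):

```yaml
# Hypothetical custom metadata property (JSON-schema-compatible YAML)
title: Line of business
type: string
enum:
  - Finance
  - Human Resources
  - Marketing
```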
+
+## Workspace
+
+To enable multiple teams to work independently in a single deployment, API Center provides workspaces. Similar to API Management [workspaces](../api-management/workspaces-overview.md), workspaces in API Center allow separate teams to access and manage a part of the API inventory. Access is controlled through Azure role-based access control (RBAC).
+
+## Next steps
+
+* [Set up your API center](set-up-api-center.md)
+
api-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md
+
+ Title: Azure API Center (preview) - Overview
+description: Introduction to key scenarios and capabilities of Azure API Center. API Center inventories an organization's APIs to promote API discovery, reuse, and governance at scale.
+
+editor: ''
+
++ Last updated : 06/05/2023++++
+# What is Azure API Center (preview)?
+
+API Center enables tracking all of your APIs in a centralized location for discovery, reuse, and governance. Use API Center to develop and maintain a structured and organized inventory of your organization's APIs - regardless of their type, lifecycle stage, or deployment location - along with related information such as version details, specification files, and common metadata.
++
+> [!NOTE]
+> API Center is a solution for API inventory management. If you're looking for a solution to manage, secure, and publish your organization's API backends through an API gateway, see the [Azure API Management](../api-management/api-management-key-concepts.md) service.
+
+## Benefits
+
+With API Center, stakeholders throughout your organization - including API program managers, application developers, and API developers - can discover, govern, and reuse APIs.
+
+* **API program managers**, usually IT or enterprise architects leading organizational API programs, who foster API reuse, quality, and compliance. API Center provides these users with a centralized inventory view of all APIs in the organization and information about those APIs, such as their deployments.
+* **Application developers**, including both professional developers and low-code/no-code developers, who discover and consume APIs to accelerate or enable development of applications. API Center helps these users find, understand, and get access to available APIs and reach the API developer teams who support them.
+* **API developers**, who design, develop, document, and publish APIs that meet organizational standards and comply with industry regulations. API Center helps these users reduce duplication, boost adoption, and track their APIs throughout their lifecycles.
+
+## Key capabilities
+
+In preview, create and use an API Center in the Azure portal for the following:
+
+* **API inventory management** - Register all of your organization's APIs for inclusion in a centralized inventory.
+* **Real-world API representation** - Add real-world information about each API, including versions and specification files such as OpenAPI specifications. List API deployments and associate them with runtime environments - for example, environments that represent API management solutions.
+* **Metadata properties** - Organize and filter APIs and related resources using built-in and custom metadata properties, to help with API governance and discoverability by API consumers.
+* **Workspaces** - Enable multiple teams to work independently in API Center by creating workspaces with permissions based on role-based access control.
+
+For more information about the information assets and capabilities in API Center, see [Key concepts](key-concepts.md).
+
+## Preview limitations
+
+* In preview, API Center is available in the following Azure regions:
+
+ * East US
+ * UK South
+ * Central India
+ * Australia East
+
+## Frequently asked questions
+
+### Q: Is API Center part of Azure API Management?
+
+A: API Center is a stand-alone Azure service that's complementary to Azure API Management and API management services from other providers. API Center provides a unified API inventory for all APIs in the organization, including APIs that don't run in API gateways (such as those that are still in design) and those that are managed with different API management solutions.
+
+### Q: Is my data encrypted in API Center?
+
+A: Yes, all data in API Center is encrypted at rest.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get access to the preview](https://aka.ms/apicenter/joinpreview)
++
+> [!div class="nextstepaction"]
+> [Set up your API center](set-up-api-center.md)
+
+> [!div class="nextstepaction"]
+> [Provide feedback](https://aka.ms/apicenter/preview/feedback)
api-center Set Up Api Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center.md
+
+ Title: Tutorial - Get started with Azure API Center (preview) | Microsoft Docs
+description: Follow this tutorial to set up your API center for API discovery, reuse, and governance. Register APIs, add versions and specifications, set metadata properties, and more.
+++ Last updated : 06/05/2023++++
+# Tutorial: Get started with your API center (preview)
+
+Set up your [API center](overview.md) to start an inventory of your organization's APIs. API Center enables tracking APIs in a centralized location for discovery, reuse, and governance.
+
+In this tutorial, you learn how to use the portal to:
+> [!div class="checklist"]
+> * Create an API center
+> * Define metadata properties in the schema
+> * Register one or more APIs in your API center
+> * Add a version to an API
+> * Add information about API environments and deployments
+
+For background information about the assets you can organize in API Center, see [Key concepts](key-concepts.md).
+++
+## Prerequisites
+
+* Access to the API Center preview. See [access instructions](https://aka.ms/apicenter/joinpreview):
+
+ 1. Register the **Azure API Center Preview** feature in your subscription (or subscriptions).
+ 1. Submit the access request form.
+ 1. Wait for a notification email from Microsoft that access to API Center is enabled in the requested Azure subscription.
+
+* At least a Contributor role assignment or equivalent permissions in the Azure subscription.
+
+* One or more APIs that you want to register in your API center. Here are two examples, with links to their OpenAPI specifications for download:
+
+ * [Swagger Petstore API](https://github.com/swagger-api/swagger-petstore/blob/master/src/main/resources/openapi.yaml)
+ * [Azure Demo Conference API](https://conferenceapi.azurewebsites.net?format=json)
+
+## Register the API Center provider
+
+After you've been added to the API Center preview, you need to register the **Microsoft.ApiCenter** resource provider in your subscription, using the portal or other tools. You only need to register the resource provider once. For steps, see [Register resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
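The registration can also be done from the command line. As a sketch using the Azure CLI (the portal and Azure PowerShell work equally well):

```shell
# Register the API Center resource provider in the current subscription (one-time)
az provider register --namespace Microsoft.ApiCenter

# Check the registration status; "Registered" means the provider is ready to use
az provider show --namespace Microsoft.ApiCenter --query registrationState --output tsv
```

Registration is asynchronous; re-run the second command until the state changes from `Registering` to `Registered`.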
+
+## Create an API center
+
+1. [Sign in](https://portal.azure.com) to the portal.
+
+1. In the search bar, enter *API Centers*, and then select **API Centers** in the results.
+
+1. Select **+ Create**.
+
+1. On the **Basics** tab, select or enter the following settings:
+
+ 1. Select your Azure subscription.
+
+ 1. Select an existing resource group, or select **New** to create a new one.
+
+ 1. Enter a **Name** for your API center. It must be unique in your subscription.
+
+ 1. In **Region**, select one of the [available regions](overview.md#preview-limitations) for API Center preview.
+
+1. Optionally, on the **Tags** tab, add one or more name/value pairs to help you categorize your Azure resources.
+
+1. Select **Review + create**.
+
+1. After validation completes, select **Create**.
+
+After deployment, your API center is ready to use!
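If you script your deployments, an API center can also be created with the generic Azure CLI resource command. This is a sketch only - the resource type is `Microsoft.ApiCenter/services`, but the portal is the documented path in preview, and the resource group, name, and region below are placeholders:

```shell
# Sketch: create an API center with the generic resource command
# (assumes the Microsoft.ApiCenter/services resource type; values are placeholders)
az resource create \
  --resource-group my-resource-group \
  --name my-api-center \
  --resource-type "Microsoft.ApiCenter/services" \
  --location eastus \
  --properties '{}'
```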
+
+## Define properties in the metadata schema
+
+Each API center provides a configurable metadata schema to help you organize APIs and other assets according to properties that you define. Here you define two example properties: *Line of business* and *Public-facing*; if you prefer, define other properties of your own. When you add or update APIs and other assets in your API center, you'll set values for these properties and any common built-in properties.
+
+> [!IMPORTANT]
+> Take care not to include any sensitive, confidential, or personal information in the titles (names) of metadata properties you define. These titles are visible in monitoring logs that are used by Microsoft to improve the functionality of the service. However, other metadata details and values are your protected customer data.
+
+1. In the left menu, select **Metadata > + Add property**.
+
+1. On the **Details** tab, enter information about the property.
+
+ 1. In **Title**, enter *Line of business*.
+
+ 1. Select type **Predefined choices** and enter choices such as *Marketing, Finance, IT, Sales*, and so on. Optionally enable **Allow additional choices**.
+
+1. On the **Assignments** tab, select **Required** for APIs. Select **Optional** for Deployments and Environments.
+
+1. On the **Review + Create** tab, review the settings and select **Create**.
+
+ The property is added to the **Metadata** list.
+
+1. Select **+ Add property** to add another property.
+
+1. On the **Details** tab, enter information about the property.
+
+ 1. In **Title**, enter *Public-facing*.
+
+ 1. Select type **Boolean**.
+
+1. On the **Assignments** tab, select **Required** for APIs. Select **Not applicable** for Deployments and Environments.
+
+1. On the **Review + Create** tab, review the settings and select **Create**.
+
+    The property is added to the **Metadata** list.
+
+1. Select **View schema > API** to see the properties that you added to the schema for APIs.
+
+ :::image type="content" source="media/set-up-api-center/metadata-schema.png" alt-text="Screenshot of metadata schema in the portal.":::
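Conceptually, the two custom properties you added behave like fields in a JSON schema. The exact schema that API Center generates isn't reproduced here; the following is only an illustrative sketch (property names are hypothetical), showing a required string with predefined choices and a required boolean:

```json
{
  "type": "object",
  "properties": {
    "lineOfBusiness": {
      "title": "Line of business",
      "type": "string",
      "enum": [ "Marketing", "Finance", "IT", "Sales" ]
    },
    "publicFacing": {
      "title": "Public-facing",
      "type": "boolean"
    }
  },
  "required": [ "lineOfBusiness", "publicFacing" ]
}
```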
+
+## Add APIs
+
+Now add (register) APIs in your API center. Each API registration includes an optional API specification file and built-in and custom metadata properties, including:
+
+* API name, description, and summary
+* Links to external documentation
+* Version identifier
+* Custom properties, like the *Line of business* property you defined in the previous section
+
+The following steps register two sample APIs: Swagger Petstore API and Demo Conference API (see [Prerequisites](#prerequisites)). If you prefer, register APIs of your own.
+
+1. In the portal, navigate to your API center.
+
+1. In the left menu, select **APIs** > **+ Register API**.
+
+1. In the **Register API** page, add the following information for the Swagger Petstore API. At the bottom of the page, you'll see the custom *Line of business* and *Public-facing* metadata properties that you defined in the preceding section.
+
+ |Setting|Value|Description|
+ |-|--|--|
+ |**API title**| Enter *Swagger Petstore API*.| Name you choose for the API. |
+ |**Identification**|After you enter the preceding title, API Center generates this identifier, which you can override.| Azure resource name for the API.|
+ |**API type**| Select **REST** from the dropdown.| Type of API.|
+ | **Summary** | Optionally enter a summary. | Summary description of the API. |
+ | **Description** | Optionally enter a description. | Description of the API. |
+ | **Version** | | |
+ |**Version title**| Enter a version title of your choice, such as *v1*.|Name you choose for the API version.|
+ |**Version identification**|After you enter the preceding title, API Center generates this identifier, which you can override.| Azure resource name for the version.|
+ |**Version lifecycle** | Make a selection from the dropdown, for example, **Testing** or **Production**. | Lifecycle stage of the API version. |
+ |**Specification** | Optionally upload YAML file for Swagger Petstore API. | API specification file, such as an OpenAPI specification for a REST API. |
+ |**External documentation** | Optionally add one or more links to external documentation. | Name, description, and URL of documentation for the API. |
+ |**Contact** | Optionally add information for one or more contacts. | Name, email, and URL of a contact for the API. |
+ | **Line of business** | If you added this custom property, make a selection from the dropdown, such as **Marketing**. | Custom metadata property that identifies the business unit that owns the API. |
+ | **Public-facing** | If you added this custom property, select the checkbox. | Custom metadata property that identifies whether the API is public-facing or internal only. |
+
+1. Select **Create**.
+1. Repeat the preceding three steps to register another API, such as the Demo Conference API.
+
+The APIs appear on the **APIs** page in the portal. When you've added a large number of APIs to the API center, use the search box and filters on this page to find the APIs you want.
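The **Identification** value that API Center generates from the title you enter is a normalized, lowercase slug. The exact generation rules aren't documented; as a rough illustration of the kind of normalization involved (assumptions only - you can always override the generated value):

```shell
# Illustrative only: approximate how a resource identifier could be derived from a title
title="Swagger Petstore API"
identifier=$(printf '%s' "$title" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
echo "$identifier"   # swagger-petstore-api
```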
++
+> [!TIP]
+> After registering an API, you can view or edit the API's properties. On the **APIs** page, select the API to see options to manage the API registration.
+
+## Add an API version
+
+Throughout its lifecycle, an API could have multiple versions. You can add a version to an existing API in your API center, optionally with an updated specification file.
+
+Here you add a version to one of your APIs:
+
+1. In the portal, navigate to your API center.
+
+1. In the left menu, select **APIs**, and then select an API, for example, *Demo Conference API*.
+
+1. Select **Versions** > **+ Add version**.
+
+ :::image type="content" source="media/set-up-api-center/add-version.png" alt-text="Screenshot of adding an API version in the portal.":::
+
+1. In the **Add version** page, enter or select the following information:
+
+ |Setting|Value|Description|
+ |-|--|--|
+ |**Version title**| Enter a version title of your choice, such as *v2*.|Name you choose for the API version.|
+ |**Version identification**|After you enter the preceding title, API Center generates this identifier, which you can override.| Azure resource name for the version.|
+ |**Version lifecycle** | Make a selection from the dropdown, such as **Production**. | Lifecycle stage of the API version. |
+ |**Specification** | Optionally upload an updated Demo Conference API JSON file. | API specification file, such as an OpenAPI specification for a REST API. |
+
+## Add an environment
+
+Your API center helps you keep track of your real-world API environments. For example, you might use Azure API Management or another solution to distribute, secure, and monitor some of your APIs. Or you might directly serve some APIs using a compute service or a Kubernetes cluster. You can add multiple environments to your API center, each aligned with a phase such as development, testing, staging, or production.
+
+Here you add a fictitious Azure API Management environment to your API center. If you prefer, add information about one of your existing environments. You'll configure both built-in properties and any custom metadata properties you've defined.
++
+1. In the portal, navigate to your API center.
+
+1. In the left menu, select **Environments** > **Add environment**.
+
+1. In the **Create environment** page, add the following information. At the bottom of the page, you'll see the custom *Line of business* metadata property that you defined.
+
+ |Setting|Value|Description|
+ |-|--|--|
+ |**Title**| Enter *My Testing*.| Name you choose for the environment. |
+ |**Identification**|After you enter the preceding title, API Center generates this identifier, which you can override.| Azure resource name for the environment.|
+ |**Environment type**| Optionally select **Testing** from the dropdown.| Type of environment for APIs.|
+ | **Description** | Optionally enter a description. | Description of the environment. |
+ | **Server** | | |
+ |**Type**| Optionally select **Azure API Management** from the dropdown.|Type of API management solution used.|
 | **Management portal URL** | Optionally enter a URL such as `https://admin.contoso.com`. | URL of management interface for environment. |
+ | **Onboarding** | | |
 | **Development portal URL** | Optionally enter a URL such as `https://developer.contoso.com`. | URL of interface for developer onboarding in the environment. |
+ | **Instructions** | Optionally select **Edit** and enter onboarding instructions in standard Markdown. | Instructions to onboard to APIs from the environment. |
+ | **Line of business** | If you added this custom property, optionally make a selection from the dropdown, such as **IT**. | Custom metadata property that identifies the business unit that manages APIs in the environment. |
+
+## Add a deployment
+
+API Center can also help you catalog your API deployments - the environments where specific API versions are deployed.
+
+Here you create a deployment by associating one of your API versions with the environment you created in the previous section. You'll configure both built-in properties and any custom metadata properties you've defined.
+
+1. In the portal, navigate to your API center.
+
+1. In the left menu, select **APIs** and then select an API, for example, the *Demo Conference API*.
+
+1. On the **Demo Conference API** page, select **Versions** and then select a version, such as *v1*.
+
+1. On the Version page, select **Deployments**.
+
+ :::image type="content" source="media/set-up-api-center/deployments.png" alt-text="Screenshot of API deployments in the portal.":::
+
+1. Select **+ Add deployment**.
+
+1. In the **Add deployment** page, add the following information. At the bottom of the page, you'll see the custom *Line of business* metadata property that you defined.
+
+ |Setting|Value|Description|
+ |-|--|--|
+ |**Title**| Enter *v1 Deployment*.| Name you choose for the deployment. |
+ |**Identification**|After you enter the preceding title, API Center generates this identifier, which you can override.| Azure resource name for the deployment.|
+ | **Description** | Optionally enter a description. | Description of the deployment. |
+ | **Environment** | Make a selection from the dropdown, such as *My Testing*, or optionally select **Create new**.| New or existing environment where the API version is deployed. |
+ | **Runtime URL** | Enter a base URL such as `https://api.contoso.com/conference`. | Base runtime URL for the API in the environment. |
+ | **Line of business** | If you added this custom property, optionally make a selection from the dropdown, such as **IT**. | Custom metadata property that identifies the business unit that consumes APIs from the deployment. |
+
+## Next steps
+
+In this tutorial, you learned how to use the portal to:
+> [!div class="checklist"]
+> * Create an API center
+> * Define metadata properties in the schema
+> * Register one or more APIs in your API center
+> * Add a version to an API
+> * Add information about API environments and deployments
+
+> [!div class="nextstepaction"]
+> [Learn more about API Center](key-concepts.md)
api-management Front Door Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/front-door-api-management.md
Azure Front Door is a modern application delivery network platform providing a secure, scalable content delivery network (CDN), dynamic site acceleration, and global HTTP(s) load balancing for your global web applications. When used in front of API Management, Front Door can provide TLS offloading, end-to-end TLS, load balancing, response caching of GET requests, and a web application firewall, among other capabilities. For a full list of supported features, see [What is Azure Front Door?](../frontdoor/front-door-overview.md)
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+ This article shows how to: * Set up an Azure Front Door Standard/Premium profile in front of a publicly accessible Azure API Management instance: either non-networked, or injected in a virtual network in [external mode](api-management-using-with-vnet.md).
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
This article shows how to defend your Azure API Management instance against distributed denial of service (DDoS) attacks by enabling [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md). Azure DDoS Protection provides enhanced DDoS mitigation features to defend against volumetric and protocol DDoS attacks.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing Azure DDoS protection and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to deploy [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+ [!INCLUDE [premium-dev.md](../../includes/api-management-availability-premium-dev.md)] ## Supported configurations
api-management Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
To operate properly, each self-hosted gateway needs outbound connectivity on por
| Description | Required for v1 | Required for v2 | Notes | |:|:|:|:|
-| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname.<br/><br/>Currently, API Management doesn't enable configuring a custom domain name for the v2 endpoint<sup>1</sup>. |
+| Hostname of the configuration endpoint | `<apim-service-name>.management.azure-api.net` | `<apim-service-name>.configuration.azure-api.net` | Connectivity to v2 endpoint requires DNS resolution of the default hostname. |
| Public IP address of the API Management instance | ✔️ | ✔️ | IP addresses of primary location is sufficient. | | Public IP addresses of Azure Storage [service tag](../virtual-network/service-tags-overview.md) | ✔️ | Optional<sup>2</sup> | IP addresses must correspond to primary location of API Management instance. | | Hostname of Azure Blob Storage account | ✔️ | Optional<sup>2</sup> | Account associated with instance (`<blob-storage-account-name>.blob.core.windows.net`) |
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 04/03/2023 Last updated : 06/12/2023
application-gateway Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/features.md
![Application Gateway conceptual](media/overview/figure1-720.png)
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+ Application Gateway includes the following features:
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monit
description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment. Previously updated : 05/29/2023 Last updated : 06/15/2023
To enable tracking of Windows Services data, you must upgrade CT extension and u
#### [For Arc-enabled Windows VMs](#tab/win-arc-vm) ```powershell-interactive
-ΓÇô az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
``` #### [For Arc-enabled Linux VMs](#tab/lin-arc-vm) ```powershell-interactive-- az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
``` #### Configure frequency
-The default collection frequency for Windows services is 30 minutes. To configure the frequency:
+The default collection frequency for Windows services is 30 minutes. To configure the frequency:
- Under **Edit** Settings, use the slider on the **Windows services** tab. :::image type="content" source="media/overview-monitoring-agent/frequency-slider-inline.png" alt-text="Screenshot of frequency slider." lightbox="media/overview-monitoring-agent/frequency-slider-expanded.png":::
automation Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-app-configuration Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
DNS Server must have internal and external endpoint resolution. The appliance VM
### Gateway IP
-DNS Server must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three must be able to reach the required URLs for deployment.
-
+The gateway IP should be an IP from within the subnet designated in the IP address prefix.
### Example minimum configuration for static IP deployment
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0
- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md). +
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
The appliance VM has the following requirements:
- Static IP assigned (strongly recommended), used for the `k8snodeippoolstart` in configuration command. This IP address should only be used for the appliance VM and not in-use anywhere else on the network. (If using DHCP, then the address must be reserved.) - Appliance VM IP address must be from within the IP address prefix provided during configuration creation command. - Ability to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses, container registry names, and other [required URLs](network-requirements.md#outbound-connectivity).-- If using a proxy, the proxy server configuration is provided when running the `createconfig` command, which is used to create the configuration files of the appliance VM. The proxy should allow internet access on the appliance VM to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images.
+- If using a proxy, the proxy server configuration is provided when creating the configuration files for Arc resource bridge. The proxy should allow internet access on the appliance VM to connect to [required URLs](network-requirements.md#outbound-connectivity) needed for deployment, such as the URL to download OS images. The proxy server must also be reachable from IPs within the IP prefix, including the appliance VM IP.
## Reserved appliance VM IP requirements
The reserved appliance VM IP has the following requirements:
- Internet access. - Connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy and firewall. - Static IP assigned, used for the `k8snodeippoolend` in configuration command. (If using DHCP, then the address must be reserved.)-- Ability to reach a DNS server that can resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses, container registry names, and other [required URLs](network-requirements.md#outbound-connectivity).++
+- If using a proxy, the proxy server must also be reachable from IPs within the IP prefix, including the reserved appliance VM IP.
## Control plane IP requirements
The control plane IP has the following requirements:
- Open communication with the management machine. - The control plane needs to be able to resolve the management machine and vice versa.-- Static IP address assigned; the IP should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network. If you're using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).++
+- If using a proxy, the proxy server must also be reachable from IPs within the IP prefix, including the control plane IP.
## User account and credentials
When deploying Arc resource bridge with AKS on Azure Stack HCI (AKS Hybrid), the
+
azure-arc Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
For more information on how to export, see [Import and Export data in Azure Cach
> [!IMPORTANT] > As announced in [What's new](cache-whats-new.md#upgrade-your-azure-cache-for-redis-instances-to-use-redis-version-6-by-june-30-2023), we'll retire version 4 for Azure Cache for Redis instances on June 30, 2023. Before that date, you need to upgrade any of your cache instances to version 6. >
-> For more information on the retirement of Redis 4, see [Retirements](cache-retired-features.md).
+> For more information on the retirement of Redis 4, see [Retirements](cache-retired-features.md) and [Frequently asked questions](cache-retired-features.md#redis-4-retirement-questions).
> ## Prerequisites
azure-cache-for-redis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|Region| Windows | Linux | |--| -- | -- |
-|Australia Central| 100 | Not Available |
+|Australia Central| 100 | 20 |
|Australia Central 2| 100 | Not Available | |Australia East| 100 | 40 | |Australia Southeast | 100 | 20 |
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
See the following articles for more code samples to add to your maps:
[Add map controls]: map-add-controls.md [Add a symbol layer]: map-add-pin.md [Add a bubble layer]: map-add-bubble-layer.md
-[Map style options]: https://samples.azuremaps.com/?search=style%20option&sample=map-style-options
+[Map style options]: https://samples.azuremaps.com/map/map-style-options
[Azure Maps Samples]: https://samples.azuremaps.com
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
See code examples to add functionality to your app:
[aggregate expression]: data-driven-style-expressions-web-sdk.md#aggregate-expression [Azure Maps Samples]: https://samples.azuremaps.com
-[Point Clusters in Bubble Layer]: https://samples.azuremaps.com/?search=bubble%20layer&sample=point-clusters-in-bubble-layer
-[Display clusters with a Symbol Layer]: https://samples.azuremaps.com/?search=symbol%20layer&sample=display-clusters-with-a-symbol-layer
-[Cluster weighted Heat Map]: https://samples.azuremaps.com/?search=heat%20maps&sample=cluster-weighted-heat-map
-[Display cluster area with Convex Hull]: https://samples.azuremaps.com/?search=cluster%20area&sample=display-cluster-area-with-convex-hull
-[Cluster aggregates]: https://samples.azuremaps.com/?search=clusters&sample=cluster-aggregates
+[Point Clusters in Bubble Layer]: https://samples.azuremaps.com/bubble-layer/point-clusters-in-bubble-layer
+[Display clusters with a Symbol Layer]: https://samples.azuremaps.com/symbol-layer/display-clusters-with-a-symbol-layer
+[Cluster weighted Heat Map]: https://samples.azuremaps.com/heat-map-layer/cluster-weighted-heat-map
+[Display cluster area with Convex Hull]: https://samples.azuremaps.com/spatial-math/display-cluster-area-with-convex-hull
+[Cluster aggregates]: https://samples.azuremaps.com/bubble-layer/cluster-aggregates
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
See the following articles for more code samples to add to your maps:
<! External Links > [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
-[Vector tile line layer]: https://samples.azuremaps.com/?search=Vector%20tile&sample=vector-tile-line-layer
+[Vector tile line layer]: https://samples.azuremaps.com/vector-tiles/vector-tile-line-layer
[Azure Maps Samples]: https://samples.azuremaps.com
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
+
+ Title: View and edit data with the Azure Maps QGIS plugin
+
+description: How to view and edit indoor map data using the Azure Maps QGIS plugin
++ Last updated : 06/14/2023+++++
+# Work with datasets using the QGIS plugin
+
+[QGIS] is an open-source [geographic information system (GIS)] application that supports viewing, editing, and analysis of geospatial data.
+
+The [Azure Maps QGIS plugin] is used to view and edit [datasets] in [QGIS]. It enables you to navigate floors using a custom floor picker and perform CRUD operations on multiple features simultaneously. All QGIS functionality, such as copying, rotating, resizing, and flipping features, can be used for advanced editing. The plugin also supports error handling for data editing. Logs created by the plugin are useful for understanding the APIs and debugging errors.
+
+## Prerequisites
+
+- Understanding of [Creator concepts].
+- An Azure Maps Creator [dataset]. If you have never used Azure Maps Creator to create an indoor map, you might find the [Use Creator to create indoor maps] tutorial helpful.
+- A basic working knowledge of [QGIS].
+
+## Get started
+
+This section provides information on how to install QGIS and the [Azure Maps QGIS plugin], then how to open and view a dataset.
+
+### Install QGIS
+
+If you don't already have QGIS installed, see [Download QGIS]. You can use the latest version; however, we recommend using the most stable version, which you can find on the same page by selecting **Looking for the most stable version?**
+
+![A screenshot showing the QGIS download page with the Looking for the most stable version link outlined in red.](./media/creator-indoor-maps/qgis/stable-version.png)
+
+### Install the Azure Maps QGIS plugin
+
+To install the Azure Maps QGIS plugin:
+
+1. Select **Manage and Install Plugins** from the **Plugins** menu to open the **Plugin Manager**.
+
+1. In the dialog that opens, select the **Azure Maps** plugin, then select **Install Plugin**:
+
+![A screenshot showing the QGIS install plugin.](./media/creator-indoor-maps/qgis/install-plugin.png)
+
+For detailed instructions on installing a plugin in QGIS, see [Installing New Plugins] in the QGIS Documentation.
+
+Once the plugin is installed, the Azure Maps symbol appears on the plugins toolbar.
++
+## Working with datasets in the QGIS plugin
+
+Your Azure Maps dataset contains the data describing your indoor map. A dataset consists of layers that define a building. Each layer contains entries called features; each feature is a row in the dataset. A feature usually has a geometry associated with it, along with a set of properties that describe it.
+
+A `featureClass` is a collection of similar features. For example, a building has a facility `featureClass` containing facility features, and a levels `featureClass` that defines the levels of the building; each level is a feature with its own set of properties that describe that level. Another `featureClass` could be furniture, with each individual piece of furniture described as a feature with its own unique set of properties.
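The feature model above can be sketched as plain GeoJSON-style objects. The feature names and property keys below are illustrative only, not the exact schema Creator uses; the point is that each feature pairs a geometry with descriptive properties, and a feature class is simply a collection of such features.

```javascript
// A hypothetical room feature: a geometry plus a set of descriptive properties.
const roomFeature = {
  type: "Feature",
  geometry: {
    type: "Polygon",
    coordinates: [[[0, 0], [10, 0], [10, 8], [0, 8], [0, 0]]]
  },
  properties: {
    name: "Conference Room A", // illustrative property names
    levelId: "level-1",        // ties the feature to a level feature
    categoryName: "room"
  }
};

// A feature class groups similar features together.
const unitFeatureClass = {
  type: "FeatureCollection",
  features: [roomFeature]
};

console.log(unitFeatureClass.features.length); // 1
console.log(roomFeature.geometry.type);        // Polygon
```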
+
+### Open dataset
+
+The following steps describe how to open your dataset in QGIS using the Azure Maps QGIS plugin.
+
+1. Select the **Azure Maps symbol** on the QGIS toolbar to open the **Azure Maps plugin dialog box**.
+
+ :::image type="content" source="./media/creator-indoor-maps/qgis/azure-maps-symbol.png" alt-text="A screenshot showing the toolbar in QGIS with the Azure Maps button highlighted.":::
+
+1. Select your location, either the United States or Europe, from the **Geography** drop-down list.
+1. Enter your [subscription key].
+1. To get a list of all the dataset IDs associated with your Azure Maps account, select the **List Datasets** button.
+1. Select the desired `datasetId` from the **DatasetId** drop-down list.
+1. (Optional) Change the location where your logs are saved if you don't want them saved to the default location.
+
+ :::image type="content" source="./media/creator-indoor-maps/qgis/azure-maps-plugin-dialog-box.png" alt-text="A screenshot showing the Azure Maps plugin dialog box.":::
+
+1. Select the **Get Features** button to load your indoor map data into QGIS. Once loaded, your map appears in the **Map canvas**.
+
+ :::image type="content" source="./media/creator-indoor-maps/qgis/map-qgis-full-screen.png" alt-text="A screenshot showing the QGIS product with the indoor map." lightbox="./media/creator-indoor-maps/qgis/map-qgis-full-screen.png":::
+
+### View dataset
+
+Once the dataset has been loaded, you can view the different feature classes it contains in the **Layers** panel. The ordering of the layers determines how features are shown on the map; layers higher in the list are displayed on top.
+
+Some layers have a drop-down containing multiple layers within them, followed by the geometry of the layer, as the following image shows:
++
+This happens when the [layer definition] shows that the layer can hold features of different geometries. Since QGIS only supports one geometry per layer, the plugin splits these layers by their possible geometries.
+
+> [!NOTE]
+> The `geometryCollection` geometry type is not supported by QGIS.
+
+You can navigate to a different floor by using the **Level** drop-down list in the plugins toolbar, located next to the Azure Maps plugin symbol, as shown in the following image:
+
+![A screenshot showing the level selection drop-down as it appears on the plugin toolbar.](./media/creator-indoor-maps/qgis/level-dropdown-closed.png)
+
+## Edit dataset
+
+You can add, edit and delete the features of your dataset using QGIS.
+
+> [!TIP]
+> You will be using the digitizing toolbar when editing the features of your dataset in QGIS. For more information, see [Digitizing an existing layer].
+
+### Add features
+
+Dataset additions involve adding features to a layer.
+
+1. In the **Layers** panel, select the layer that you want to add the new feature to.
+
+1. Toggle edit mode to `on` in the digitizing toolbar. To view the digitizing toolbar, navigate to **View > Toolbar > Digitizing Toolbar**.
+
+    :::image type="content" source="./media/creator-indoor-maps/qgis/digitizing-toolbar-toggle-editing-mode.png" alt-text="A screenshot showing editing mode on the digitizing toolbar.":::
+
+1. Select any add feature options from the digitizing toolbar and make the desired changes.
+
+1. Select the save button in the digitizing toolbar to save changes.
+
+    :::image type="content" source="./media/creator-indoor-maps/qgis/digitizing-toolbar-save-changes.png" alt-text="A screenshot showing the save changes button on the digitizing toolbar.":::
+
+### Edit features
+
+Dataset edits involve editing feature geometries and properties.
+
+#### Edit a feature geometry
+
+1. In the **Layers** panel, select the layer containing the feature you want to edit.
+
+1. Toggle edit mode to `on` in the digitizing toolbar.
+
+1. Select the **Vertex tool** from the digitizing toolbar.
+
+    :::image type="content" source="./media/creator-indoor-maps/qgis/vertex-tool.png" alt-text="A screenshot showing the Vertex Tool button on the digitizing toolbar.":::
+
+1. Once you're done with your changes, select the save button in the digitizing toolbar.
+
+#### Edit a feature property
+
+To edit a feature property using the attribute table:
+
+1. Open the attribute table for the layer containing the feature you want to edit.
+
+ ![A screenshot showing the attribute table.](./media/creator-indoor-maps/qgis/attribute-table.png)
+
+ > [!NOTE]
+ > The attribute table shows each feature, with their properties, in a tabular form. It can be accessed by right-clicking on any layer in the **Layers** panel then selecting **Open Attribute Table**.
+
+1. Toggle edit mode on.
+
+1. Edit the desired property.
+
+1. Select the save button to save changes.
+
+### Delete feature
+
+1. Select the feature you want to delete.
+
+1. Select the delete feature option from the digitizing toolbar.
+
+    :::image type="content" source="./media/creator-indoor-maps/qgis/digitizing-toolbar-delete.png" alt-text="A screenshot showing the delete feature option in the digitizing toolbar.":::
+
+1. Select the save button in the digitizing toolbar to save changes.
+
+## Advanced editing
+
+To learn more about advanced editing features offered in QGIS, such as moving, scaling, copying, and rotating features, see [Advanced digitizing] in the QGIS Documentation.
+
+## Logs
+
+The Azure Maps QGIS plugin logs information related to the requests made to Azure Maps. You can set the location of the log file in the Azure Maps plugin dialog box. By default, log files are stored in the folder containing your downloaded plugin.
+
+![A screenshot of the Azure Maps QGIS plugin dialog box with the logs section highlighted.](./media/creator-indoor-maps/qgis/plugin-dialog-logs.png)
+
+You can view your log files in two ways:
+
+1. **QGIS**. You can view the logs in QGIS by activating the **Log Messages Panel**:
+
+    :::image type="content" source="./media/creator-indoor-maps/qgis/logs-message-panel.png" alt-text="A screenshot of the Log Messages Panel.":::
+
+Logs contain:
+
+- Information about server requests and responses.
+- Errors received from the server or QGIS.
+- Statistics about the number of features loaded.
+
+### Error logs for edits
+
+Error logs for edits are also stored in a separate folder called "AzureMaps_ErrorLogs". They contain more detailed information about the request made, including headers and body, and the response received from the server.
+
+### Python Logs
+
+Any errors received from the QGIS framework are displayed in the **Python Logs** tab.
+
+## Additional information
+
+If you have questions related to Azure Maps, see [MICROSOFT Q&A]. Be sure to tag your questions with "Azure Maps".
+
+[Creator concepts]: creator-indoor-maps.md
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[QGIS]: https://qgis.org/en/site/
+[geographic information system (GIS)]: https://www.usgs.gov/faqs/what-geographic-information-system-gis
+[datasets]: creator-indoor-maps.md#datasets
+[dataset]: creator-indoor-maps.md#datasets
+[Download QGIS]: https://qgis.org/en/site/forusers/download.html
+[layer definition]: /rest/api/maps/2023-03-01-preview/features/get-collection-definition?tabs=HTTP
+[Advanced digitizing]: https://docs.qgis.org/3.28/en/docs/user_manual/working_with_vector/editing_geometry_attributes.html#advanced-digitizing
+[Azure Maps QGIS Plugin]: https://plugins.qgis.org/plugins/AzureMapsCreator/
+[Installing New Plugins]: https://docs.qgis.org/3.28/en/docs/training_manual/qgis_plugins/fetching_plugins.html#basic-fa-installing-new-plugins
+[Digitizing an existing layer]: https://docs.qgis.org/3.28/en/docs/user_manual/working_with_vector/editing_geometry_attributes.html?highlight=digitizing%20toolbar#digitizing-an-existing-layer
+[MICROSOFT Q&A]: /answers/questions/ask
+[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Check out more code samples:
> [Code sample page](https://aka.ms/AzureMapsSamples) [Azure Maps Samples]:https://samples.azuremaps.com
-[Drawing tool events]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=drawing-tools-events
-[Select data in drawn polygon area]:https://samples.azuremaps.com/?search=Drawing%20tool&sample=select-data-in-drawn-polygon-area
-[Draw and search polygon area]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=draw-and-search-polygon-area
-[Create a measuring tool]: https://samples.azuremaps.com/?search=Drawing%20tool&sample=create-a-measuring-tool
+[Drawing tool events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
+[Select data in drawn polygon area]: https://samples.azuremaps.com/drawing-tools-module/select-data-in-drawn-polygon-area
+[Draw and search polygon area]: https://samples.azuremaps.com/drawing-tools-module/draw-and-search-polygon-area
+[Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
See the following articles for more code samples where image templates can be us
> [Add HTML Markers](map-add-bubble-layer.md) [Azure Maps Samples]: https://samples.azuremaps.com
-[Symbol layer with built-in icon template]: https://samples.azuremaps.com/?search=symbol%20layer&sample=symbol-layer-with-built-in-icon-template
-[Line layer with built-in icon template]: https://samples.azuremaps.com/?search=template&sample=line-layer-with-built-in-icon-template
-[Fill polygon with built-in icon template]: https://samples.azuremaps.com/?search=template&sample=fill-polygon-with-built-in-icon-template
-[HTML Marker with built-in icon template]: https://samples.azuremaps.com/?search=template&sample=html-marker-with-built-in-icon-template
-[Add custom icon template to atlas namespace]: https://samples.azuremaps.com/?search=template&sample=add-custom-icon-template-to-atlas-namespace
+[Symbol layer with built-in icon template]: https://samples.azuremaps.com/symbol-layer/symbol-layer-with-built-in-icon-template
+[Line layer with built-in icon template]: https://samples.azuremaps.com/line-layer/line-layer-with-built-in-icon-template
+[Fill polygon with built-in icon template]: https://samples.azuremaps.com/polygons/fill-polygon-with-built-in-icon-template
+[HTML Marker with built-in icon template]: https://samples.azuremaps.com/html-markers/html-marker-with-built-in-icon-template
+[Add custom icon template to atlas namespace]: https://samples.azuremaps.com/map/add-custom-icon-template-to-atlas-namespace
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Take a look at these useful accessibility tools:
> [!div class="nextstepaction"] > [No Coffee Vision Simulator](https://uxpro.cc/toolbox/nocoffee/)
-[Accessible popups]: https://samples.azuremaps.com/?search=keyboard&sample=accessible-popups
+[Accessible popups]: https://samples.azuremaps.com/popups/accessible-popups
[Accessibility Conformance Reports]: https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/ [Accessible Rich Internet Applications (ARIA)]: https://www.w3.org/WAI/standards-guidelines/aria/
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[Bubble Layer Options]: https://samples.azuremaps.com/?search=bubble&sample=bubble-layer-options
+[Bubble Layer Options]: https://samples.azuremaps.com/bubble-layer/bubble-layer-options
[bubble layer]: /javascript/api/azure-maps-control/atlas.layer.bubblelayer
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
See the following articles for full code:
[PitchControl]: /javascript/api/azure-maps-control/atlas.control.pitchcontrol [CompassControl]: /javascript/api/azure-maps-control/atlas.control.compasscontrol [StyleControl]: /javascript/api/azure-maps-control/atlas.control.stylecontrol
-[Navigation Control Options]: https://samples.azuremaps.com/?search=Map%20Navigation%20Control%20Options&sample=map-navigation-control-options
+[Navigation Control Options]: https://samples.azuremaps.com/controls/map-navigation-control-options
[choose a map style]: choose-map-style.md [Add a pin]: map-add-pin.md [Add a popup]: map-add-popup.md
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
For more code examples to add to your maps, see the following articles:
> [!div class="nextstepaction"] > [Add a bubble layer]
-[Simple HTML Marker]: https://samples.azuremaps.com/?search=HTML%20marker&sample=simple-html-marker
+[Simple HTML Marker]: https://samples.azuremaps.com/html-markers/simple-html-marker
[Azure Maps Samples]: https://samples.azuremaps.com/
-[HTML Marker with Custom SVG Template]: https://samples.azuremaps.com/?search=HTML%20marker&sample=html-marker-with-custom-svg-template
+[HTML Marker with Custom SVG Template]: https://samples.azuremaps.com/html-markers/html-marker-with-custom-svg-template
[How to use image templates]: how-to-use-image-templates-web-sdk.md
-[CSS Styled HTML Marker]: https://samples.azuremaps.com/?search=HTML%20marker&sample=css-styled-html-marker
-[Draggable HTML Marker]: https://samples.azuremaps.com/?search=HTML%20marker&sample=draggable-html-marker
-[HTML Marker events]: https://samples.azuremaps.com/?search=HTML%20marker&sample=html-marker-events
+[CSS Styled HTML Marker]: https://samples.azuremaps.com/html-markers/css-styled-html-marker
+[Draggable HTML Marker]: https://samples.azuremaps.com/html-markers/draggable-html-marker
+[HTML Marker events]: https://samples.azuremaps.com/html-markers/html-marker-events
[HtmlMarker]: /javascript/api/azure-maps-control/atlas.htmlmarker [HtmlMarkerOptions]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
Learn more about the classes and methods used in this article:
> [Drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager) [Azure Maps Samples]: https://samples.azuremaps.com
-[Add drawing toolbar to map]: https://samples.azuremaps.com/?search=add%20drawing%20toolbar&sample=add-drawing-toolbar-to-map
-[Change drawing rendering style]: https://samples.azuremaps.com/?search=render&sample=change-drawing-rendering-style
+[Add drawing toolbar to map]: https://samples.azuremaps.com/drawing-tools-module/add-drawing-toolbar-to-map
+[Change drawing rendering style]: https://samples.azuremaps.com/drawing-tools-module/change-drawing-rendering-style
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
For more code examples to add to your maps, see the following articles:
> [!div class="nextstepaction"] > [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)
-[Simple Heat Map Layer]: https://samples.azuremaps.com/?search=heat%20map%20layer&sample=simple-heat-map-layer
-[Heat Map Layer Options]: https://samples.azuremaps.com/?search=heat%20map%20layer&sample=heat-map-layer-options
-[Consistent zoomable Heat Map]: https://samples.azuremaps.com/?search=zoom&sample=consistent-zoomable-heat-map
+[Simple Heat Map Layer]: https://samples.azuremaps.com/heat-map-layer/simple-heat-map-layer
+[Heat Map Layer Options]: https://samples.azuremaps.com/heat-map-layer/heat-map-layer-options
+[Consistent zoomable Heat Map]: https://samples.azuremaps.com/heat-map-layer/consistent-zoomable-heat-map
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Add a tile layer](./map-add-tile-layer.md)
-[Simple Image Layer]: https://samples.azuremaps.com/?search=image%20layer&sample=simple-image-layer
+[Simple Image Layer]: https://samples.azuremaps.com/image-layer/simple-image-layer
[Azure Maps Samples]: https://samples.azuremaps.com
-[KML Ground Overlay as Image Layer]: https://samples.azuremaps.com/?search=KML&sample=kml-ground-overlay-as-image-layer
-[Image Layer Options]: https://samples.azuremaps.com/?search=image%20layer&sample=image-layer-options
+[KML Ground Overlay as Image Layer]: https://samples.azuremaps.com/image-layer/kml-ground-overlay-as-image-layer
+[Image Layer Options]: https://samples.azuremaps.com/image-layer/image-layer-options
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Add a polygon layer](map-add-shape.md)
-[Line with Stroke Gradient]: https://samples.azuremaps.com/?search=line&sample=line-with-stroke-gradient
+[Line with Stroke Gradient]: https://samples.azuremaps.com/line-layer/line-with-stroke-gradient
[Azure Maps Samples]: https://samples.azuremaps.com
-[Line Layer Options]: https://samples.azuremaps.com/?search=line&sample=line-layer-options
+[Line Layer Options]: https://samples.azuremaps.com/line-layer/line-layer-options
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
Title: Add a Symbol layer to a map | Microsoft Azure Maps description: Learn how to add customized symbols, such as text or icons, to maps. See how to use data sources and symbol layers in the Azure Maps Web SDK for this purpose.-- Previously updated : 07/29/2019-++ Last updated : 06/14/2023+ - # Add a symbol layer to a map
The maps image sprite manager loads custom images used by the symbol layer. It s
## Add a symbol layer
-Before you can add a symbol layer to the map, you need to take a couple of steps. First, create a data source, and add it to the map. Create a symbol layer. Then, pass in the data source to the symbol layer, to retrieve the data from the data source. Finally, add data into the data source, so that there's something to be rendered.
+Before you can add a symbol layer to the map, you need to take a couple of steps. First, create a data source and add it to the map. Next, create a symbol layer and pass in the data source so the layer can retrieve data from it. Finally, add data to the data source so that there's something to render.
-The code below demonstrates what should be added to the map after it has loaded. This sample renders a single point on the map using a symbol layer.
+The code below demonstrates what should be added to the map after it has loaded. This sample renders a single point on the map using a symbol layer.
```javascript //Create a data source and add it to the map.
There are four different types of point data that can be added to the map:
- GeoJSON Point geometry - This object only contains a coordinate of a point and nothing else. The `atlas.data.Point` helper class can be used to easily create these objects. - GeoJSON MultiPoint geometry - This object contains the coordinates of multiple points and nothing else. The `atlas.data.MultiPoint` helper class can be used to easily create these objects. - GeoJSON Feature - This object consists of any GeoJSON geometry and a set of properties that contain metadata associated to the geometry. The `atlas.data.Feature` helper class can be used to easily create these objects.-- `atlas.Shape` class is similar to the GeoJSON feature. Both consist of a GeoJSON geometry and a set of properties that contain metadata associated to the geometry. If a GeoJSON object is added to a data source, it can easily be rendered in a layer. However, if the coordinates property of that GeoJSON object is updated, the data source and map don't change. That's because there's no mechanism in the JSON object to trigger an update. The shape class provides functions for updating the data it contains. When a change is made, the data source and map are automatically notified and updated.
+- `atlas.Shape` class is similar to the GeoJSON feature. Both consist of a GeoJSON geometry and a set of properties that contain metadata associated to the geometry. If a GeoJSON object is added to a data source, it can easily be rendered in a layer. However, if the coordinates property of that GeoJSON object is updated, the data source and map don't change. That's because there's no mechanism in the JSON object to trigger an update. The shape class provides functions for updating the data it contains. When a change is made, the data source and map are automatically notified and updated.
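The first three point data types in the list above are ordinary GeoJSON objects, so they can be sketched as plain objects without the SDK. The `atlas.data.*` helper classes produce objects with this same shape; the coordinates below are illustrative only.

```javascript
// GeoJSON Point geometry: a single coordinate, nothing else.
const point = {
  type: "Point",
  coordinates: [-122.33, 47.6]
};

// GeoJSON MultiPoint geometry: coordinates of multiple points.
const multiPoint = {
  type: "MultiPoint",
  coordinates: [[-122.33, 47.6], [-122.35, 47.62]]
};

// GeoJSON Feature: any geometry plus a set of metadata properties.
const feature = {
  type: "Feature",
  geometry: point,
  properties: { name: "Downtown sensor" } // illustrative metadata
};

console.log(feature.geometry.coordinates[0]); // -122.33
console.log(multiPoint.coordinates.length);   // 2
```

Updating `point.coordinates` on a plain object like this doesn't notify a data source of the change, which is exactly the gap the `atlas.Shape` class fills.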
The following code sample creates a GeoJSON Point geometry and passes it into the `atlas.Shape` class to make it easy to update. The center of the map is initially used to render a symbol. A click event is added to the map such that when it fires, the coordinates of the mouse are used with the shapes `setCoordinates` function. The mouse coordinates are recorded at the time of the click event. Then, the `setCoordinates` updates the location of the symbol on the map.
-<br/>
+```javascript
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-122.33, 47.64],
+ zoom: 13,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ }
+ });
+
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+
+ /*Create a data source and add it to the map*/
+ var dataSource = new atlas.source.DataSource();
+ map.sources.add(dataSource);
+ var point = new atlas.Shape(new atlas.data.Point([-122.33, 47.64]));
+ //Add the symbol to the data source.
+ dataSource.add([point]);
+
+ /* Gets co-ordinates of clicked location*/
+ map.events.add('click', function(e){
+ /* Update the position of the point feature to where the user clicked on the map. */
+ point.setCoordinates(e.position);
+ });
+
+ //Create a symbol layer using the data source and add it to the map
+ map.layers.add(new atlas.layer.SymbolLayer(dataSource, null));
+ });
+}
+
+```
+
+<!-
<iframe height='500' scrolling='no' title='Switch pin location' src='//codepen.io/azuremaps/embed/ZqJjRP/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZqJjRP/'>Switch pin location</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+->
> [!TIP] > By default, symbol layers optimize the rendering of symbols by hiding symbols that overlap. As you zoom in, the hidden symbols become visible. To disable this feature and render all symbols at all times, set the `allowOverlap` property of the `iconOptions` options to `true`. ## Add a custom icon to a symbol layer
-Symbol layers are rendered using WebGL. As such all resources, such as icon images, must be loaded into the WebGL context. This sample shows how to add a custom icon to the map resources. This icon is then used to render point data with a custom symbol on the map. The `textField` property of the symbol layer requires an expression to be specified. In this case, we want to render the temperature property. Since temperature is a number, it needs to be converted to a string. Additionally we want to append "°F" to it. An expression can be used to do this concatenation; `['concat', ['to-string', ['get', 'temperature']], '°F']`.
+Symbol layers are rendered using WebGL. As such, all resources, such as icon images, must be loaded into the WebGL context. This sample shows how to add a custom icon to the map resources. This icon is then used to render point data with a custom symbol on the map. The `textField` property of the symbol layer requires an expression to be specified. In this case, we want to render the temperature property. Since temperature is a number, it needs to be converted to a string. Additionally, we want to append "°F" to it. An expression can be used to do this concatenation: `['concat', ['to-string', ['get', 'temperature']], '°F']`.
-<br/>
+```javascript
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-73.985708, 40.75773],
+ zoom: 12,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ }
+ });
+
+ map.events.add('ready', function () {
+
+ //Load the custom image icon into the map resources.
+ map.imageSprite.add('my-custom-icon', 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1717245/showers.png').then(function () {
+
+ //Create a data source and add it to the map.
+ var datasource = new atlas.source.DataSource();
+ map.sources.add(datasource);
+
+ //Create a point feature and add it to the data source.
+ datasource.add(new atlas.data.Feature(new atlas.data.Point([-73.985708, 40.75773]), {
+ temperature: 64
+ }));
+
+ //Add a layer for rendering point data as symbols.
+ map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
+ iconOptions: {
+ //Pass in the id of the custom icon that was loaded into the map resources.
+ image: 'my-custom-icon',
+
+ //Optionally scale the size of the icon.
+ size: 0.5
+ },
+ textOptions: {
+                    //Convert the temperature property of each feature into a string and concatenate "°F".
+                    textField: ['concat', ['to-string', ['get', 'temperature']], '°F'],
+
+ //Offset the text so that it appears on top of the icon.
+ offset: [0, -2]
+ }
+ }));
+ });
+ });
+}
+```
+
+<!-
<iframe height='500' scrolling='no' title='Custom Symbol Image Icon' src='//codepen.io/azuremaps/embed/WYWRWZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WYWRWZ/'>Custom Symbol Image Icon</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
> [!TIP] > The Azure Maps web SDK provides several customizable image templates you can use with the symbol layer. For more information, see the [How to use image templates](how-to-use-image-templates-web-sdk.md) document.
-## Customize a symbol layer
+## Customize a symbol layer
-The symbol layer has many styling options available. Here is a tool to test out these various styling options.
+The symbol layer has many styling options available. The [Symbol Layer Options] sample demonstrates how the different options of the symbol layer affect rendering.
-<br/>
+<!--
<iframe height='700' scrolling='no' title='Symbol Layer Options' src='//codepen.io/azuremaps/embed/PxVXje/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/PxVXje/'>Symbol Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
> [!TIP] > When you want to render only text with a symbol layer, you can hide the icon by setting the `image` property of the icon options to `'none'`.
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Add HTML Markers](map-add-bubble-layer.md)+
+[Symbol Layer Options]: https://samples.azuremaps.com/?search=symbol%20layer&sample=symbol-layer-options
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
See the following great articles for full code samples:
> [!div class="nextstepaction"] > [Add a polygon layer](map-add-shape.md)
-[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?search=popup&sample=reusing-popup-with-multiple-pins
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/popups/reusing-popup-with-multiple-pins
[Azure Maps Samples]: https://samples.azuremaps.com
-[Customize a popup]: https://samples.azuremaps.com/?search=popup&sample=customize-a-popup
-[Reuse a popup template]: https://samples.azuremaps.com/?search=Reuse&sample=reuse-a-popup-template
-[Popup events]: https://samples.azuremaps.com/?search=Popup%20events&sample=popup-events
+[Customize a popup]: https://samples.azuremaps.com/popups/customize-a-popup
+[Reuse a popup template]: https://samples.azuremaps.com/popups/reuse-a-popup-template
+[Popup events]: https://samples.azuremaps.com/popups/popup-events
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
Title: Add a polygon layer to a map | Microsoft Azure Maps description: Learn how to add polygons or circles to maps. See how to use the Azure Maps Web SDK to customize geometric shapes and make them easy to update and maintain.-- Previously updated : 07/29/2019-++ Last updated : 06/07/2023+ - # Add a polygon layer to the map This article shows you how to render the areas of `Polygon` and `MultiPolygon` feature geometries on the map using a polygon layer. The Azure Maps Web SDK also supports the creation of Circle geometries as defined in the [extended GeoJSON schema](extend-geojson.md#circle). These circles are transformed into polygons when rendered on the map. All feature geometries can easily be updated when wrapped with the [atlas.Shape](/javascript/api/azure-maps-control/atlas.shape) class.
-## Use a polygon layer
+## Use a polygon layer
When a polygon layer is connected to a data source and loaded on the map, it renders the area with `Polygon` and `MultiPolygon` features. To create a polygon, add it to a data source, and render it with a polygon layer using the [PolygonLayer](/javascript/api/azure-maps-control/atlas.layer.polygonlayer) class.
+The following sample code demonstrates creating a polygon layer that covers New York City's Central Park with a red polygon.
+ ```javascript
-//Create a data source and add it to the map.
-var dataSource = new atlas.source.DataSource();
-map.sources.add(dataSource);
+
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-73.97, 40.78],
+ zoom: 11,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ }
+ });
-//Create a rectangular polygon.
-dataSource.add(new atlas.data.Feature(
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+
+ /*Create a data source and add it to the map*/
+ var dataSource = new atlas.source.DataSource();
+ map.sources.add(dataSource);
+
+ /*Create a rectangle*/
+ dataSource.add(new atlas.Shape(new atlas.data.Feature(
new atlas.data.Polygon([[
- [-73.98235, 40.76799],
- [-73.95785, 40.80044],
- [-73.94928, 40.7968],
- [-73.97317, 40.76437],
- [-73.98235, 40.76799]
+ [-73.98235, 40.76799],
+ [-73.95785, 40.80044],
+ [-73.94928, 40.7968],
+ [-73.97317, 40.76437],
+ [-73.98235, 40.76799]
]])
-));
+ )));
-//Create and add a polygon layer to render the polygon to the map, below the label layer.
-map.layers.add(new atlas.layer.PolygonLayer(dataSource, null,{
- fillColor: 'red',
+ /*Create and add a polygon layer to render the polygon to the map*/
+ map.layers.add(new atlas.layer.PolygonLayer(dataSource, null,{
+ fillColor: "red",
fillOpacity: 0.7
-}), 'labels');
-```
+ }), 'labels')
+ });
+}
-Below is the complete and running sample of the above code.
+```
-<br/>
+ <!--
<iframe height='500' scrolling='no' title='Add a polygon to a map ' src='//codepen.io/azuremaps/embed/yKbOvZ/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/yKbOvZ/'>Add a polygon to a map </a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Use a polygon and line layer together A line layer is used to render the outline of polygons. The following code sample renders a polygon like the previous example, but now adds a line layer. This line layer is a second layer connected to the data source.
-<br/>
+```javascript
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-73.97, 40.78],
+ zoom: 11,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authType: 'subscriptionKey',
+ subscriptionKey: '{subscription key}'
+ }
+ });
+
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+
+ /*Create a data source and add it to the map*/
+ var dataSource = new atlas.source.DataSource();
+ map.sources.add(dataSource);
+
+ /*Create a rectangle*/
+ dataSource.add(new atlas.data.Polygon([[
+ [-73.98235, 40.76799],
+ [-73.95785, 40.80045],
+ [-73.94928, 40.7968],
+ [-73.97317, 40.76437],
+ [-73.98235, 40.76799]
+ ]])
+ );
+
+ //Create a polygon layer to render the filled in area of the polygon.
+ var polygonLayer = new atlas.layer.PolygonLayer(dataSource, 'myPolygonLayer', {
+ fillColor: 'rgba(0, 200, 200, 0.5)'
+ });
+
+ //Create a line layer for greater control of rendering the outline of the polygon.
+ var lineLayer = new atlas.layer.LineLayer(dataSource, 'myLineLayer', {
+ strokeColor: 'red',
+ strokeWidth: 2
+ });
+
+ /*Create and add a polygon layer to render the polygon to the map*/
+ map.layers.add([polygonLayer, lineLayer])
+ });
+}
+```
+
+<!--
<iframe height='500' scrolling='no' title='Polygon and line layer to add polygon' src='//codepen.io/azuremaps/embed/aRyEPy/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/aRyEPy/'>Polygon and line layer to add polygon</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Fill a polygon with a pattern In addition to filling a polygon with a color, you may use an image pattern to fill the polygon. Load an image pattern into the map's image sprite resources and then reference this image with the `fillPattern` property of the polygon layer.
-<br/>
+For a fully functional sample that shows how to use an image template as a fill pattern in a polygon layer, see [Fill polygon with built-in icon template] in the [Azure Maps Samples].
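Conceptually, using a fill pattern is a two-step flow: load the image into the sprite, then reference its id from the layer options. The following sketch shows that sequence against a hypothetical stand-in `map` object so it can run outside the browser; the id `fill-checker` and the URL are made up. In a real app you'd make the same two calls on an `atlas.Map` instance inside its `ready` event, and `map.imageSprite.add` returns a Promise you should wait on before adding the layer.

```javascript
// Hypothetical stand-in for the map object, so the flow can run outside
// the browser. Real SDK: map.imageSprite.add(id, url) returns a Promise.
const map = {
  imageSprite: {
    ids: new Set(),
    add(id, url) { this.ids.add(id); }
  },
  layers: {
    added: [],
    add(layer) { this.added.push(layer); }
  }
};

// Step 1: load the pattern image into the map's image sprite resources.
map.imageSprite.add('fill-checker', 'https://example.com/checker.png');

// Step 2: reference the loaded image by id via the fillPattern option.
const polygonLayerOptions = { fillPattern: 'fill-checker', fillOpacity: 1 };
map.layers.add({ type: 'PolygonLayer', options: polygonLayerOptions });
```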
+
+<!--
<iframe height="500" scrolling="no" title="Polygon fill pattern" src="//codepen.io/azuremaps/embed/JzQpYX/?height=500&theme-id=0&default-tab=js,result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/JzQpYX/'>Polygon fill pattern</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>-
+-->
> [!TIP] > The Azure Maps web SDK provides several customizable image templates you can use as fill patterns. For more information, see the [How to use image templates](how-to-use-image-templates-web-sdk.md) document. ## Customize a polygon layer
-The Polygon layer only has a few styling options. Here is a tool to try them out.
+The Polygon layer only has a few styling options. See the [Polygon Layer Options] sample map in the [Azure Maps Samples] to try them out.
-<br/>
+<!--
<iframe height='700' scrolling='no' title='LXvxpg' src='//codepen.io/azuremaps/embed/LXvxpg/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/LXvxpg/'>LXvxpg</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
<a id="addACircle"></a>
Azure Maps uses an extended version of the GeoJSON schema that provides a [defin
The Azure Maps Web SDK converts these `Point` features into `Polygon` features. Then, these features are rendered on the map using a polygon layer as shown in the following code sample.
-<br/>
+```javascript
+function InitMap()
+{
+ var map = new atlas.Map('myMap', {
+ center: [-73.985708, 40.75773],
+ zoom: 12,
+ view: "Auto",
+
+ //Add authentication details for connecting to Azure Maps.
+ authOptions: {
+ // Get an Azure Maps key at https://azuremaps.com/.
+ authType: 'subscriptionKey',
+ subscriptionKey: '{Your-Azure-Maps-Subscription-key}'
+ }
+ });
+
+ //Wait until the map resources are ready.
+ map.events.add('ready', function () {
+
+ /*Create a data source and add it to the map*/
+ var dataSource = new atlas.source.DataSource();
+ map.sources.add(dataSource);
+
+ //Create a circle
+ dataSource.add(new atlas.data.Feature(new atlas.data.Point([-73.985708, 40.75773]),
+ {
+ subType: "Circle",
+ radius: 1000
+ }));
+
+ // Create a polygon layer to render the filled in area
+ // of the circle polygon, and add it to the map.
+ map.layers.add(new atlas.layer.PolygonLayer (dataSource, null, {
+ fillColor: 'rgba(0, 200, 200, 0.8)'
+ }));
+ });
+}
+```
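To illustrate the conversion described above, the following sketch approximates a circle defined by a center and a radius in meters as a closed GeoJSON polygon ring. This is a simplified approximation for illustration only, not the SDK's internal algorithm.

```javascript
// Approximate a circle (center in [longitude, latitude] degrees, radius in
// meters) as a closed GeoJSON Polygon ring. Illustrative sketch only.
function circleToPolygon(center, radiusMeters, numPoints = 32) {
  const [lon, lat] = center;
  const earthRadius = 6378137; // meters, WGS84 equatorial radius
  const ring = [];
  for (let i = 0; i <= numPoints; i++) {
    const angle = (2 * Math.PI * i) / numPoints;
    // Angular offsets in radians, converted to degrees. The longitude
    // offset is scaled by cos(latitude) so the circle stays round.
    const dLat = (radiusMeters * Math.cos(angle)) / earthRadius;
    const dLon = (radiusMeters * Math.sin(angle)) /
      (earthRadius * Math.cos((lat * Math.PI) / 180));
    ring.push([
      lon + (dLon * 180) / Math.PI,
      lat + (dLat * 180) / Math.PI
    ]);
  }
  return { type: 'Polygon', coordinates: [ring] };
}

// The 1000-meter circle from the sample, as a renderable polygon:
const circle = circleToPolygon([-73.985708, 40.75773], 1000);
```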
+
+ <!--
<iframe height='500' scrolling='no' title='Add a circle to a map' src='//codepen.io/azuremaps/embed/PRmzJX/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/PRmzJX/'>Add a circle to a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>-
+-->
## Make a geometry easy to update A `Shape` class wraps a [Geometry](/javascript/api/azure-maps-control/atlas.data.geometry) or [Feature](/javascript/api/azure-maps-control/atlas.data.feature) and makes it easy to update and maintain these features. To instantiate a shape variable, pass a geometry or a set of properties to the shape constructor.
var shape1 = new atlas.Shape(new atlas.data.Point([0,0]), { myProperty: 1 });
var shape2 = new atlas.Shape(new atlas.data.Feature(new atlas.data.Point([0,0]), { myProperty: 1 })); ```
-The following code sample shows how to wrap a circle GeoJSON object with a shape class. As the value of the radius changes in the shape, the circle renders automatically on the map.
+The [Make a geometry easy to update] sample shows how to wrap a circle GeoJSON object with a shape class. As the value of the radius changes in the shape, the circle renders automatically on the map.
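The automatic re-render works because the shape notifies its data source whenever a property changes. The following is a minimal sketch of that observer pattern with illustrative names; it is not the SDK's `Shape` implementation.

```javascript
// Sketch of the wrapper pattern: property updates go through a setter that
// notifies listeners (in the real SDK, the data source), which is what lets
// the rendered circle update automatically. Names are illustrative.
class ObservableShape {
  constructor(geometry, properties = {}) {
    this.geometry = geometry;
    this.properties = properties;
    this.listeners = [];
  }
  onChange(listener) { this.listeners.push(listener); }
  setProperty(name, value) {
    this.properties[name] = value;
    this.listeners.forEach(fn => fn(this)); // Trigger a re-render.
  }
  getProperty(name) { return this.properties[name]; }
}

// Usage: change the circle's radius and observe the notification.
const shape = new ObservableShape(
  { type: 'Point', coordinates: [-73.985708, 40.75773] },
  { subType: 'Circle', radius: 1000 }
);
let renders = 0;
shape.onChange(() => { renders++; });
shape.setProperty('radius', 2000);
```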
-<br/>
+ <!--
<iframe height='500' scrolling='no' title='Update shape properties' src='//codepen.io/azuremaps/embed/ZqMeQY/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZqMeQY/'>Update shape properties</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Next steps
Additional resources:
> [!div class="nextstepaction"] > [Azure Maps GeoJSON specification extension](extend-geojson.md#circle)+
+[Fill polygon with built-in icon template]: https://samples.azuremaps.com/?sample=fill-polygon-with-built-in-icon-template
+[Azure Maps Samples]: https://samples.azuremaps.com
+[Make a geometry easy to update]: https://samples.azuremaps.com/?sample=make-a-geometry-easy-to-update
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
Learn how to use other features of the drawing tools module:
> [!div class="nextstepaction"] > [Interaction types and keyboard shortcuts](drawing-tools-interactions-keyboard-shortcuts.md)
-[Use a snapping grid]: https://samples.azuremaps.com/?search=Use%20a%20snapping%20grid&sample=use-a-snapping-grid
-[Snap grid options]: https://samples.azuremaps.com/?search=grid&sample=snap-grid-options
+[Use a snapping grid]: https://samples.azuremaps.com/drawing-tools-module/use-a-snapping-grid
+[Snap grid options]: https://samples.azuremaps.com/drawing-tools-module/snap-grid-options
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
See the following articles for more code samples to add to your maps:
> [Add an image layer](./map-add-image-layer.md) [Azure Maps Samples]: https://samples.azuremaps.com
-[Tile Layer using X, Y, and Z]: https://samples.azuremaps.com/?search=tile%20layer&sample=tile-layer-using-x%2C-y%2C-and-z
+[Tile Layer using X, Y, and Z]: https://samples.azuremaps.com/tile-layers/tile-layer-using-x,-y-and-z
[OpenSeaMap project]: https://openseamap.org/index.php
-[WMS Tile Layer]: https://samples.azuremaps.com/?search=tile%20layer&sample=wms-tile-layer
+[WMS Tile Layer]: https://samples.azuremaps.com/tile-layers/wms-tile-layer
[U.S. Geological Survey (USGS)]: https://mrdata.usgs.gov/
-[WMTS Tile Layer]: https://samples.azuremaps.com/?search=tile%20layer&sample=wmts-tile-layer
+[WMTS Tile Layer]: https://samples.azuremaps.com/tile-layers/wmts-tile-layer
[U.S. Geological Survey (USGS) National Map]:https://viewer.nationalmap.gov/services
-[Tile Layer Options]: https://samples.azuremaps.com/?search=tile%20layer&sample=tile-layer-options
+[Tile Layer Options]: https://samples.azuremaps.com/tile-layers/tile-layer-options
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
See code examples to add functionality to your app:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[Multiple Maps]: https://samples.azuremaps.com/?search=multiple%20maps&sample=multiple-maps
+[Multiple Maps]: https://samples.azuremaps.com/map/multiple-maps
[Azure Maps Samples]: https://samples.azuremaps.com
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
See the following articles for full code examples:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[Map Events]: https://samples.azuremaps.com/?search=interact%20with&sample=map-events
-[Layer Events]: https://samples.azuremaps.com/?search=interact%20with&sample=layer-events
-[HTML marker layer events]: https://samples.azuremaps.com/?search=interact%20with&sample=html-marker-layer-events
+[Map Events]: https://samples.azuremaps.com/map/map-events
+[Layer Events]: https://samples.azuremaps.com/symbol-layer/symbol-layer-events
+[HTML marker layer events]: https://samples.azuremaps.com/html-markers/html-marker-layer-events
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
Title: Get data from shapes on a map | Microsoft Azure Maps
+ Title: Get data from shapes on a map
+ description: In this article, learn how to get shape data drawn on a map using the Microsoft Azure Maps Web SDK.-- Previously updated : 09/04/2019-++ Last updated : 06/15/2023+ - # Get shape data This article shows you how to get data of shapes that are drawn on the map. We use the **drawingManager.getSource()** function inside [drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager#getsource--). There are various scenarios when you want to extract GeoJSON data of a drawn shape and use it elsewhere. - ## Get data from drawn shape
-The following function gets the drawn shape's source data and outputs it to the screen.
+The following function gets the drawn shape's source data and outputs it to the screen.
```javascript function getDrawnShapes() {
function getDrawnShapes() {
} ```
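Conceptually, the source returned by `drawingManager.getSource()` holds the drawn shapes as GeoJSON features, so exporting the data amounts to serializing them into a `FeatureCollection`. The following sketch shows that serialization with a hypothetical helper; it is not the SDK's implementation.

```javascript
// Serialize an array of drawn shapes into a GeoJSON FeatureCollection.
// Hypothetical helper for illustration; not the SDK's getSource().
function shapesToFeatureCollection(shapes) {
  return {
    type: 'FeatureCollection',
    features: shapes.map(s => ({
      type: 'Feature',
      geometry: s.geometry,
      properties: s.properties || {}
    }))
  };
}

// Example: one drawn point, serialized for display or export.
const drawn = [
  { geometry: { type: 'Point', coordinates: [-73.98, 40.76] }, properties: {} }
];
const geojson = JSON.stringify(shapesToFeatureCollection(drawn), null, 2);
```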
-Below is the complete running code sample, where you can draw a shape to test the functionality:
+The [Get drawn shapes from drawing manager] code sample allows you to draw a shape on a map and then get the code used to create those drawings by using the drawing manager's `drawingManager.getSource()` function.
-<br/>
+<!--
<iframe height="686" title="Get shape data" src="//codepen.io/azuremaps/embed/xxKgBVz/?height=265&theme-id=0&default-tab=result" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">See the Pen <a href='https://codepen.io/azuremaps/pen/xxKgBVz/'>Get shape data</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Next steps
Learn more about the classes and methods used in this article:
> [!div class="nextstepaction"] > [Drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar)+
+[Get drawn shapes from drawing manager]: https://samples.azuremaps.com/drawing-tools-module/get-drawn-shapes-from-drawing-manager
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
Title: Show traffic on a map | Microsoft Azure Maps
+ Title: Show traffic on a map
+ description: Find out how to add traffic data to maps. Learn about flow data, and see how to use the Azure Maps Web SDK to add incident data and flow data to maps.-- Previously updated : 07/29/2019-++ Last updated : 06/15/2023+ - # Show traffic on the map
map.setTraffic({
}); ```
-Below is the complete running code sample of the above functionality.
+The [Traffic Overlay] sample demonstrates how to display the traffic overlay on a map.
-<br/>
-<iframe height='500' scrolling='no' title='Show traffic on a map' src='//codepen.io/azuremaps/embed/WMLRPw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WMLRPw/'>Show traffic on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+<!--
+<iframe height='500' scrolling='no' title='Show traffic on a map' src='//codepen.io/azuremaps/embed/WMLRPw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WMLRPw/'>Show traffic on a map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Traffic overlay options
-The following tool lets you switch between the different traffic overlay settings to see how the rendering changes.
+The [Traffic Overlay Options] tool lets you switch between the different traffic overlay settings to see how the rendering changes.
-<br/>
+<!--
<iframe height="700" scrolling="no" title="Traffic overlay options" src="//codepen.io/azuremaps/embed/RwbPqRY/?height=700&theme-id=0&default-tab=result" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/RwbPqRY/'>Traffic overlay options</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Add traffic controls
map.controls.add(new atlas.control.TrafficControl(), { position: 'top-right' });
map.controls.add(new atlas.control.TrafficLegendControl(), { position: 'bottom-left' }); ```
-<br/>
+The [Add traffic controls] sample is a fully functional map that shows how to display traffic data on a map.
+
+<!--
<iframe height="500" scrolling="no" title="Traffic controls" src="https://codepen.io/azuremaps/embed/ZEWaeLJ?height500&theme-id=0&default-tab=js,result&embed-version=2&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/ZEWaeLJ'>Traffic controls</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Next steps
Enhance your user experiences:
> [!div class="nextstepaction"] > [Code sample page](https://aka.ms/AzureMapsSamples)+
+[Traffic Overlay]: https://samples.azuremaps.com/traffic/traffic-overlay
+[Add traffic controls]: https://samples.azuremaps.com/traffic/traffic-controls
+[Traffic Overlay Options]: https://samples.azuremaps.com/traffic/traffic-overlay-options
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Global requests from clients can be processed by action group services in any re
|Notification type|Description |Fields| ||||
- |Email Azure Resource Manager role|Send an email to the subscription members, based on their role.<br>A notification email is sent only to the primary email address configured for the Azure AD user.<br>The email is only sent to Azure Active Directory **user** members of the selected role, not to Azure AD groups or service principals.<br> See [Email](#email).|Enter the primary email address configured for the Azure AD user. See [Email](#email).|
+ |Email Azure Resource Manager role|Send an email to the subscription members, based on their role.<br>A notification email is sent only to the primary email address configured for the Azure AD user.<br>The email is only sent to Azure Active Directory **user** members of the selected role, not to Azure AD groups or service principals.<br> See [Email](#email-azure-resource-manager).|Enter the primary email address configured for the Azure AD user. See [Email](#email-azure-resource-manager).|
|Email| Ensure that your email filtering and any malware/spam prevention services are configured appropriately. Emails are sent from the following email addresses:<br> * azure-noreply@microsoft.com<br> * azureemail-noreply@microsoft.com<br> * alerts-noreply@mail.windowsazure.com|Enter the email where the notification should be sent.| |SMS|SMS notifications support bi-directional communication. The SMS contains the following information:<br> * Shortname of the action group this alert was sent to<br> * The title of the alert.<br> A user can respond to an SMS to:<br> * Unsubscribe from all SMS alerts for all action groups or a single action group.<br> * Resubscribe to alerts<br> * Request help.<br> For more information about supported SMS replies, see [SMS replies](#sms-replies).|Enter the **Country code** and the **Phone number** for the SMS recipient. If you can't select your country/region code in the Azure portal, SMS isn't supported for your country/region. If your country/region code isn't available, you can vote to have your country/region added at [Share your ideas](https://feedback.azure.com/d365community/idea/e527eaa6-2025-ec11-b6e6-000d3a4f09d0). As a workaround until your country is supported, configure the action group to call a webhook to a third-party SMS provider that supports your country/region.| |Azure app Push notifications|Send notifications to the Azure mobile app. To enable push notifications to the Azure mobile app, provide the email address that you use as your account ID. For more information about the Azure mobile app, see [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).|In the **Azure account email** field, enter the email address that you use as your account ID when you configure the Azure mobile app. |
Rate limiting applies across all subscriptions. Rate limiting is applied as soon
When an email address is rate limited, a notification is sent to communicate that rate limiting was applied and when the rate limiting expires.
-## Email
+## Email Azure Resource Manager
-When you use email notifications, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals.
+When you use Azure Resource Manager for email notifications, you can send email to the members of a subscription's role. Email is only sent to Azure Active Directory (Azure AD) **user** members of the role. Email isn't sent to Azure AD groups or service principals.
A notification email is sent only to the primary email address.
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
In this article, we cover the Click Analytics plug-in, which automatically track
## Get started
-Users can set up the Click Analytics Auto-Collection plug-in via SDK Loader Script or NPM.
+Users can set up the Click Analytics Auto-Collection plug-in via SDK Loader Script or npm and then optionally add a framework extension.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-### SDK Loader Script setup
+### [SDK Loader Script setup](#tab/sdkloaderscript)
Ignore this setup if you use the npm setup.
> [!NOTE] > To add or update SDK Loader Script configuration, see [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration).
-### npm setup
+### [npm setup](#tab/npmsetup)
Install the npm package:
const appInsights = new ApplicationInsights({ config: configObj });
appInsights.loadAppInsights(); ``` ++
+## Add a framework extension
+
+Add a framework extension, if needed.
+
+### [React](#tab/react)
+
+```javascript
+import React from 'react';
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
+
+var browserHistory = createBrowserHistory({ basename: '' });
+var reactPlugin = new ReactPlugin();
+var clickPluginInstance = new ClickAnalyticsPlugin();
+var clickPluginConfig = {
+ autoCapture: true
+};
+var appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ extensions: [reactPlugin, clickPluginInstance],
+ extensionConfig: {
+ [reactPlugin.identifier]: { history: browserHistory },
+ [clickPluginInstance.identifier]: clickPluginConfig
+ }
+ }
+});
+appInsights.loadAppInsights();
+```
+
+> [!NOTE]
+> To add React configuration, see [React configuration](./javascript-framework-extensions.md?tabs=react#configuration). For more information on the React plug-in, see [React plug-in](./javascript-framework-extensions.md?tabs=react#react-application-insights-javascript-sdk-plug-in).
+
+### [React Native](#tab/reactnative)
+
+```typescript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';
+import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+
+var clickPluginInstance = new ClickAnalyticsPlugin();
+var clickPluginConfig = {
+ autoCapture: true
+};
+var RNPlugin = new ReactNativePlugin();
+var appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ extensions: [RNPlugin, clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ }
+ }
+});
+appInsights.loadAppInsights();
+```
+
+> [!NOTE]
+> To add React Native configuration, see [Enable Correlation for React Native](./javascript-framework-extensions.md?tabs=reactnative#enable-correlation). For more information on the React Native plug-in, see [React Native plug-in](./javascript-framework-extensions.md?tabs=reactnative#react-native-plugin-for-application-insights-javascript-sdk).
+
+### [Angular](#tab/angular)
+
+```javascript
+import { ApplicationInsights } from '@microsoft/applicationinsights-web';
+import { AngularPlugin } from '@microsoft/applicationinsights-angularplugin-js';
+import { ClickAnalyticsPlugin } from '@microsoft/applicationinsights-clickanalytics-js';
+import { Component } from '@angular/core';
+import { Router } from '@angular/router';
+
+@Component({
+ selector: 'app-root',
+ templateUrl: './app.component.html',
+ styleUrls: ['./app.component.css']
+})
+export class AppComponent {
+ constructor(
+ private router: Router
+ ){
+ var angularPlugin = new AngularPlugin();
+ var clickPluginInstance = new ClickAnalyticsPlugin();
+ var clickPluginConfig = {
+ autoCapture: true
+ };
+ const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ extensions: [angularPlugin, clickPluginInstance],
+ extensionConfig: {
+ [angularPlugin.identifier]: { router: this.router },
+ [clickPluginInstance.identifier]: clickPluginConfig
+ }
+ } });
+ appInsights.loadAppInsights();
+ }
+}
+```
+
+> [!NOTE]
+> To add Angular configuration, see [Enable Correlation for Angular](./javascript-framework-extensions.md?tabs=angular#enable-correlation). For more information on the Angular plug-in, see [Angular plug-in](./javascript-framework-extensions.md?tabs=angular#angular-plugin-for-application-insights-javascript-sdk).
+++ ## Set the authenticated user context If you need to set this optional setting, see [Set the authenticated user context](https://github.com/microsoft/ApplicationInsights-JS/blob/master/API-reference.md#setauthenticatedusercontext). This setting isn't required to use Click Analytics.
If you declare `parentDataTag` and define the `data-parentid` or `data-*-parenti
> Once `parentDataTag` is included in *any* HTML element across your application *the SDK begins looking for parents tags across your entire application* and not just the HTML element where you used it. > [!CAUTION]
-> If you're using the HEART workbook with the Click Analytics plugin, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
+> If you're using the HEART workbook with the Click Analytics plug-in, for HEART events to be logged or detected, the tag `parentDataTag` must be declared in all other parts of an end user's application.
### `customDataPrefix`
The following key properties are captured by default when the plug-in is enabled
| Name | Description | Sample |
|--|--|--|
| Name | The name of the custom event. For more information on how a name gets populated, see [Name column](#name). | About |
| itemType | Type of event. | customEvent |
-|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2.6.2_ClickPlugin2.6.2|
+|sdkVersion | Version of Application Insights SDK along with click plug-in.|JavaScript:2_ClickPlugin2|
### Custom dimensions
export const clickPluginConfigWithParentDataTag = {
For example 2, for clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence.

> [!NOTE]
> If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`.
-
-## Example 3
+
+### Example 3
```javascript
export const clickPluginConfigWithParentDataTag = {
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
- See a [sample app](https://go.microsoft.com/fwlink/?linkid=2152871) for how to implement custom event properties such as Name and parentid and custom behavior and content.
- See the [sample app readme](https://github.com/Azure-Samples/Application-Insights-Click-Plugin-Demo/blob/main/README.md) for where to find click data, and [Log Analytics](../logs/log-analytics-tutorial.md#write-a-query) if you aren't familiar with the process of writing a query.
- Build a [workbook](../visualize/workbooks-overview.md) or [export to Power BI](../logs/log-powerbi.md) to create custom visualizations of click data.
azure-monitor Javascript Framework Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-framework-extensions.md
Initialize a connection to Application Insights:
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+> [!TIP]
+> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [reactPlugin],`.
+
```javascript
import React from 'react';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactPlugin, withAITracking } from '@microsoft/applicationinsights-reac
import { createBrowserHistory } from "history";

const browserHistory = createBrowserHistory({ basename: '' });
var reactPlugin = new ReactPlugin();
+// Add the Click Analytics plug-in.
+/* var clickPluginInstance = new ClickAnalyticsPlugin();
+ var clickPluginConfig = {
+ autoCapture: true
+}; */
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
extensions: [reactPlugin],
+ // Add the Click Analytics plug-in.
+ // extensions: [reactPlugin, clickPluginInstance],
extensionConfig: { [reactPlugin.identifier]: { history: browserHistory }
+ // Add the Click Analytics plug-in.
+ // [clickPluginInstance.identifier]: clickPluginConfig
} } });
If a custom `PageView` duration isn't provided, `PageView` duration defaults to
Check out the [Application Insights React demo](https://github.com/microsoft/applicationinsights-react-js/tree/main/sample/applicationinsights-react-sample).
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+
## [React Native](#tab/reactnative)

### React Native plugin for Application Insights JavaScript SDK
To use this plugin, you need to construct the plugin and add it as an `extension
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+> [!TIP]
+> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [RNPlugin]`.
+
```typescript
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { ReactNativePlugin } from '@microsoft/applicationinsights-react-native';

var RNPlugin = new ReactNativePlugin();
+// Add the Click Analytics plug-in.
+/* var clickPluginInstance = new ClickAnalyticsPlugin();
+var clickPluginConfig = {
+ autoCapture: true
+}; */
var appInsights = new ApplicationInsights({ config: { connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
extensions: [RNPlugin]
+ // Add the Click Analytics plug-in.
+ /* extensions: [RNPlugin, clickPluginInstance],
+ extensionConfig: {
+ [clickPluginInstance.identifier]: clickPluginConfig
+ } */
} }); appInsights.loadAppInsights();
JavaScript correlation is turned off by default in order to minimize the telemet
#### PageView
-If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0.
+If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0.
+
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
## [Angular](#tab/angular)
Set up an instance of Application Insights in the entry component in your app:
> [!IMPORTANT]
> When using the ErrorService, there is an implicit dependency on the `@microsoft/applicationinsights-analytics-js` extension. You must include either the `@microsoft/applicationinsights-web` package or the `@microsoft/applicationinsights-analytics-js` extension. Otherwise, unhandled errors caught by the error service will not be sent.
+> [!TIP]
+> If you want to add the [Click Analytics plug-in](./javascript-feature-extensions.md), uncomment the lines for Click Analytics and delete `extensions: [angularPlugin],`.
+
```js
import { Component } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
export class AppComponent {
private router: Router ){ var angularPlugin = new AngularPlugin();
- const appInsights = new ApplicationInsights({ config: {
- connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
- extensions: [angularPlugin],
- extensionConfig: {
- [angularPlugin.identifier]: { router: this.router }
- }
- } });
+ // Add the Click Analytics plug-in.
+ /* var clickPluginInstance = new ClickAnalyticsPlugin();
+ var clickPluginConfig = {
+ autoCapture: true
+ }; */
+ const appInsights = new ApplicationInsights({
+ config: {
+ connectionString: 'YOUR_CONNECTION_STRING_GOES_HERE',
+ // If you're adding the Click Analytics plug-in, delete the next line.
+ extensions: [angularPlugin],
+ // Add the Click Analytics plug-in.
+ // extensions: [angularPlugin, clickPluginInstance],
+ extensionConfig: {
+ [angularPlugin.identifier]: { router: this.router }
+ // Add the Click Analytics plug-in.
+ // [clickPluginInstance.identifier]: clickPluginConfig
+ }
+ }
+ });
appInsights.loadAppInsights(); } }
The Angular Plugin automatically tracks route changes and collects other Angular
If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of 0.
+> [!TIP]
+> If you're adding the Click Analytics plug-in, see [Use the Click Analytics plug-in](./javascript-feature-extensions.md#use-the-plug-in) to continue with the setup process.
+ ## Next steps
azure-monitor Container Insights Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md
+
+ Title: Configure agent authentication for the Container Insights agent | Microsoft Docs
+description: This article describes how to configure authentication for the containerized agent used by Container insights.
+ Last updated : 06/13/2023+++
+# Authentication for Azure Monitor - Container Insights
+
+Container Insights now defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a Monitoring Metrics Publisher role to the cluster.
+
+## How to enable
+
+Select the relevant tab for instructions to enable managed identity authentication on existing clusters.
+
+## [Azure portal](#tab/portal-azure-monitor)
+
+No action is needed when you create a cluster from the Azure portal. However, it isn't possible to switch to managed identity authentication from the Azure portal. Use the command-line tools on the other tabs to migrate; they include migration instructions and templates.
+
+## [Azure CLI](#tab/cli)
+
+See [Migrate to managed identity authentication](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=azure-cli#migrate-to-managed-identity-authentication)
+
+## [Resource Manager template](#tab/arm)
+
+See the instructions for migrating:
+
+* [AKS clusters](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-aks?tabs=arm#existing-aks-cluster)
+* [Arc-enabled clusters](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?tabs=create-cli%2Cverify-portal%2Cmigrate-arm)
+
+## [Bicep](#tab/bicep)
+
+**Enable Monitoring with MSI without syslog**
+
+1. Download the Bicep template and parameter file:
+
+```azurecli
+curl -L https://aka.ms/enable-monitoring-msi-bicep-template -o existingClusterOnboarding.bicep
+curl -L https://aka.ms/enable-monitoring-msi-bicep-parameters -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file
+
+ - **aksResourceId**: Use the values on the AKS Overview page for the AKS cluster.
+ - **aksResourceLocation**: Use the values on the AKS Overview page for the AKS cluster.
+ - **workspaceResourceId**: Use the resource ID of your Log Analytics workspace.
+ - **workspaceRegion**: Use the location of your Log Analytics workspace.
+ - **resourceTagValues**: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values.
+ - Other parameters are for cost optimization. For more information, see [this guide](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-cost-config?tabs=create-CLI#data-collection-parameters).
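The DCR naming convention called out for **resourceTagValues** can be sketched as a quick shell check before you edit the parameter file (the cluster name and region below are placeholders, not values from this article):

```shell
# Derive the expected Container insights DCR name from the cluster
# name and region (placeholder values).
clusterName="contoso-aks"
clusterRegion="eastus"
dcrName="MSCI-${clusterName}-${clusterRegion}"
echo "${dcrName}"   # prints MSCI-contoso-aks-eastus
```

You can then look up a DCR with this name in the cluster's resource group to read its existing tag values.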
+
+3. Onboard with the following commands:
+
+```
+az login
+az account set --subscription "Subscription Name"
+az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.bicep --parameters ./existingClusterParam.json
+```
+
+**Enable Monitoring with MSI with syslog**
+
+1. Download the Bicep template and parameter file:
+
+```azurecli
+curl -L https://aka.ms/enable-monitoring-msi-syslog-bicep-template -o existingClusterOnboarding.bicep
+curl -L https://aka.ms/enable-monitoring-msi-syslog-bicep-parameters -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file
+
+- **aksResourceId**: Use the values on the AKS Overview page for the AKS cluster.
+- **aksResourceLocation**: Use the values on the AKS Overview page for the AKS cluster.
+- **workspaceResourceId**: Use the resource ID of your Log Analytics workspace.
+- **workspaceRegion**: Use the location of your Log Analytics workspace.
+- **resourceTagValues**: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values.
+
+3. Onboard with the following commands:
+
+```
+az login
+az account set --subscription "Subscription Name"
+az deployment group create --resource-group <ClusterResourceGroupName> --template-file ./existingClusterOnboarding.bicep --parameters ./existingClusterParam.json
+```
+
+For a new AKS cluster, replace and use the managed cluster resources in this [guide](https://learn.microsoft.com/azure/aks/learn/quick-kubernetes-deploy-bicep?tabs=azure-cli).
++
+## [Terraform](#tab/terraform)
+
+**Enable monitoring with MSI without syslog for a new AKS cluster**
+
+1. Download the Terraform template for enabling monitoring with MSI without syslog:
+https://aka.ms/enable-monitoring-msi-terraform
+2. Adjust the `azurerm_kubernetes_cluster` resource in *main.tf* based on the cluster settings you're going to have.
+3. Update the parameters in *variables.tf* to replace the values in "<>":
+ - **aks_resource_group_name**: Use the values on the AKS Overview page for the resource group.
+ - **resource_group_location**: Use the values on the AKS Overview page for the resource group.
+ - **cluster_name**: Define the cluster name that you would like to create
+ - **workspace_resource_id**: Use the resource ID of your Log Analytics workspace.
+ - **workspace_region**: Use the location of your Log Analytics workspace.
+ - **resource_tag_values**: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values.
+ - Other parameters are for cluster settings or cost optimization. For more information, see [this guide](https://learn.microsoft.com/azure/azure-monitor/containers/container-insights-cost-config?tabs=create-CLI#data-collection-parameters).
+4. Run `terraform init -upgrade` to initialize the Terraform deployment.
+5. Run `terraform plan -out main.tfplan` to create an execution plan.
+6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
+
+**Enable monitoring with MSI with syslog for a new AKS cluster**
+
+1. Download the Terraform template for enabling monitoring with MSI with syslog enabled:
+https://aka.ms/enable-monitoring-msi-syslog-terraform
+2. Adjust the `azurerm_kubernetes_cluster` resource in *main.tf* based on the cluster settings you're going to have.
+3. Update the parameters in *variables.tf* to replace the values in "<>":
+ - **aks_resource_group_name**: Use the values on the AKS Overview page for the resource group.
+ - **resource_group_location**: Use the values on the AKS Overview page for the resource group.
+ - **cluster_name**: Define the cluster name that you would like to create
+ - **workspace_resource_id**: Use the resource ID of your Log Analytics workspace.
+ - **workspace_region**: Use the location of your Log Analytics workspace.
+ - **resource_tag_values**: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will match `MSCI-<clusterName>-<clusterRegion>`, and this resource is created in the same resource group as the AKS cluster. For first-time onboarding, you can set arbitrary tag values.
+ - Other parameters are for cluster settings. Refer to [this guide](http://LinkTobeAdded.com).
+4. Run `terraform init -upgrade` to initialize the Terraform deployment.
+5. Run `terraform plan -out main.tfplan` to create an execution plan.
+6. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
+
+**Enable monitoring with MSI for an existing AKS cluster**
+1. Import the existing cluster resource first with this command: `terraform import azurerm_kubernetes_cluster.k8s <aksResourceId>`
+2. Add the oms_agent add-on profile to the existing azurerm_kubernetes_cluster resource.
+```
+oms_agent {
+ log_analytics_workspace_id = var.workspace_resource_id
+ msi_auth_for_monitoring_enabled = true
+ }
+```
+3. Copy the DCR and DCRA resources from the Terraform templates.
+4. Run `terraform plan -out main.tfplan` and make sure the change is adding the `oms_agent` property. Note: If the `azurerm_kubernetes_cluster` resource defined differs from the existing cluster, the existing cluster gets destroyed and recreated.
+5. Run `terraform apply main.tfplan` to apply the execution plan to your cloud infrastructure.
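Putting steps 1 and 2 together, the imported cluster resource with the `oms_agent` profile added might look like the following sketch. Every name and size here is a placeholder; keep all other attributes identical to your real cluster so `terraform plan` doesn't propose a destroy and recreate.

```terraform
# Sketch only: an existing AKS cluster resource after adding the oms_agent
# profile for managed identity (MSI) authentication. Placeholder values.
resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "contoso-aks"
  location            = var.resource_group_location
  resource_group_name = var.aks_resource_group_name
  dns_prefix          = "contoso-aks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  oms_agent {
    log_analytics_workspace_id      = var.workspace_resource_id
    msi_auth_for_monitoring_enabled = true
  }
}
```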
+
+> [!TIP]
+> - Edit the `main.tf` file appropriately before running the Terraform template.
+> - Data starts flowing after 10 minutes because the cluster needs to be ready first.
+> - The workspace ID needs to match the format `/subscriptions/12345678-1234-9876-4563-123456789012/resourceGroups/example-resource-group/providers/Microsoft.OperationalInsights/workspaces/workspaceValue`.
+> - If the resource group already exists, run `terraform import azurerm_resource_group.rg /subscriptions/<Subscription_ID>/resourceGroups/<Resource_Group_Name>` before running `terraform plan`.
+
+## [Azure Policy](#tab/policy)
+
+1. Download Azure Policy templates and parameter files using the following commands:
+
+```
+curl -L https://aka.ms/enable-monitoring-msi-azure-policy-template -o azure-policy.rules.json
+curl -L https://aka.ms/enable-monitoring-msi-azure-policy-parameters -o azure-policy.parameters.json
+```
++
+2. Activate the policies:
+
+Create the policy definition with the following command:
+```
+az policy definition create --name "AKS-Monitoring-Addon-MSI" --display-name "AKS-Monitoring-Addon-MSI" --mode Indexed --metadata version=1.0.0 category=Kubernetes --rules azure-policy.rules.json --params azure-policy.parameters.json
+```
+Create the policy assignment with the following command:
+```
+az policy assignment create --name aks-monitoring-addon --policy "AKS-Monitoring-Addon-MSI" --assign-identity --identity-scope /subscriptions/<subscriptionId> --role Contributor --scope /subscriptions/<subscriptionId> --location <location> -p "{ \"workspaceResourceId\": { \"value\": \"/subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/microsoft.operationalinsights/workspaces/<workspaceName>\" } }"
+```
+
+> [!TIP]
+> - When performing the remediation task, make sure the policy assignment has access to the workspace you specified.
+> - Download all files under the *AddonPolicyTemplate* folder before running the policy template.
+> - To assign the policy, parameters, and remediation task from the portal, use the following steps:
+>   - After creating the policy definition through the preceding command, go to the Azure portal > **Policy** > **Definitions** and select the definition you created.
+>   - Select **Assign**, go to the **Parameters** tab, and fill in the details. Then select **Review + Create**.
+>   - Once the policy is assigned to the subscription, whenever you create a new cluster, the policy runs and checks whether Container insights is enabled. If not, it deploys the resource. To apply the policy to an existing AKS cluster, create a **Remediation task** for that resource from the **Policy Assignment**.
+++
+## Limitations
+1. Ingestion transformations aren't supported. For more information, see [Data collection transformations](https://learn.microsoft.com/azure/azure-monitor/essentials/data-collection-transformations).
+2. Dependency on DCR/DCRA for region availability: For a new AKS region, the DCR might not be supported in that region yet. In that case, onboarding Container insights with MSI fails. As a workaround, onboard Container insights through the CLI the legacy way (by using the Container insights solution).
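The legacy CLI onboarding mentioned in the workaround is roughly the following sketch (resource names and the workspace ID are placeholders):

```azurecli
az aks enable-addons --addons monitoring \
  --name contoso-aks \
  --resource-group contoso-rg \
  --workspace-resource-id "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>"
```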
+
+## Timeline
+Any new clusters that are created or onboarded now default to managed identity authentication. However, existing clusters with legacy solution-based authentication are still supported.
+
+## Next steps
+If you experience issues when you upgrade the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
If you have a Kubernetes cluster with Windows nodes, review and configure the ne
## Authentication
-Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
-
-> [!NOTE]
-> Container insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available." They're excluded from the service-level agreements and limited warranty. Container insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service](../../aks/faq.md).
+Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
## Agent
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
Transformations in a [data collection rule (DCR)](data-collection-rule-overview.
-### Required columns
+## Required columns
The output of every transformation must contain a valid timestamp in a column called `TimeGenerated` of type `datetime`. Make sure to include it in the final `extend` or `project` block! Creating or updating a DCR without `TimeGenerated` in the output of a transformation will lead to an error.
-## Inline reference table
-The [datatable](/azure/data-explorer/kusto/query/datatableoperator?pivots=azuremonitor) operator isn't supported in the subset of KQL available to use in transformations. This operator would normally be used in KQL to define an inline query-time table. Use dynamic literals instead to work around this limitation.
-
-For example, the following statement isn't supported in a transformation:
-
-```kusto
-let galaxy = datatable (country:string,entity:string)['ES','Spain','US','United States'];
-source
-| join kind=inner (galaxy) on $left.Location == $right.country
-| extend Galaxy_CF = ['entity']
-```
-
-You can instead use the following statement, which is supported and performs the same functionality:
-
-```kusto
-let galaxyDictionary = parsejson('{"ES": "Spain","US": "United States"}');
-source
-| extend Galaxy_CF = galaxyDictionary[Location]
-```
--
-### Handling dynamic data
+## Handling dynamic data
Consider the following input with [dynamic data](/azure/data-explorer/kusto/query/scalar-data-types/dynamic): ```json
Consider the following input with [dynamic data](/azure/data-explorer/kusto/quer
} ```
-In order to access the properties in *AdditionalContext*, define it as dynamic-typed column in the input stream:
+To access the properties in *AdditionalContext*, define it as a dynamic-type column in the input stream:
```json "columns": [
In order to access the properties in *AdditionalContext*, define it as dynamic-t
] ```
-The content of *AdditionalContext* column can now be parsed and used in the KQL transformation:
+The content of the *AdditionalContext* column can now be parsed and used in the KQL transformation:
```kusto source
source
| extend DeviceId = tostring(parsedAdditionalContext.DeviceID) ```
-### Dynamic literals
+## Dynamic literals
Use the [parse_json function](/azure/data-explorer/kusto/query/parsejsonfunction) to handle [dynamic literals](/azure/data-explorer/kusto/query/scalar-data-types/dynamic#dynamic-literals). For example, the following queries provide the same functionality:
print x = 2 + 2, y = 5 | extend z = exp2(x) + exp2(y)
- [parse](/azure/data-explorer/kusto/query/parseoperator) - [project-away](/azure/data-explorer/kusto/query/projectawayoperator) - [project-rename](/azure/data-explorer/kusto/query/projectrenameoperator)-- [columnifexists]() (use columnifexists instead of column_ifexists)
+- [datatable](/azure/data-explorer/kusto/query/datatableoperator?pivots=azuremonitor)
+- [columnifexists](/azure/data-explorer/kusto/query/columnifexists) (use columnifexists instead of column_ifexists)
### Scalar operators
azure-monitor Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following limits apply when you use Azure Resource Manager and Azure resourc
[!INCLUDE [AAD-service-limits](../../../includes/active-directory-service-limits-include.md)]
+## API Center (preview) limits
+++ ## API Management limits [!INCLUDE [api-management-service-limits](../../../includes/api-management-service-limits.md)]
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Applying locks can lead to unexpected results. Some operations, which don't seem
- A read-only lock on a **resource group** that contains an **automation account** prevents all runbooks from starting. These operations require a POST method request.
+- A cannot-delete lock on a **resource** or **resource group** prevents the deletion of Azure RBAC assignments.
+
- A cannot-delete lock on a **resource group** prevents Azure Resource Manager from [automatically deleting deployments](../templates/deployment-history-deletions.md) in the history. If you reach 800 deployments in the history, your deployments fail.

- A cannot-delete lock on the **resource group** created by **Azure Backup Service** causes backups to fail. The service supports a maximum of 18 restore points. When locked, the backup service can't clean up restore points. For more information, see [Frequently asked questions-Back up Azure VMs](../../backup/backup-azure-vm-backup-faq.yml).
azure-resource-manager Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
azure-signalr Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
backup Azure Kubernetes Service Cluster Backup Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-using-cli.md
+
+ Title: Back up Azure Kubernetes Service (AKS) using Azure CLI
+description: This article explains how to back up Azure Kubernetes Service (AKS) using Azure CLI.
++ Last updated : 06/20/2023+++++
+# Back up Azure Kubernetes Service using Azure CLI (preview)
+
+This article describes how to configure and back up Azure Kubernetes Service (AKS) using Azure CLI.
+
+Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. The Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations.
+
+## Before you start
+
+- Currently, AKS backup supports Azure Disk-based persistent volumes (enabled by CSI driver) only. The backups are stored only in operational datastore (in your tenant) and aren't moved to a vault. The Backup vault and AKS cluster should be in the same region.
+
+- AKS backup uses a blob container and a resource group to store the backups. The blob container has the AKS cluster resources stored in it, whereas the persistent volume snapshots are stored in the resource group. The AKS cluster and the storage locations must reside in the same region. Learn [how to create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
+
+- Currently, AKS backup supports once-a-day backup. It also supports more frequent backups, at intervals of every *4*, *8*, or *12* hours. This solution allows you to retain your data for restore for up to 360 days. Learn to [create a backup policy](#create-a-backup-policy).
+
+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations on an AKS cluster. Learn more [about Backup Extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension).
+
+- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations.
+
+- Ensure to perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating backup or restore operation for AKS backup.
+
+For more information on the supported scenarios, limitations, and availability, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md).
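The provider and feature-flag registrations listed in the prerequisites can be performed with the Azure CLI, for example (a sketch; run once per subscription):

```azurecli
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.DataProtection
az feature register --namespace Microsoft.ContainerService --name TrustedAccessPreview
az provider register --namespace Microsoft.ContainerService
```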
+
+## Create a Backup vault
+
+A Backup vault is a management entity in Azure that stores backup data for various newer workloads that Azure Backup supports, such as Azure Database for PostgreSQL servers and Azure Disks. Backup vaults make it easy to organize your backup data, while minimizing management overhead. Backup vaults are based on the Azure Resource Manager model of Azure, which provides enhanced capabilities to help secure backup data.
+
+Before you create a Backup vault, choose the storage redundancy of the data in the vault, and then create the Backup vault with that storage redundancy and the location. Learn more about [creating a Backup vault](backup-vault-overview.md#create-a-backup-vault).
+
+>[!Note]
+>Though the selected vault may have the *global-redundancy* setting, backup for AKS currently supports **Operational Tier** only. All backups are stored in your subscription in the same region as that of the AKS cluster, and they aren't copied to Backup vault storage.
+
+To create the Backup vault, run the following command:
+
+```azurecli
+az dataprotection backup-vault create --resource-group $backupvaultresourcegroup --vault-name $backupvault --location $region --type SystemAssigned --storage-settings datastore-type="VaultStore" type="LocallyRedundant"
+```
+
+Once the vault creation is complete, create a backup policy to protect AKS clusters.
+
+## Create a backup policy
+
+To understand the inner components of a backup policy for the backup of AKS, retrieve the policy template using the command `az dataprotection backup-policy get-default-policy-template`. This command returns a default policy template for a given datasource type. Use this policy template to create a new policy.
+
+```azurecli
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureKubernetesService > akspolicy.json
+
+{
+ "datasourceTypes": [
+ "Microsoft.ContainerService/managedClusters"
+ ],
+ "name": "AKSPolicy1",
+ "objectType": "BackupPolicy",
+ "policyRules": [
+ {
+ "backupParameters": {
+ "backupType": "Incremental",
+ "objectType": "AzureBackupParams"
+ },
+ "dataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ },
+ "name": "BackupHourly",
+ "objectType": "AzureBackupRule",
+ "trigger": {
+ "objectType": "ScheduleBasedTriggerContext",
+ "schedule": {
+ "repeatingTimeIntervals": [
+ "R/2023-01-04T09:00:00+00:00/PT4H"
+ ]
+ },
+ "taggingCriteria": [
+ {
+ "isDefault": true,
+ "tagInfo": {
+ "id": "Default_",
+ "tagName": "Default"
+ },
+ "taggingPriority": 99
+ }
+ ]
+ }
+ },
+ {
+ "isDefault": true,
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P7D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+ "name": "Default",
+ "objectType": "AzureRetentionRule"
+ }
+ ]
+}
+
+```
+
+The policy template consists of trigger criteria (which determine when backup jobs run) and lifecycles (which determine when to delete, copy, or move backups). In the AKS backup policy template, the default trigger is a schedule that runs every 4 hours (PT4H), and the default retention deletes each backup after 7 days (P7D).
+
+```azurecli
+Scheduled trigger:
+ "trigger": {
+ "objectType": "ScheduleBasedTriggerContext",
+ "schedule": {
+ "repeatingTimeIntervals": [
+ "R/2023-01-04T09:00:00+00:00/PT4H"
+ ]
+ },
+
+Default retention lifecycle:
+ "lifecycles": [
+ {
+ "deleteAfter": {
+ "duration": "P7D",
+ "objectType": "AbsoluteDeleteOption"
+ },
+ "sourceDataStore": {
+ "dataStoreType": "OperationalStore",
+ "objectType": "DataStoreInfoBase"
+ }
+ }
+ ],
+```
+
+Backup for AKS provides multiple backups per day. If you require more frequent backups, choose the *Hourly* backup frequency, which can take backups at intervals of every *4*, *6*, *8*, or *12* hours. The backups are scheduled based on the *Time interval* you've selected.
+
+For example, if you select *Every 4 hours*, backups are taken at intervals of approximately 4 hours so that they're distributed evenly across the day. If one backup a day is sufficient, choose the *Daily* backup frequency and specify the *time of the day* when your backups should be taken.
+
+>[!Important]
+>The time of the day indicates the backup start time and not the time when the backup completes.
+
+>[!Note]
+>Though the selected vault has the global-redundancy setting, backup for AKS currently supports snapshot datastore only. All backups are stored in a resource group in your subscription, and aren't copied to the Backup vault storage.
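The schedule in the policy template is an ISO 8601 repeating interval (for example, `R/2023-01-04T09:00:00+00:00/PT4H`): a start time followed by a period. As a rough, illustrative sketch of how a chosen interval distributes backup start times across one day (this is plain shell arithmetic, not part of the AKS backup tooling):

```shell
# Illustrative only: show how an interval like ".../PT6H" anchored at
# 09:00 UTC distributes backup start times across one day.
interval_hours=6
start_hour=9
runs_per_day=$(( 24 / interval_hours ))
i=0
while [ "$i" -lt "$runs_per_day" ]; do
  printf "%02d:00 UTC\n" $(( (start_hour + i * interval_hours) % 24 ))
  i=$(( i + 1 ))
done
# prints: 09:00 UTC, 15:00 UTC, 21:00 UTC, 03:00 UTC
```

The same reasoning applies to the other supported intervals (4, 8, or 12 hours), which yield 6, 3, or 2 backups per day respectively.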
+
+Once you've downloaded the template as a JSON file, you can edit it for scheduling and retention as required. Then create a new policy with the resulting JSON. If you want to edit the hourly frequency or the retention period, use the `az dataprotection backup-policy trigger set` and/or `az dataprotection backup-policy retention-rule set` commands. Once the policy JSON has all the required values, proceed to create a new policy from the policy object using the `az dataprotection backup-policy create` command.
+
+```azurecli
+az dataprotection backup-policy create -g $backupvaultresourcegroup --vault-name $backupvault -n $backuppolicy --policy akspolicy.json
+```
+
+## Prepare AKS cluster for backup
+
+Once the vault and policy creation are complete, you need to perform the following prerequisites to get the AKS cluster ready for backup:
+
+1. **Create a storage account and blob container**.
+
+ Backup for AKS stores Kubernetes resources in a blob container as backups. To get the AKS cluster ready for backup, you need to install an extension in the cluster. This extension requires the storage account and blob container as inputs.
+
+ To create a new storage account, run the following command:
+
+ ```azurecli
+ az storage account create --name $storageaccount --resource-group $storageaccountresourcegroup --location $region --sku Standard_LRS
+ ```
+
+ Once the storage account creation is complete, create a blob container inside by running the following command:
+
+ ```azurecli
+ az storage container create --name $blobcontainer --account-name $storageaccount --auth-mode login
+ ```
+
+ Learn how to [enable or disable specific features, such as private endpoint, while creating storage account and blob container](../storage/common/storage-account-create.md?tabs=azure-portal).
+
+ >[!Note]
+ >1. The storage account and the AKS cluster should be in the same region and subscription.
+ >2. The blob container shouldn't contain any previously created file systems (except those created by backup for AKS).
+ >3. If your source or target AKS cluster is in a private virtual network, you need to create a private endpoint to connect the storage account with the AKS cluster.
+
+2. **Install Backup Extension**.
+
+ The Backup Extension must be installed in the AKS cluster to perform any backup and restore operations. The extension creates a namespace called `dataprotection-microsoft` in the cluster and uses it to deploy its resources. The extension requires the storage account and blob container as inputs for installation.
+
+ ```azurecli
+ az k8s-extension create --name azure-aks-backup --extension-type microsoft.dataprotection.kubernetes --scope cluster --cluster-type managedClusters --cluster-name $akscluster --resource-group $aksclusterresourcegroup --release-train stable --configuration-settings blobContainer=$blobcontainer storageAccount=$storageaccount storageAccountResourceGroup=$storageaccountresourcegroup storageAccountSubscriptionId=$subscriptionId
+
+ ```
+
+ As part of extension installation, a user identity is created in the AKS cluster's Node Pool Resource Group. For the extension to access the storage account, you need to assign this identity the **Storage Account Contributor** role. To assign the required role, run the following command:
+
+ ```azurecli
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name $akscluster --resource-group $aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/$subscriptionId/resourceGroups/$storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/$storageaccount
+ ```
+
+
+3. **Enable Trusted Access**
+
+ For the Backup vault to connect with the AKS cluster, you must enable *Trusted Access*, which gives the Backup vault a direct line of sight to the AKS cluster.
+
+ To enable Trusted Access, run the following command:
+
+ ```azurecli
+ az aks trustedaccess rolebinding create --cluster-name $akscluster --name backuprolebinding --resource-group $aksclusterresourcegroup --roles Microsoft.DataProtection/backupVaults/backup-operator --source-resource-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/BackupVaults/$backupvault
+ ```
+
+## Configure backups
+
+With the created Backup vault and backup policy, and the AKS cluster in *ready-to-be-backed-up* state, you can now start to back up your AKS cluster.
+
+### Prepare the request
+
+The configuration of backup is performed in two steps:
+
+1. Prepare the backup configuration to define which cluster resources are to be backed up by using the `az dataprotection backup-instance initialize-backupconfig` command. The command generates a JSON file, which you can update to define the backup configuration for your AKS cluster as required.
+
+ ```azurecli
+ az dataprotection backup-instance initialize-backupconfig --datasource-type AzureKubernetesService > aksbackupconfig.json
+
+ {
+ "excluded_namespaces": null,
+ "excluded_resource_types": null,
+ "include_cluster_scope_resources": true,
+ "included_namespaces": null,
+ "included_resource_types": null,
+ "label_selectors": null,
+ "snapshot_volumes": true
+ }
+ ```
+
+2. Prepare the backup request using the relevant vault, policy, AKS cluster, backup configuration, and snapshot resource group with the `az dataprotection backup-instance initialize` command.
+
+ ```azurecli
+ az dataprotection backup-instance initialize --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster --datasource-location $region --datasource-type AzureKubernetesService --policy-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault/backupPolicies/$backuppolicy --backup-configuration ./aksbackupconfig.json --friendly-name ecommercebackup --snapshot-resource-group-name $snapshotresourcegroup > backupinstance.json
+ ```
+
+Now, use the JSON output of this command to configure backup for the AKS cluster.
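As an aside, the `aksbackupconfig.json` generated in step 1 can be edited before step 2 to scope the backup — for example, to include only particular namespaces. A sketch with hypothetical namespace names, recreating the default config locally so the example is self-contained (`python3` assumed available):

```shell
# Recreate the default backup config locally (stand-in for the file
# generated by initialize-backupconfig in step 1).
cat > aksbackupconfig.json <<'EOF'
{
  "excluded_namespaces": null,
  "excluded_resource_types": null,
  "include_cluster_scope_resources": true,
  "included_namespaces": null,
  "included_resource_types": null,
  "label_selectors": null,
  "snapshot_volumes": true
}
EOF
# Limit the backup to two hypothetical namespaces, "prod" and "shared".
python3 - <<'EOF'
import json
with open("aksbackupconfig.json") as f:
    cfg = json.load(f)
cfg["included_namespaces"] = ["prod", "shared"]
with open("aksbackupconfig.json", "w") as f:
    json.dump(cfg, f, indent=2)
EOF
grep '"included_namespaces"' -A 3 aksbackupconfig.json
```

Leaving `included_namespaces` as `null` backs up all namespaces, which is the default behavior.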
+
+### Assign required permissions and validate
+
+Backup vault uses managed identity to access other Azure resources. To configure backup of AKS cluster, Backup vault's managed identity requires a set of permissions on the AKS cluster and resource groups, where snapshots are created and managed. Also, the AKS cluster requires permission on the Snapshot Resource group.
+
+Currently, only system-assigned managed identities are supported for backup (for both the Backup vault and the AKS cluster). A system-assigned managed identity is restricted to one per resource and is tied to the lifecycle of that resource. You can grant permissions to the managed identity by using Azure role-based access control (Azure RBAC). Managed identity is a service principal of a special type that can only be used with Azure resources. Learn more [about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+
+With the request prepared, first validate that the required roles are assigned to the resources mentioned above by running the following command:
+
+```azurecli
+az dataprotection backup-instance validate-for-backup --backup-instance ./backupinstance.json --ids /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault
+```
+
+If the validation fails and there are certain permissions missing, then you can assign them by running the following command:
+
+```azurecli
+az dataprotection backup-instance update-msi-permissions --datasource-type AzureKubernetesService --operation Backup --permissions-scope ResourceGroup --vault-name $backupvault --resource-group $backupvaultresourcegroup --backup-instance backupinstance.json
+
+```
+
+Once the permissions are assigned, revalidate using the *validate for backup* command, and then configure the backup by running the following command:
+
+```azurecli
+az dataprotection backup-instance create --backup-instance backupinstance.json --resource-group $backupvaultresourcegroup --vault-name $backupvault
+```
+
+## Run an on-demand backup
+
+To fetch the relevant backup instance on which you want to trigger a backup, run the `az dataprotection backup-instance list-from-resourcegraph` command.
+
+```azurecli
+az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureKubernetesService --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster
+```
+
+Now, trigger an on-demand backup for the backup instance by running the following command:
+
+```azurecli
+az dataprotection backup-instance adhoc-backup --rule-name "BackupHourly" --ids /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault/backupInstances/$backupinstanceid
+
+```
+
+## Tracking jobs
+
+Track backup jobs by using the `az dataprotection job` command. You can list all jobs and fetch the details of a particular job.
+
+You can also use Resource Graph to track all jobs across all subscriptions, resource groups, and Backup vaults by running the `az dataprotection job list-from-resourcegraph` command to get the relevant jobs.
+
+**For on-demand backup**:
+
+```azurecli
+az dataprotection job list-from-resourcegraph --datasource-type AzureKubernetesService --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster --operation OnDemandBackup
+```
+
+**For scheduled backup**:
+
+```azurecli
+az dataprotection job list-from-resourcegraph --datasource-type AzureKubernetesService --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster --operation ScheduledBackup
+```
+
+## Next steps
+
+- [Restore Azure Kubernetes Service cluster using Azure CLI (preview)](azure-kubernetes-service-cluster-restore-using-cli.md)
+- [Manage Azure Kubernetes Service cluster backups (preview)](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup (preview)](azure-kubernetes-service-cluster-backup-concept.md)
backup Azure Kubernetes Service Cluster Restore Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore-using-cli.md
+
+ Title: Restore Azure Kubernetes Service (AKS) using Azure CLI
+description: This article explains how to restore backed-up Azure Kubernetes Service (AKS) using Azure CLI.
+ Last updated : 06/20/2023
+# Restore Azure Kubernetes Service using Azure CLI (preview)
+
+This article describes how to restore an Azure Kubernetes Service (AKS) cluster from a restore point created by Azure Backup by using Azure CLI.
+
+Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. The Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations.
+
+You can perform both *Original-Location Recovery (OLR)* (restoring in the AKS cluster that was backed up) and *Alternate-Location Recovery (ALR)* (restoring in a different AKS cluster). You can also select specific items to restore from the backup, which is *Item-Level Recovery (ILR)*.
+
+>[!Note]
+>Before you initiate a restore operation, the target cluster should have Backup Extension installed and Trusted Access enabled for the Backup vault. [Learn more](azure-kubernetes-service-cluster-backup-using-cli.md#prepare-aks-cluster-for-backup).
+
+## Before you start
+
+- AKS backup allows you to restore to the original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup supports both full restore and item-level restore. You can use [restore configurations](#restore-to-an-aks-cluster) to define parameters based on the cluster resources that are picked up during the restore.
+
+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.
+
+For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md).
+
+## Validate and prepare target AKS cluster
+
+Before you initiate a restore process, you must validate that the AKS cluster is prepared for restore. This includes installing the Backup Extension, granting the extension permissions on the storage account where backups are stored, and enabling Trusted Access between the AKS cluster and the Backup vault.
+
+First, check if Backup Extension is installed in the cluster by running the following command:
+
+```azurecli
+az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name $targetakscluster --resource-group $aksclusterresourcegroup
+```
+
+If the extension is installed, then check if it has the right permissions on the storage account where backups are stored:
+
+```azurecli
+az role assignment list --all --assignee $(az k8s-extension show --name azure-aks-backup --cluster-name $targetakscluster --resource-group $aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv)
+```
+
+If the role isn't assigned, then you can assign the role by running the following command:
+
+```azurecli
+az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name $targetakscluster --resource-group $aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/$subscriptionId/resourceGroups/$storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/$storageaccount
+
+```
+
+If the Backup Extension isn't installed, run the following extension installation command with the storage account and blob container where backups are stored as inputs.
+
+```azurecli
+az k8s-extension create --name azure-aks-backup --extension-type microsoft.dataprotection.kubernetes --scope cluster --cluster-type managedClusters --cluster-name $targetakscluster --resource-group $aksclusterresourcegroup --release-train stable --configuration-settings blobContainer=$blobcontainer storageAccount=$storageaccount storageAccountResourceGroup=$storageaccountresourcegroup storageAccountSubscriptionId=$subscriptionId
+```
+
+Then assign the required role to the extension on the storage account by running the following command:
+
+```azurecli
+az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name $targetakscluster --resource-group $aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/$subscriptionId/resourceGroups/$storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/$storageaccount
+
+```
+
+## Check Trusted Access
+
+To check if Trusted Access is enabled between the Backup vault and Target AKS cluster, run the following command:
+
+```azurecli
+az aks trustedaccess rolebinding list --resource-group $aksclusterresourcegroup --cluster-name $targetakscluster
+```
+
+If it's not enabled, then run the following command to enable Trusted Access:
+
+```azurecli
+az aks trustedaccess rolebinding create --cluster-name $targetakscluster --name backuprolebinding --resource-group $aksclusterresourcegroup --roles Microsoft.DataProtection/backupVaults/backup-operator --source-resource-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/BackupVaults/$backupvault
+```
+
+## Restore to an AKS cluster
+
+### Fetch the relevant recovery point
+
+Fetch all instances associated with the AKS cluster and identify the relevant instance.
+
+```azurecli
+az dataprotection backup-instance list-from-resourcegraph --datasource-type AzureKubernetesService --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster
+
+```
+
+Once the instance is identified, fetch the relevant recovery point.
+
+```azurecli
+az dataprotection recovery-point list --backup-instance-name $backupinstancename --resource-group $backupvaultresourcegroup --vault-name $backupvault
+
+```
+
+### Prepare the restore request
+
+To prepare the restore configuration defining the items to be restored to the target AKS cluster, run the `az dataprotection backup-instance initialize-restoreconfig` command.
+
+```azurecli
+az dataprotection backup-instance initialize-restoreconfig --datasource-type AzureKubernetesService >restoreconfig.json
+
+{
+ "conflict_policy": "Skip",
+ "excluded_namespaces": null,
+ "excluded_resource_types": null,
+ "include_cluster_scope_resources": true,
+ "included_namespaces": null,
+ "included_resource_types": null,
+ "label_selectors": null,
+ "namespace_mappings": null,
+ "object_type": "KubernetesClusterRestoreCriteria",
+ "persistent_volume_restore_mode": "RestoreWithVolumeData"
+}
+
+```
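Before preparing the restore request, you can edit this file — for example, to map a backed-up namespace to a different name in the target cluster. A sketch with hypothetical namespace names, using a minimal stand-in for `restoreconfig.json` so the example is self-contained (`python3` assumed available):

```shell
# Minimal stand-in for restoreconfig.json (the real file has the full
# set of fields shown above).
cat > restoreconfig.json <<'EOF'
{
  "conflict_policy": "Skip",
  "namespace_mappings": null,
  "object_type": "KubernetesClusterRestoreCriteria",
  "persistent_volume_restore_mode": "RestoreWithVolumeData"
}
EOF
# Map the backed-up "prod" namespace to "prod-restored" in the target
# cluster (hypothetical names).
python3 - <<'EOF'
import json
with open("restoreconfig.json") as f:
    cfg = json.load(f)
cfg["namespace_mappings"] = {"prod": "prod-restored"}
with open("restoreconfig.json", "w") as f:
    json.dump(cfg, f, indent=2)
EOF
grep '"namespace_mappings"' -A 2 restoreconfig.json
```

Leaving `namespace_mappings` as `null` restores each namespace under its original name.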
+
+Now, prepare the restore request with all relevant details. If you're restoring the backup to the original cluster, then run the following command:
+
+```azurecli
+az dataprotection backup-instance restore initialize-for-item-recovery --datasource-type AzureKubernetesService --restore-location $region --source-datastore OperationalStore --recovery-point-id $recoverypointid --restore-configuration restoreconfig.json --backup-instance-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault/backupInstances/$backupinstanceid >restorerequestobject.json
+
+```
+
+If the Target AKS cluster for restore is different from the original cluster, then run the following command:
+
+```azurecli
+az dataprotection backup-instance restore initialize-for-data-recovery --datasource-type AzureKubernetesService --restore-location $region --source-datastore OperationalStore --recovery-point-id $recoverypointid --restore-configuration restoreconfig.json --target-resource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$targetakscluster >restorerequestobject.json
+
+```
+
+Now, you can update the JSON object as per your requirements, and then validate the object by running the following command:
+
+```azurecli
+az dataprotection backup-instance validate-for-restore --backup-instance-name $backupinstancename --resource-group $backupvaultresourcegroup --restore-request-object restorerequestobject.json --vault-name $backupvault
+
+```
+
+This command checks whether the AKS cluster and the Backup vault have the required permissions on each other and on the Snapshot resource group to perform the restore. If the validation fails due to missing permissions, you can assign them by running the following command:
+
+```azurecli
+az dataprotection backup-instance update-msi-permissions --datasource-type AzureKubernetesService --operation Restore --permissions-scope Resource --resource-group $backupvaultresourcegroup --vault-name $backupvault --restore-request-object restorerequestobject.json --snapshot-resource-group-id /subscriptions/$subscriptionId/resourceGroups/$snapshotresourcegroup
+
+```
+
+## Trigger the restore
+
+Once the role assignment is complete, you should validate the restore object once more. After that, you can trigger a restore operation by running the following command:
+
+```azurecli
+az dataprotection backup-instance restore trigger --restore-request-object restorerequestobject.json --ids /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault/backupInstances/$backupinstancename
+```
+
+>[!Note]
+>During the restore operation, the Backup vault and the AKS cluster need to have certain roles assigned to perform the restore:
+>
+>1. The *Target AKS cluster* should have the *Contributor* role on the *Snapshot Resource Group*.
+>2. The *User Identity* attached to the Backup Extension should have the *Storage Account Contributor* role on the *storage account* where backups are stored.
+>3. The *Backup vault* should have the *Reader* role on the *Target AKS cluster* and the *Snapshot Resource Group*.
+
+## Tracking jobs
+
+You can track the restore jobs by using the `az dataprotection job` command. You can list all jobs and fetch the details of a particular job.
+
+You can also use Resource Graph to track all jobs across all subscriptions, resource groups, and Backup vaults. Use the `az dataprotection job list-from-resourcegraph` command to get the relevant job.
+
+```azurecli
+az dataprotection job list-from-resourcegraph --datasource-type AzureKubernetesService --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster --operation Restore
+```
+
+## Next steps
+
+- [Manage Azure Kubernetes Service cluster backups (preview)](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup (preview)](azure-kubernetes-service-cluster-backup-concept.md)
+
backup Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
backup Troubleshoot Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/troubleshoot-azure-files.md
Retry the registration. If the problem persists, contact support.
- Ensure that the file share you're looking to protect hasn't been deleted. - Ensure that the Storage Account is a supported storage account for file share backup. You can refer to the [Support matrix for Azure file share backup](azure-file-share-support-matrix.md) to find supported Storage Accounts. - Check if the file share is already protected in the same Recovery Services vault.-- Check the Network Routing setting of storage account to ensure that routing preference is set as Microsoft network routing .
+- Check the Network Routing setting of storage account to ensure that routing preference is set as Microsoft network routing.
### Backup file share configuration (or the protection policy configuration) is failing
Error Message: Storage accounts with Internet routing configuration are not supp
Ensure that the routing preference set for the storage account hosting backed up file share is Microsoft network routing.
-### FileshareBackupFailedWithAzureRpRequestThrottling/ FileshareRestoreFailedWithAzureRpRequestThrottling- File share backup or restore failed due to storage service throttling. This may be because the storage service is busy processing other requests for the given storage account
+### FileshareBackupFailedWithAzureRpRequestThrottling/ FileshareRestoreFailedWithAzureRpRequestThrottling- File share backup or restore operation failed due to storage service throttling. This may be because the storage service is busy processing other requests for the given storage account
Error Code: FileshareBackupFailedWithAzureRpRequestThrottling/ FileshareRestoreFailedWithAzureRpRequestThrottling
-Error Message: File share backup or restore failed due to storage service throttling. This may be because the storage service is busy processing other requests for the given storage account.
+Error Message: File share backup or restore operation failed due to storage service throttling. This may be because the storage service is busy processing other requests for the given storage account.
Try the backup/restore operation at a later time.
Recommended Actions: Ensure that the following configurations in the storage acc
- Ensure that the storage keys aren't rotated during the restore. - Check the network configuration on the storage account(s) and ensure that it allows the Microsoft first party services.
+ :::image type="content" source="./media/troubleshoot-azure-files/storage-account-network-configuration.png" alt-text="Screenshot shows the required networking details in a storage account." lightbox="./media/troubleshoot-azure-files/storage-account-network-configuration.png":::
+- Ensure that the target storage account has the following configuration: *Permitted scope for copy operations* is set to *From storage accounts in the same Azure AD tenant*.
-- Ensure that the target storage account has the following configuration: *Permitted scope for copy operations* are set to *From storage accounts in the same Azure AD tenant*.
+ :::image type="content" source="./media/troubleshoot-azure-files/target-storage-account-configuration.png" alt-text="Screenshot shows the target storage account configuration." lightbox="./media/troubleshoot-azure-files/target-storage-account-configuration.png":::
## Common modify policy errors
batch Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Now you can use your private AKS cluster with Chaos Studio. To learn how to inst
az network vnet subnet create -g MyResourceGroup --vnet-name MyVnetName --name ChaosStudioRelaySubnet --address-prefixes "10.0.0.0/28" ```
-1. When you enable targets for the AKS cluster so that you can use it in Chaos experiments, set the `properties.subnets.containerSubnetId` and `properties.subnets.relaySubnetId` properties by using the new subnets you created in step 3.
+1. When you enable targets for the AKS cluster so that you can use it in chaos experiments, set the `properties.subnets.containerSubnetId` and `properties.subnets.relaySubnetId` properties by using the new subnets you created in step 3.
Replace `$SUBSCRIPTION_ID` with your Azure subscription ID. Replace `$RESOURCE_GROUP` and `$AKS_CLUSTER` with the resource group name and your AKS cluster resource name. Also, replace `$AKS_INFRA_RESOURCE_GROUP` and `$AKS_VNET` with your AKS infrastructure resource group name and virtual network name. Replace `$URL` with the corresponding `https://management.azure.com/` URL used for onboarding the target.
cognitive-services Document Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/document-sdk-overview.md
+
+ Title: Azure Document Translation SDKs
+
+description: Azure Document Translation software development kits (SDKs) expose Document Translation features and capabilities, using the C#, Java, JavaScript, and Python programming languages.
+ Last updated : 06/05/2023
+recommendations: false
++
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD051 -->
+
+# Azure Document Translation SDK
+
+Azure Document Translation is a cloud-based REST API feature of the Azure Translator service. The Document Translation API enables quick and accurate source-to-target whole document translations, asynchronously, in supported languages and various file formats. The Document Translation software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Translation REST API capabilities into your applications.
+
+## Supported languages
+
+Document Translation SDK supports the following programming languages:
+
+| Language → SDK version | Package|Client library| Supported API version|
+|:-:|:-|:-|:-|
+|[.NET/C# → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Document/1.0.0/index.html)| [NuGet](https://www.nuget.org/packages/Azure.AI.Translation.Document) | [Azure SDK for .NET](/dotnet/api/overview/azure/AI.Translation.Document-readme?view=azure-dotnet&preserve-view=true) | Document Translation v1.0|
+|[Python → 1.0.0](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-translation-document/1.0.0/index.html)|[PyPi](https://pypi.org/project/azure-ai-translation-document/1.0.0/)|[Azure SDK for Python](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true)|Document Translation v1.0|
+
+## Changelog and release history
+
+This section provides a version-based description of Document Translation feature and capability releases, changes, updates, and enhancements.
+
+### [C#/.NET](#tab/csharp)
+
+**Version 1.0.0 (GA)** <br>
+**2022-06-07**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/CHANGELOG.md)
+
+##### [README](https://github.com/Azure/azure-sdk-for-net/blob/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/README.md)
+
+##### [Samples](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/samples)
+
+### [Python](#tab/python)
+
+**Version 1.0.0 (GA)** <br>
+**2022-06-07**
+
+##### [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/CHANGELOG.md)
+
+##### [README](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/README.md)
+
+##### [Samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/samples)
+++
+## Use Document Translation SDK in your applications
+
+The Document Translation SDK enables the use and management of the Translation service in your application. The SDK builds on the underlying Document Translation REST APIs for use within your programming language paradigm. Choose your preferred programming language:
+
+### 1. Install the SDK client library
+
+### [C#/.NET](#tab/csharp)
+
+```dotnetcli
+dotnet add package Azure.AI.Translation.Document --version 1.0.0
+```
+
+```powershell
+Install-Package Azure.AI.Translation.Document -Version 1.0.0
+```
+
+### [Python](#tab/python)
+
+```python
+pip install azure-ai-translation-document==1.0.0
+```
+++
+### 2. Import the SDK client library into your application
+
+### [C#/.NET](#tab/csharp)
+
+```csharp
+using System;
+using Azure.Core;
+using Azure.AI.Translation.Document;
+```
+
+### [Python](#tab/python)
+
+```python
+from azure.ai.translation.document import DocumentTranslationClient
+from azure.core.credentials import AzureKeyCredential
+```
+++
+### 3. Authenticate the client
+
+### [C#/.NET](#tab/csharp)
+
+Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
+
+```csharp
+private static readonly string endpoint = "<your-custom-endpoint>";
+private static readonly string key = "<your-key>";
+
+DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(key));
+
+```
+
+### [Python](#tab/python)
+
+Create an instance of the `DocumentTranslationClient` object to interact with the Document Translation SDK, and then call methods on that client object to interact with the service. The `DocumentTranslationClient` is the primary interface for using the Document Translation client library. It provides both synchronous and asynchronous methods to perform operations.
+
+```python
+endpoint = "<endpoint>"
+key = "<apiKey>"
+
+client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
+
+```
+++
+### 4. Build your application
+
+### [C#/.NET](#tab/csharp)
+
+The Document Translation service requires that you upload your files to an Azure Blob Storage source container (sourceUri), provide a target container where the translated documents can be written (targetUri), and include the target language code (targetLanguage).
+
+```csharp
+
+Uri sourceUri = new Uri("<your-source-container-url>");
+Uri targetUri = new Uri("<your-target-container-url>");
+string targetLanguage = "<target-language-code>";
+
+DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, targetLanguage);
+```
+
+### [Python](#tab/python)
+
+The Document Translation service requires that you upload your files to an Azure Blob Storage source container (sourceUri), provide a target container where the translated documents can be written (targetUri), and include the target language code (targetLanguage).
+
+```python
+sourceUrl = "<your-source-container-url>"
+targetUrl = "<your-target-container-url>"
+targetLanguage = "<target-language-code>"
+
+poller = client.begin_translation(sourceUrl, targetUrl, targetLanguage)
+result = poller.result()
+
+```
+++
+## Help options
+
+The [Microsoft Q&A](/answers/tags/132/azure-translator) and [Stack Overflow](https://stackoverflow.com/questions/tagged/microsoft-translator) forums are available for the developer community to ask and answer questions about Azure Translator and other services. Microsoft monitors the forums and replies to questions that the community has yet to answer.
+
+> [!TIP]
+> To make sure that we see your Microsoft Q&A question, tag it with **`microsoft-translator`**.
+> To make sure that we see your Stack Overflow question, tag it with **`Azure Translator`**.
+>
+
+## Next steps
+
+>[!div class="nextstepaction"]
+> [**Document Translation SDK quickstart**](quickstarts/document-translation-sdk.md)
+
+>[!div class="nextstepaction"]
+> [**Document Translation v1.0 REST API reference**](reference/rest-api-guide.md)
cognitive-services Use Client Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/use-client-sdks.md
- Title: "Document Translation C#/.NET or Python client library"-
-description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
------- Previously updated : 12/17/2022---
-# Document Translation client-library SDKs
-<!-- markdownlint-disable MD024 -->
-<!-- markdownlint-disable MD001 -->
-[Document Translation](../overview.md) is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service. You can translate entire documents or process batch document translations in various file formats while preserving original document structure and format. In this article, you learn how to use the Document Translation service C#/.NET and Python client libraries. For the REST API, see our [Quickstart](../quickstarts/get-started-with-rest-api.md) guide.
-
-## Prerequisites
-
-To get started, you need:
-
-* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-
-* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource). Choose **Global** unless your business or application requires a specific region. Select the **Standard S1** pricing tier to get started (document translation isn't supported for the free tier).
-
-* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure Blob Storage account for your source and target files:
-
- * **Source container**. This container is where you upload your files for translation (required).
- * **Target container**. This container is where your translated files are stored (required).
-
-* You also need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` must include a SAS token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](create-sas-tokens.md).
-
- * Your **source** container or blob must have designated **read** and **list** access.
- * Your **target** container or blob must have designated **write** and **list** access.
-
-For more information, *see* [Create SAS tokens](create-sas-tokens.md).
-
-## Client libraries
-
-### [C#/.NET v1.0.0 (GA)](#tab/csharp)
-
-| [Package (NuGet)](https://www.nuget.org/packages/Azure.AI.Translation.Document)| [Client library](/dotnet/api/overview/azure/AI.Translation.Document-readme) | [REST API](../reference/rest-api-guide.md) | [Product documentation](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.Translation.Document/1.0.0/index.html) | [Samples](https://github.com/Azure/azure-sdk-for-net/tree/Azure.AI.Translation.Document_1.0.0/sdk/translation/Azure.AI.Translation.Document/samples) |
-
-### Set up your project
-
-In a console window (such as cmd, PowerShell, or Bash), use the `dotnet new` command to create a new console app with the name `batch-document-translation`. This command creates a simple "Hello World" C# project with a single source file: *program.cs*.
-
-```console
-dotnet new console -n batch-document-translation
-```
-
-Change your directory to the newly created app folder. Build your application with the following command:
-
-```console
-dotnet build
-```
-
-The build output should contain no warnings or errors.
-
-```console
-...
-Build succeeded.
- 0 Warning(s)
- 0 Error(s)
-...
-```
-
-### Install the client library
-
-Within the application directory, install the Document Translation client library for .NET using one of the following methods:
-
-#### **.NET CLI**
-
-```console
-dotnet add package Azure.AI.Translation.Document --version 1.0.0
-```
-
-#### **NuGet Package Manager**
-
-```console
-Install-Package Azure.AI.Translation.Document -Version 1.0.0
-```
-
-#### **NuGet PackageReference**
-
-```xml
-<ItemGroup>
- <!-- ... -->
-<PackageReference Include="Azure.AI.Translation.Document" Version="1.0.0" />
- <!-- ... -->
-</ItemGroup>
-```
-
-From the project directory, open the Program.cs file in your preferred editor or IDE. Add the following using directives:
-
-```csharp
-using Azure;
-using Azure.AI.Translation.Document;
-
-using System;
-using System.Threading;
-using System.Threading.Tasks;
-```
-
-In the application's **Program** class, create variables for your key and custom endpoint. For more information, *see* [Retrieve your key and custom domain endpoint](../quickstarts/get-started-with-rest-api.md#retrieve-your-key-and-document-translation-endpoint).
-
-```csharp
-private static readonly string endpoint = "<your custom endpoint>";
-private static readonly string key = "<your key>";
-```
-
-### Translate a document or batch files
-
-* To start a translation operation for one or more documents in a single blob container, you call the `StartTranslationAsync` method.
-
-* To call `StartTranslationAsync`, you need to initialize a `DocumentTranslationInput` object that contains the following parameters:
-
-* **sourceUri**. The SAS URI for the source container containing documents to be translated.
-* **targetUri**. The SAS URI for the target container to which the translated documents are written.
-* **targetLanguageCode**. The language code for the translated documents. You can find language codes on our [Language support](../../language-support.md) page.
-
-```csharp
-
-public async Task StartTranslationAsync() {
-    Uri sourceUri = new Uri("<sourceUrl>");
-    Uri targetUri = new Uri("<targetUrl>");
-
-    DocumentTranslationClient client = new DocumentTranslationClient(new Uri(endpoint), new AzureKeyCredential(key));
-
-    DocumentTranslationInput input = new DocumentTranslationInput(sourceUri, targetUri, "es");
-
-    DocumentTranslationOperation operation = await client.StartTranslationAsync(input);
-
-    await operation.WaitForCompletionAsync();
-
-    Console.WriteLine($" Status: {operation.Status}");
-    Console.WriteLine($" Created on: {operation.CreatedOn}");
-    Console.WriteLine($" Last modified: {operation.LastModified}");
-    Console.WriteLine($" Total documents: {operation.DocumentsTotal}");
-    Console.WriteLine($" Succeeded: {operation.DocumentsSucceeded}");
-    Console.WriteLine($" Failed: {operation.DocumentsFailed}");
-    Console.WriteLine($" In Progress: {operation.DocumentsInProgress}");
-    Console.WriteLine($" Not started: {operation.DocumentsNotStarted}");
-
-    await foreach (DocumentStatusResult document in operation.Value) {
-        Console.WriteLine($"Document with Id: {document.DocumentId}");
-        Console.WriteLine($" Status: {document.Status}");
-        if (document.Status == TranslationStatus.Succeeded) {
-            Console.WriteLine($" Translated Document Uri: {document.TranslatedDocumentUri}");
-            Console.WriteLine($" Translated to language: {document.TranslatedTo}.");
-            Console.WriteLine($" Document source Uri: {document.SourceDocumentUri}");
-        }
-        else {
-            Console.WriteLine($" Error Code: {document.Error.ErrorCode}");
-            Console.WriteLine($" Message: {document.Error.Message}");
-        }
-    }
-}
-```
-
-That's it! You've created a program to translate documents in a storage container using the .NET client library.
-
-### [Python v1.0.0 (GA)](#tab/python)
-
-| [Package (PyPI)](https://pypi.org/project/azure-ai-translation-document/) | [Client library](/python/api/overview/azure/ai-translation-document-readme?view=azure-python&preserve-view=true) | [REST API](../reference/rest-api-guide.md) | [Product documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-translation-document/latest/azure.ai.translation.document.html) | [Samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-translation-document_1.0.0/sdk/translation/azure-ai-translation-document/samples) |
-
-### Set up your project
-
-### Install the client library
-
-If you haven't done so, install [Python](https://www.python.org/downloads/), and then install the latest version of the Translator client library:
-
-```console
-pip install azure-ai-translation-document==1.0.0
-```
-
-### Create your application
-
-Create a new Python application in your preferred editor or IDE. Then import the following libraries.
-
-```python
-import os
-from azure.core.credentials import AzureKeyCredential
-from azure.ai.translation.document import DocumentTranslationClient
-```
-
-Create variables for your resource key, custom endpoint, sourceUrl, and targetUrl. For more information, *see* [Retrieve your key and custom domain endpoint](../quickstarts/get-started-with-rest-api.md#retrieve-your-key-and-document-translation-endpoint).
-
-```python
-key = "<your-key>"
-endpoint = "<your-custom-endpoint>"
-sourceUrl = "<your-container-sourceUrl>"
-targetUrl = "<your-container-targetUrl>"
-```
-
-### Translate a document or batch files
-
-```python
-client = DocumentTranslationClient(endpoint, AzureKeyCredential(key))
-
-poller = client.begin_translation(sourceUrl, targetUrl, "fr")
-result = poller.result()
-
-print("Status: {}".format(poller.status()))
-print("Created on: {}".format(poller.details.created_on))
-print("Last updated on: {}".format(poller.details.last_updated_on))
-print("Total number of translations on documents: {}".format(poller.details.documents_total_count))
-
-print("\nOf total documents...")
-print("{} failed".format(poller.details.documents_failed_count))
-print("{} succeeded".format(poller.details.documents_succeeded_count))
-
-for document in result:
- print("Document ID: {}".format(document.id))
- print("Document status: {}".format(document.status))
- if document.status == "Succeeded":
- print("Source document location: {}".format(document.source_document_url))
- print("Translated document location: {}".format(document.translated_document_url))
- print("Translated to language: {}\n".format(document.translated_to))
- else:
- print("Error Code: {}, Message: {}\n".format(document.error.code, document.error.message))
-```
-
-That's it! You've created a program to translate documents in a storage container using the Python client library.
---
-### Next step
-
-> [!div class="nextstepaction"]
- > [**Try the REST API quickstart**](../quickstarts/get-started-with-rest-api.md)
cognitive-services Document Translation Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-rest-api.md
+
+ Title: Get started with Document Translation
+description: "How to create a Document Translation service using C#, Go, Java, Node.js, or Python programming languages and the REST API"
++++++ Last updated : 06/02/2023+
+recommendations: false
+ms.devlang: csharp, golang, java, javascript, python
+
+zone_pivot_groups: programming-languages-set-translator
++
+# Get started with Document Translation
+
+Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+## Prerequisites
+
+> [!IMPORTANT]
+>
+> * Java and JavaScript Document Translation SDKs are currently available in **public preview**. Features, approaches, and processes may change prior to the general availability (GA) release, based on user feedback.
+> * C# and Python SDKs are general availability (GA) releases, ready for use in your production applications.
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+>
+> * Document Translation is **only** supported in the S1 Standard Service Plan (Pay-as-you-go) or in the D3 Volume Discount Plan. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+>
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You also need to [create containers](#create-azure-blob-storage-containers) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource):
+
+ **Complete the Translator project and instance details fields as follows:**
+
+ 1. **Subscription**. Select one of your available Azure subscriptions.
+
+ 1. **Resource Group**. You can create a new resource group or add your resource to a pre-existing resource group that shares the same lifecycle, permissions, and policies.
+
+ 1. **Resource Region**. Choose **Global** unless your business or application requires a specific region. If you're planning on using a [system-assigned managed identity](../how-to-guides/create-use-managed-identities.md) for authentication, choose a **geographic** region like **West US**.
+
+ 1. **Name**. Enter the name you have chosen for your resource. The name you choose must be unique within Azure.
+
+ > [!NOTE]
+ > Document Translation requires a custom domain endpoint. The value that you enter in the Name field will be the custom domain name parameter for your endpoint.
+
+ 1. **Pricing tier**. Document Translation isn't supported in the free tier. **Select Standard S1 to try the service**.
+
+ 1. Select **Review + Create**.
+
+ 1. Review the service terms and select **Create** to deploy your resource.
+
+ 1. After your resource has successfully deployed, select **Go to resource**.
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue with the prerequisites.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Prerequisites) -->
+
+### Retrieve your key and document translation endpoint
+
+Requests to the Translator service require a read-only key and a custom endpoint to authenticate access. The custom domain endpoint is a URL formatted with your resource name, hostname, and Translator subdirectories and is available in the Azure portal.
+
+1. If you've created a new resource, after it deploys, select **Go to resource**. If you have an existing Document Translation resource, navigate directly to your resource page.
+
+1. In the left rail, under *Resource Management*, select **Keys and Endpoint**.
+
+1. Copy and paste your **`key`** and **`document translation endpoint`** in a convenient location, such as *Microsoft Notepad*. Only one key is necessary to make an API call.
+
+1. You paste your **`key`** and **`document translation endpoint`** into the code samples to authenticate your request to the Document Translation service.
+
+ :::image type="content" source="../media/document-translation-key-endpoint.png" alt-text="Screenshot showing the get your key field in Azure portal.":::
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue retrieving my key and endpoint.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Retrieve-your-keys-and-endpoint) -->
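As a minimal sketch, the key and endpoint you copied plug into a request like this. The helper names are hypothetical; the route shown is the Document Translation v1.0 batches path.

```python
def build_document_translation_headers(key: str) -> dict:
    """Build the request headers: only the resource key is needed to
    authenticate against a single-service Translator resource."""
    return {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }

def batches_url(endpoint: str) -> str:
    """Append the batch-translation route to the custom domain endpoint."""
    return endpoint.rstrip("/") + "/translator/text/batch/v1.0/batches"
```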
+
+## Create Azure Blob Storage containers
+
+You need to [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM) for source and target files.
+
+* **Source container**. This container is where you upload your files for translation (required).
+* **Target container**. This container is where your translated files are stored (required).
+
+### **Required authentication**
+
+The `sourceUrl`, `targetUrl`, and optional `glossaryUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs. *See* [**Create SAS tokens for Document Translation process**](../how-to-guides/create-sas-tokens.md).
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+* Your **glossary** blob must have designated **read** and **list** access.
+
+> [!TIP]
+>
+> * If you're translating **multiple** files (blobs) in an operation, **delegate SAS access at the container level**.
+> * If you're translating a **single** file (blob) in an operation, **delegate SAS access at the blob level**.
+> * As an alternative to SAS tokens, you can use a [**system-assigned managed identity**](../how-to-guides/create-use-managed-identities.md) for authentication.
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue creating blob storage containers with authentication.](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?Pillar=Language&Product=Document-translation&Page=quickstart&Section=Create-blob-storage-containers) -->
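The authentication rules above can be sanity-checked in code. This Python sketch (the helper names are hypothetical) verifies that a container URL carries a SAS token as a query string before building a minimal v1.0 batch request body:

```python
from urllib.parse import parse_qs, urlparse

def has_sas_token(url: str) -> bool:
    """A SAS token rides on the URL as a query string; the `sig` parameter
    is its signature component, so its presence is a reasonable smoke test."""
    return "sig" in parse_qs(urlparse(url).query)

def build_batch_request(source_url: str, target_url: str, language: str) -> dict:
    """Build a minimal Document Translation batch request body."""
    if not (has_sas_token(source_url) and has_sas_token(target_url)):
        raise ValueError("source and target container URLs need SAS tokens")
    return {
        "inputs": [{
            "source": {"sourceUrl": source_url},
            "targets": [{"targetUrl": target_url, "language": language}],
        }]
    }
```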
+
+### Sample document
+
+For this project, you need a **source document** uploaded to your **source container**. You can download our [document translation sample document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/Translator/document-translation-sample.docx) for this quickstart. The source language is English.
+++++++++++++
+That's it, congratulations! In this quickstart, you used Document Translation to translate a document while preserving its original structure and data format.
+
+## Next steps
++
cognitive-services Document Translation Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/quickstarts/document-translation-sdk.md
+
+ Title: "Document Translation C#/.NET or Python client library"
+
+description: Use the Translator C#/.NET or Python client library (SDK) for cloud-based batch document translation service and process
+++++++ Last updated : 06/14/2023+
+zone_pivot_groups: programming-languages-document-sdk
++
+# Document Translation client-library SDKs
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD001 -->
+
+Document Translation is a cloud-based feature of the [Azure Translator](../../translator-overview.md) service that asynchronously translates whole documents in [supported languages](../../language-support.md) and various [file formats](../overview.md#supported-document-formats). In this quickstart, learn to use Document Translation with a programming language of your choice to translate a source document into a target language while preserving structure and text formatting.
+
+> [!IMPORTANT]
+>
+> * Document Translation is currently supported in the Translator (single-service) resource only, and is **not** included in the Cognitive Services (multi-service) resource.
+>
+> * Document Translation is supported in paid tiers. The Language Studio only supports the S1 or D3 instance tiers. We suggest that you select Standard S1 to try Document Translation. *See* [Cognitive Services pricing: Translator](https://azure.microsoft.com/pricing/details/cognitive-services/translator/).
+
+## Prerequisites
+
+To get started, you need:
+
+* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+
+* A [**single-service Translator resource**](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) (**not** a multi-service Cognitive Services resource). If you're planning on using the Document Translation feature with [managed identity authorization](../how-to-guides/create-use-managed-identities.md), choose a geographic region such as **East US**. Select the **Standard S1** or **D3** pricing tier.
+
+* An [**Azure Blob Storage account**](https://portal.azure.com/#create/Microsoft.StorageAccount-ARM). You'll [**create containers**](../../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) in your Azure Blob Storage account for your source and target files:
+
+ * **Source container**. This container is where you upload your files for translation (required).
+ * **Target container**. This container is where your translated files are stored (required).
+
+### Storage container authorization
+
+You can choose one of the following options to authorize access to your Translator resource.
+
+**✔️ Managed Identity**. A managed identity is a service principal that creates an Azure Active Directory (Azure AD) identity and specific permissions for an Azure managed resource. Managed identities enable you to run your Translator application without having to embed credentials in your code. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your source and target URLs.
+
+To learn more, *see* [Managed identities for Document Translation](../how-to-guides/create-use-managed-identities.md).
+
+ :::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
+
+**✔️ Shared Access Signature (SAS)**. A shared access signature is a URL that grants restricted access for a specified period of time to your Translator service. To use this method, you need to create Shared Access Signature (SAS) tokens for your source and target containers. The `sourceUrl` and `targetUrl` must include a Shared Access Signature (SAS) token, appended as a query string. The token can be assigned to your container or specific blobs.
+
+* Your **source** container or blob must have designated **read** and **list** access.
+* Your **target** container or blob must have designated **write** and **list** access.
+
+To learn more, *see* [**Create SAS tokens**](../how-to-guides/create-sas-tokens.md).
+
+ :::image type="content" source="../media/sas-url-token.png" alt-text="Screenshot of a resource URI with a SAS token.":::
+++++
+### Next step
+
+> [!div class="nextstepaction"]
+> [**Learn more about Document Translation operations**](../../reference/rest-api-guide.md)
cognitive-services Quickstart Text Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-text-rest-api.md
+
+ Title: "Quickstart: Azure Cognitive Services Translator REST APIs"
+
+description: "Learn to translate text with the Translator service REST APIs. Examples are provided in C#, Go, Java, JavaScript and Python."
++++++ Last updated : 06/02/2023+
+ms.devlang: csharp, golang, java, javascript, python
++
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Quickstart: Azure Cognitive Services Translator REST APIs
+
+Try the latest version of Azure Translator. In this quickstart, get started using the Translator service to [translate text](reference/v3-0-translate.md) using a programming language of your choice or the REST API. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and upgrading to a paid tier later for production.
+
+## Prerequisites
+
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
+
+ * You need the key and endpoint from the resource to connect your application to the Translator service. You paste your key and endpoint into the code later in the quickstart. You can find these values on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+ > [!NOTE]
+ >
+ > * For this quickstart, we recommend that you use a Translator text single-service global resource.
+ > * With a single-service global resource you'll include one authorization header (**Ocp-Apim-Subscription-key**) with the REST API request. The value for Ocp-Apim-Subscription-key is your Azure secret key for your Translator Text subscription.
+ > * If you choose to use the multi-service Cognitive Services or regional Translator resource, two authentication headers are required: **Ocp-Apim-Subscription-Key** and **Ocp-Apim-Subscription-Region**. The value for Ocp-Apim-Subscription-Region is the region associated with your subscription.
+ > * For more information on how to use the **Ocp-Apim-Subscription-Region** header, _see_ [Text Translator REST API headers](translator-text-apis.md).
+
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=prerequisites)
+-->
+
+## Headers
+
+To call the Translator service via the [REST API](reference/rest-api-guide.md), you need to include the following headers with each request. The sample code for each programming language includes these headers for you.
+
+For more information on Translator authentication options, _see_ the [Translator v3 reference](./reference/v3-0-reference.md#authentication) guide.
+
+|Header|Value|Condition|
+|---|---|---|
+|**Ocp-Apim-Subscription-Key**|Your Translator service key from the Azure portal.|&bullet; ***Required***|
+|**Ocp-Apim-Subscription-Region**|The region where your resource was created.|&bullet; ***Required*** when using a multi-service Cognitive Services or regional (geographic) resource like **West US**.</br>&bullet; ***Optional*** when using a single-service global Translator resource.|
+|**Content-Type**|The content type of the payload. The accepted value is **application/json; charset=UTF-8**.|&bullet; ***Required***|
+|**Content-Length**|The length of the request body.|&bullet; ***Optional***|
+
+> [!IMPORTANT]
+>
+> Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../key-vault/general/overview.md). For more information, _see_ the Cognitive Services [security](../cognitive-services-security.md) article.
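The headers in the table above can be assembled in a few lines before being attached to any Translator REST request. The following sketch uses Python; the key and region values are placeholders that you replace with values from your own resource, and the `requests` call shown in the comment is one possible way to send them.

```python
import uuid

# Placeholder values -- replace with the key and region from your own
# Azure portal Translator resource.
key = "<your-translator-key>"
location = "<your-resource-location>"  # only required for multi-service or regional resources

headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Ocp-Apim-Subscription-Region": location,
    "Content-Type": "application/json; charset=UTF-8",
    "X-ClientTraceId": str(uuid.uuid4()),  # optional; helps correlate requests in logs
}

# These same headers are attached to every Translator REST call, e.g. with
# the `requests` library:
#   requests.post(endpoint + "/translate", params=params, headers=headers, json=body)
print(sorted(headers))
```

The `X-ClientTraceId` header is optional but useful when you need to trace a specific request in support scenarios.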
+
+## Translate text
+
+The core operation of the Translator service is translating text. In this quickstart, you build a request using a programming language of your choice that takes a single source (`from`) and provides two outputs (`to`). Then we review some parameters that can be used to adjust both the request and the response.
+
+For detailed information regarding Azure Translator Service request limits, *see* [**Text translation request limits**](request-limits.md#text-translation).
+
+### [C#: Visual Studio](#tab/csharp)
+
+### Set up your Visual Studio project
+
+1. Make sure you have the current version of [Visual Studio IDE](https://visualstudio.microsoft.com/vs/).
+
+ > [!TIP]
+ >
+ > If you're new to Visual Studio, try the [Introduction to Visual Studio](/training/modules/go-get-started/) Learn module.
+
+1. Open Visual Studio.
+
+1. On the Start page, choose **Create a new project**.
+
+ :::image type="content" source="media/quickstarts/start-window.png" alt-text="Screenshot: Visual Studio start window.":::
+
+1. On the **Create a new project** page, enter **console** in the search box. Choose the **Console Application** template, then choose **Next**.
+
+ :::image type="content" source="media/quickstarts/create-new-project.png" alt-text="Screenshot: Visual Studio's create new project page.":::
+
+1. In the **Configure your new project** dialog window, enter `translator_quickstart` in the Project name box. Leave the "Place solution and project in the same directory" checkbox **unchecked** and select **Next**.
+
+ :::image type="content" source="media/quickstarts/configure-new-project.png" alt-text="Screenshot: Visual Studio's configure new project dialog window.":::
+
+1. In the **Additional information** dialog window, make sure **.NET 6.0 (Long-term support)** is selected. Leave the "Don't use top-level statements" checkbox **unchecked** and select **Create**.
+
+ :::image type="content" source="media/quickstarts/additional-information.png" alt-text="Screenshot: Visual Studio's additional information dialog window.":::
+
+### Install the Newtonsoft.json package with NuGet
+
+1. Right-click on your translator_quickstart project and select **Manage NuGet Packages...** .
+
+ :::image type="content" source="media/quickstarts/manage-nuget.png" alt-text="Screenshot of the NuGet package search box.":::
+
+1. Select the Browse tab and type Newtonsoft.json.
+
+ :::image type="content" source="media/quickstarts/newtonsoft.png" alt-text="Screenshot of the NuGet package install window.":::
+
+1. Select **Install** from the package manager window on the right to add the package to your project.
+
+ :::image type="content" source="media/quickstarts/install-newtonsoft.png" alt-text="Screenshot of the NuGet package install button.":::
+<!-- checked -->
+<!-- [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=set-up-your-visual-studio-project) -->
+
+### Build your C# application
+
+> [!NOTE]
+>
+> * Starting with .NET 6, new projects using the `console` template generate a new program style that differs from previous versions.
+> * The new output uses recent C# features that simplify the code you need to write.
+> * When you use the newer version, you only need to write the body of the `Main` method. You don't need to include top-level statements, global using directives, or implicit using directives.
+> * For more information, _see_ [**New C# templates generate top-level statements**](/dotnet/core/tutorials/top-level-templates).
+
+1. Open the **Program.cs** file.
+
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code sample into your application's Program.cs file. Make sure you update the key variable with the value from your Azure portal Translator instance:
+
+```csharp
+using System.Text;
+using Newtonsoft.Json;
+
+class Program
+{
+ private static readonly string key = "<your-translator-key>";
+ private static readonly string endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static readonly string location = "<YOUR-RESOURCE-LOCATION>";
+
+ static async Task Main(string[] args)
+ {
+ // Input and output languages are defined as parameters.
+ string route = "/translate?api-version=3.0&from=en&to=fr&to=zu";
+ string textToTranslate = "I would really like to drive your car around the block a few times!";
+ object[] body = new object[] { new { Text = textToTranslate } };
+ var requestBody = JsonConvert.SerializeObject(body);
+
+ using (var client = new HttpClient())
+ using (var request = new HttpRequestMessage())
+ {
+ // Build the request.
+ request.Method = HttpMethod.Post;
+ request.RequestUri = new Uri(endpoint + route);
+ request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
+ request.Headers.Add("Ocp-Apim-Subscription-Key", key);
+ // location required if you're using a multi-service or regional (not global) resource.
+ request.Headers.Add("Ocp-Apim-Subscription-Region", location);
+
+ // Send the request and get response.
+ HttpResponseMessage response = await client.SendAsync(request).ConfigureAwait(false);
+ // Read response as a string.
+ string result = await response.Content.ReadAsStringAsync();
+ Console.WriteLine(result);
+ }
+ }
+}
+
+```
+<!-- checked -->
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=build-your-c#-application) -->
+
+### Run your C# application
+
+Once you've added a code sample to your application, choose the green **start button** next to translator_quickstart to build and run your program, or press **F5**.
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Csharp&Product=Translator&Page=quickstart-translator&Section=run-your-c#-application) -->
+
+### [Go](#tab/go)
+
+### Set up your Go environment
+
+You can use any text editor to write Go applications. We recommend using the latest version of [Visual Studio Code and the Go extension](/azure/developer/go/configure-visual-studio-code).
+
+> [!TIP]
+>
+> If you're new to Go, try the [Get started with Go](/training/modules/go-get-started/) Learn module.
+
+1. If you haven't done so already, [download and install Go](https://go.dev/doc/install).
+
+ * Download the Go version for your operating system.
+ * Once the download is complete, run the installer.
+ * Open a command prompt and enter the following to confirm Go was installed:
+
+ ```console
+ go version
+ ```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=set-up-your-go-environment) -->
+
+### Build your Go application
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-app**, and navigate to it.
+
+1. Create a new Go file named **translation.go** in the **translator-app** directory.
+
+1. Copy and paste the provided code sample into your **translation.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance:
+
+```go
+package main
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "log"
+ "net/http"
+ "net/url"
+)
+
+func main() {
+ key := "<YOUR-TRANSLATOR-KEY>"
+    endpoint := "https://api.cognitive.microsofttranslator.com"
+ uri := endpoint + "/translate?api-version=3.0"
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ location := "<YOUR-RESOURCE-LOCATION>"
+
+ // Build the request URL. See: https://go.dev/pkg/net/url/#example_URL_Parse
+ u, _ := url.Parse(uri)
+ q := u.Query()
+ q.Add("from", "en")
+ q.Add("to", "fr")
+ q.Add("to", "zu")
+ u.RawQuery = q.Encode()
+
+ // Create an anonymous struct for your request body and encode it to JSON
+ body := []struct {
+ Text string
+ }{
+ {Text: "I would really like to drive your car around the block a few times."},
+ }
+ b, _ := json.Marshal(body)
+
+ // Build the HTTP POST request
+ req, err := http.NewRequest("POST", u.String(), bytes.NewBuffer(b))
+ if err != nil {
+ log.Fatal(err)
+ }
+ // Add required headers to the request
+ req.Header.Add("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ req.Header.Add("Ocp-Apim-Subscription-Region", location)
+ req.Header.Add("Content-Type", "application/json")
+
+ // Call the Translator API
+ res, err := http.DefaultClient.Do(req)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Decode the JSON response
+ var result interface{}
+ if err := json.NewDecoder(res.Body).Decode(&result); err != nil {
+ log.Fatal(err)
+ }
+ // Format and print the response to terminal
+ prettyJSON, _ := json.MarshalIndent(result, "", " ")
+ fmt.Printf("%s\n", prettyJSON)
+}
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=build-your-go-application) -->
+
+### Run your Go application
+
+Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-app** folder and use the following command:
+
+```console
+ go run translation.go
+```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "detectedLanguage": {
+ "language": "en",
+ "score": 1.0
+ },
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Go&Product=Translator&Page=quickstart-translator&Section=run-your-go-application) -->
+
+### [Java: Gradle](#tab/java)
+
+### Set up your Java environment
+
+* You should have the latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. _See_ [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).
+
+ >[!TIP]
+ >
+ > * Visual Studio Code offers a **Coding Pack for Java** for Windows and macOS. The coding pack is a bundle of VS Code, the Java Development Kit (JDK), and a collection of suggested extensions by Microsoft. The Coding Pack can also be used to fix an existing development environment.
+ > * If you are using VS Code and the Coding Pack For Java, install the [**Gradle for Java**](https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-gradle) extension.
+
+* If you aren't using VS Code, make sure you have the following installed in your development environment:
+
+ * A [**Java Development Kit** (OpenJDK)](/java/openjdk/download#openjdk-17) version 8 or later.
+
+ * [**Gradle**](https://docs.gradle.org/current/userguide/installation.html), version 6.8 or later.
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=set-up-your-java-environment) -->
+
+### Create a new Gradle project
+
+1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **translator-text-app**, and navigate to it.
+
+ ```console
+    mkdir translator-text-app && cd translator-text-app
+ ```
+
+ ```powershell
+ mkdir translator-text-app; cd translator-text-app
+ ```
+
+1. Run the `gradle init` command from the translator-text-app directory. This command creates essential build files for Gradle, including _build.gradle.kts_, which is used to create and configure your application.
+
+ ```console
+ gradle init --type basic
+ ```
+
+1. When prompted to choose a **DSL**, select **Kotlin**.
+
+1. Accept the default project name (translator-text-app) by selecting **Return** or **Enter**.
+
+1. Update `build.gradle.kts` with the following code:
+
+ ```kotlin
+ plugins {
+ java
+ application
+ }
+ application {
+ mainClass.set("TranslatorText")
+ }
+ repositories {
+ mavenCentral()
+ }
+ dependencies {
+ implementation("com.squareup.okhttp3:okhttp:4.10.0")
+ implementation("com.google.code.gson:gson:2.9.0")
+ }
+ ```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-a-gradle-project) -->
+
+### Create your Java Application
+
+1. From the translator-text-app directory, run the following command:
+
+ ```console
+ mkdir -p src/main/java
+ ```
+
+    This command creates the following directory structure:
+
+ :::image type="content" source="media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure.":::
+
+1. Navigate to the `java` directory and create a file named **`TranslatorText.java`**.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item TranslatorText.java**.
+ >
+ > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
+
+1. Open the `TranslatorText.java` file in your IDE and copy then paste the following code sample into your application. **Make sure you update the key with one of the key values from your Azure portal Translator instance:**
+
+```java
+import java.io.IOException;
+
+import com.google.gson.*;
+import okhttp3.MediaType;
+import okhttp3.OkHttpClient;
+import okhttp3.Request;
+import okhttp3.RequestBody;
+import okhttp3.Response;
+
+public class TranslatorText {
+    private static String key = "<your-translator-key>";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ private static String location = "<YOUR-RESOURCE-LOCATION>";
+
+ // Instantiates the OkHttpClient.
+ OkHttpClient client = new OkHttpClient();
+
+ // This function performs a POST request.
+ public String Post() throws IOException {
+ MediaType mediaType = MediaType.parse("application/json");
+ RequestBody body = RequestBody.create(mediaType,
+ "[{\"Text\": \"I would really like to drive your car around the block a few times!\"}]");
+ Request request = new Request.Builder()
+ .url("https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr&to=zu")
+ .post(body)
+ .addHeader("Ocp-Apim-Subscription-Key", key)
+ // location required if you're using a multi-service or regional (not global) resource.
+ .addHeader("Ocp-Apim-Subscription-Region", location)
+ .addHeader("Content-type", "application/json")
+ .build();
+ Response response = client.newCall(request).execute();
+ return response.body().string();
+ }
+
+ // This function prettifies the json response.
+ public static String prettify(String json_text) {
+        JsonElement json = JsonParser.parseString(json_text);
+ Gson gson = new GsonBuilder().setPrettyPrinting().create();
+ return gson.toJson(json);
+ }
+
+ public static void main(String[] args) {
+ try {
+ TranslatorText translateRequest = new TranslatorText();
+ String response = translateRequest.Post();
+ System.out.println(prettify(response));
+ } catch (Exception e) {
+ System.out.println(e);
+ }
+ }
+}
+```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=create-your-java-application) -->
+
+### Build and run your Java application
+
+Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
+
+1. Build your application with the `build` command:
+
+ ```console
+ gradle build
+ ```
+
+1. Run your application with the `run` command:
+
+ ```console
+ gradle run
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=build-and-run-your-java-application) -->
+
+### [JavaScript: Node.js](#tab/nodejs)
+
+### Set up your Node.js Express project
+
+1. If you haven't done so already, install the latest version of [Node.js](https://nodejs.org/en/download/). Node Package Manager (npm) is included with the Node.js installation.
+
+ > [!TIP]
+ >
+ > If you're new to Node.js, try the [Introduction to Node.js](/training/modules/intro-to-nodejs/) Learn module.
+
+1. In a console window (such as cmd, PowerShell, or Bash), create and navigate to a new directory for your app named `translator-app`.
+
+ ```console
+ mkdir translator-app && cd translator-app
+ ```
+
+ ```powershell
+ mkdir translator-app; cd translator-app
+ ```
+
+1. Run the `npm init` command to initialize the application and scaffold your project.
+
+ ```console
+ npm init
+ ```
+
+1. Specify your project's attributes using the prompts presented in the terminal.
+
+ * The most important attributes are name, version number, and entry point.
+    * We recommend keeping `index.js` for the entry point name. The description, test command, GitHub repository, keywords, author, and license information are optional attributes; they can be skipped for this project.
+ * Accept the suggestions in parentheses by selecting **Return** or **Enter**.
+ * After you've completed the prompts, a `package.json` file will be created in your translator-app directory.
+
+1. Open a console window and use npm to install the `axios` HTTP library and `uuid` package:
+
+ ```console
+ npm install axios uuid
+ ```
+
+1. Create the `index.js` file in the application directory.
+
+ > [!TIP]
+ >
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * Type the following command **New-Item index.js**.
+ >
+ > * You can also create a new file named `index.js` in your IDE and save it to the `translator-app` directory.
+<!-- checked -->
+
+<!-- > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-nodejs-express-project) -->
+
+### Build your JavaScript application
+
+Add the following code sample to your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**:
+
+```javascript
+ const axios = require('axios').default;
+ const { v4: uuidv4 } = require('uuid');
+
+ let key = "<your-translator-key>";
+ let endpoint = "https://api.cognitive.microsofttranslator.com";
+
+ // location, also known as region.
+ // required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+ let location = "<YOUR-RESOURCE-LOCATION>";
+
+ axios({
+ baseURL: endpoint,
+ url: '/translate',
+ method: 'post',
+ headers: {
+ 'Ocp-Apim-Subscription-Key': key,
+ // location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': uuidv4().toString()
+ },
+ params: {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['fr', 'zu']
+ },
+ data: [{
+ 'text': 'I would really like to drive your car around the block a few times!'
+ }],
+ responseType: 'json'
+ }).then(function(response){
+ console.log(JSON.stringify(response.data, null, 4));
+ })
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-javascript-application) -->
+
+### Run your JavaScript application
+
+Once you've added the code sample to your application, run your program:
+
+1. Navigate to your application directory (translator-app).
+
+1. Type the following command in your terminal:
+
+ ```console
+ node index.js
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Java&Product=Translator&Page=quickstart-translator&Section=run-your-javascript-application) -->
+
+### [Python](#tab/python)
+
+### Set up your Python project
+
+1. If you haven't done so already, install the latest version of [Python 3.x](https://www.python.org/downloads/). The Python installer package (pip) is included with the Python installation.
+
+ > [!TIP]
+ >
+ > If you're new to Python, try the [Introduction to Python](/training/paths/beginner-python/) Learn module.
+
+1. Open a terminal window and use pip to install the Requests library:
+
+    ```console
+    pip install requests
+    ```
+
+    > [!NOTE]
+    > This quickstart also uses the built-in `uuid` and `json` modules; they ship with Python and don't require installation.
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=set-up-your-python-project) -->
+
+### Build your Python application
+
+1. Create a new Python file called **translator-app.py** in your preferred editor or IDE.
+
+1. Add the following code sample to your `translator-app.py` file. **Make sure you update the key with one of the values from your Azure portal Translator instance**.
+
+```python
+import requests, uuid, json
+
+# Add your key and endpoint
+key = "<your-translator-key>"
+endpoint = "https://api.cognitive.microsofttranslator.com"
+
+# location, also known as region.
+# required if you're using a multi-service or regional (not global) resource. It can be found in the Azure portal on the Keys and Endpoint page.
+location = "<YOUR-RESOURCE-LOCATION>"
+
+path = '/translate'
+constructed_url = endpoint + path
+
+params = {
+ 'api-version': '3.0',
+ 'from': 'en',
+ 'to': ['fr', 'zu']
+}
+
+headers = {
+ 'Ocp-Apim-Subscription-Key': key,
+ # location required if you're using a multi-service or regional (not global) resource.
+ 'Ocp-Apim-Subscription-Region': location,
+ 'Content-type': 'application/json',
+ 'X-ClientTraceId': str(uuid.uuid4())
+}
+
+# You can pass more than one object in body.
+body = [{
+ 'text': 'I would really like to drive your car around the block a few times!'
+}]
+
+request = requests.post(constructed_url, params=params, headers=headers, json=body)
+response = request.json()
+
+print(json.dumps(response, sort_keys=True, ensure_ascii=False, indent=4, separators=(',', ': ')))
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=build-your-python-application) -->
+
+### Run your Python application
+
+Once you've added a code sample to your application, build and run your program:
+
+1. Navigate to your **translator-app.py** file.
+
+1. Type the following command in your console:
+
+ ```console
+ python translator-app.py
+ ```
+
+**Translation output:**
+
+After a successful call, you should see the following response:
+
+```json
+[
+ {
+ "translations": [
+ {
+ "text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!",
+ "to": "fr"
+ },
+ {
+ "text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!",
+ "to": "zu"
+ }
+ ]
+ }
+]
+
+```
+<!-- checked -->
+<!--
+ > [!div class="nextstepaction"]
+> [My REST API call was successful](#next-steps) [I ran into an issue](https://microsoft.qualtrics.com/jfe/form/SV_0Cl5zkG3CnDjq6O?PLanguage=Python&Product=Translator&Page=quickstart-translator&Section=run-your-python-application) -->
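Once the call succeeds, the response is a JSON array of result objects. A small follow-on sketch (plain Python, no service call needed) shows one way to pull just the translated strings out of a payload shaped like the output above:

```python
import json

# A sample payload in the same shape as the response shown above.
sample = """
[
    {
        "translations": [
            {"text": "J'aimerais vraiment conduire votre voiture autour du pâté de maisons plusieurs fois!", "to": "fr"},
            {"text": "Ngingathanda ngempela ukushayela imoto yakho endaweni evimbelayo izikhathi ezimbalwa!", "to": "zu"}
        ]
    }
]
"""

results = json.loads(sample)
for item in results:
    # Each input text yields one result object with a translation per `to` language.
    for translation in item["translations"]:
        print(f"{translation['to']}: {translation['text']}")
```

The same loop works on the live response returned by `request.json()` in the sample above.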
+---
+
+## Next steps
+
+That's it, congratulations! You've learned to use the Translator service to translate text.
+
+Explore our how-to documentation and take a deeper dive into Translation service capabilities:
+
+* [**Translate text**](translator-text-apis.md#translate-text)
+
+* [**Transliterate text**](translator-text-apis.md#transliterate-text)
+
+* [**Detect and identify language**](translator-text-apis.md#detect-language)
+
+* [**Get sentence length**](translator-text-apis.md#get-sentence-length)
+
+* [**Dictionary lookup and alternate translations**](translator-text-apis.md#dictionary-examples-translations-in-context)
cognitive-services Quickstart Text Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/quickstart-text-sdk.md
+
+ Title: "Quickstart: Azure Cognitive Services Translator SDKs"
+
+description: "Learn to translate text with the Translator service SDKs in a programming language of your choice: C#, Java, JavaScript, or Python."
+Last updated : 05/09/2023
+ms.devlang: csharp, java, javascript, python
+
+zone_pivot_groups: programming-languages-set-translator-sdk
+
+<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD001 -->
+<!-- markdownlint-disable MD024 -->
+<!-- markdownlint-disable MD036 -->
+<!-- markdownlint-disable MD049 -->
+
+# Quickstart: Azure Cognitive Services Translator SDKs (preview)
+
+> [!IMPORTANT]
+>
+> * The Translator text SDKs are currently available in public preview. Features, approaches, and processes may change prior to general availability (GA) based on user feedback.
+
+In this quickstart, you get started with the Translator service by [translating text](reference/v3-0-translate.md) in a programming language of your choice. For this project, we recommend using the free pricing tier (F0) while you're learning the technology, and upgrading later to a paid tier for production.
+
+## Prerequisites
+
+You need an active Azure subscription. If you don't have an Azure subscription, you can [create one for free](https://azure.microsoft.com/free/cognitive-services/).
+
+* Once you have your Azure subscription, create a [Translator resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) in the Azure portal.
+
+* After your resource deploys, select **Go to resource** and retrieve your key and endpoint.
+
+ * Get the key, endpoint, and region from the resource to connect your application to the Translator service. Paste these values into the code later in the quickstart. You can find them on the Azure portal **Keys and Endpoint** page:
+
+ :::image type="content" source="media/quickstarts/keys-and-endpoint-portal.png" alt-text="Screenshot: Azure portal keys and endpoint page.":::
+
+That's it, congratulations! In this quickstart, you used a Text Translation SDK to translate text.
+
+## Next steps
+
+Learn more about Text Translation development options:
+
+> [!div class="nextstepaction"]
+>[Text Translation SDK overview](text-sdk-overview.md) </br></br>[Text Translator V3 reference](reference/v3-0-reference.md)
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
Previously updated : 02/14/2023 Last updated : 06/12/2023 ms.devlang: csharp, golang, java, javascript, python
To call the Translator service via the [REST API](reference/rest-api-guide.md),
1. Open the **Program.cs** file.
-1. Delete the pre-existing code, including the line `Console.Writeline("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
+1. Delete the pre-existing code, including the line `Console.WriteLine("Hello World!")`. Copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
1. Once you've added a desired code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
After a successful call, you should see the following response. Unlike the call
## Dictionary lookup (alternate translations)
-With the endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunshine" from `en` to `es`, this endpoint returns "`luz solar`", "`rayos solares`", and "`soleamiento`", "`sol`", and "`insolación`".
+With the endpoint, you can get alternate translations for a word or phrase. For example, when translating the word "sunshine" from `en` to `es`, this endpoint returns "`luz solar`," "`rayos solares`," "`soleamiento`," "`sol`," and "`insolación`."
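Before the per-language samples, here's a rough Python sketch of the request shape, assuming the v3 `dictionary/lookup` route; the helper function is illustrative only:

```python
# Sketch of the dictionary lookup request shape (assumes the Translator v3
# REST route /dictionary/lookup; no authentication headers shown here).
def build_dictionary_lookup(word, from_lang, to_lang,
                            endpoint="https://api.cognitive.microsofttranslator.com"):
    url = f"{endpoint}/dictionary/lookup?api-version=3.0&from={from_lang}&to={to_lang}"
    body = [{"text": word}]  # the word or phrase to look up
    return url, body

url, body = build_dictionary_lookup("sunshine", "en", "es")
print(url)
```

The response lists the alternate translations described above, each with part-of-speech and confidence details.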
### [C#](#tab/csharp)
cognitive-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/content-filter.md
As part of your application design, consider the following best practices to del
- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
- Apply for modified content filters via [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
-- Azure OpenAI content filtering is powered by the models that power [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
+- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext#preventing-abuse-and-harmful-content-generation).
cognitive-services Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Services description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/recording-logs.md
Communication Services offers the following types of logs that you can enable:
- Media content (for example, audio/video, unmixed, or transcription).
- Format types used for the recording (for example, WAV or MP4).
- The reason why the recording ended.
+* **Recording incoming operations logs** - provides information about incoming requests for Call Recording operations. Each entry corresponds to the result of a call to the Call Recording APIs, such as StartRecording, StopRecording, PauseRecording, and ResumeRecording.
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot). It can also end because of a system failure.
Summary logs are published after a recording is ready to be downloaded. The logs
| Property | Description |
| -- | -- |
-| `Timestamp` | The timestamp (UTC) of when the log was generated. |
-| `Operation Name` | The operation associated with log record. |
-| `Operation Version` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
-| `Category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
-| `Correlation ID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `timestamp` | The timestamp (UTC) of when the log was generated. |
+| `operationName` | The operation associated with the log record. |
+| `operationVersion` | The `api-version` associated with the operation, if the operationName was performed using an API. If there's no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| `category` | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the properties blob of an event are the same within a particular log category and resource type. |
+| `correlationID` | The ID for correlated events. Can be used to identify correlated events between multiple tables. |
| `Properties` | Other data applicable to various modes of Communication Services. |
-| `Record ID` | The unique ID for a given usage record. |
-| `Usage Type` | The mode of usage. (for example, Chat, PSTN, NAT, etc.) |
-| `Unit Type` | The type of unit that usage is based off for a given mode of usage. (for example, minutes, megabytes, messages, etc.). |
-| `Quantity` | The number of units used or consumed for this record. |
+| `recordID` | The unique ID for a given usage record. |
+| `usageType` | The mode of usage (for example, Chat, PSTN, or NAT). |
+| `unitType` | The type of unit that usage is based on for a given mode of usage (for example, minutes, megabytes, or messages). |
+| `quantity` | The number of units used or consumed for this record. |
### Call Recording summary logs schema
Summary logs are published after a recording is ready to be downloaded. The logs
```json
"operationName": "Call Recording Summary",
"operationVersion": "1.0",
-"category": "RecordingSummaryPUBLICPREVIEW",
+"category": "RecordingSummary",
```

A call can have one recording or many recordings, depending on how many times a recording event is triggered.
-For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callid` will have one `recordingid` value. If the agent calls back the customer, the system generates a new `callid` instance and a new `recordingid` value.
+For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callID` will have one `recordingID` value. If the agent calls back the customer, the system generates a new `callID` instance and a new `recordingID` value.
#### Example: Call Recording for one call to one recording
For example, if an agent initiates an outbound call on a recorded line and the c
}
```
-If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still on, `callid` will have many `recordingid` values, depending on how many times the recording events were triggered.
+If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still on, `callID` will have many `recordingID` values, depending on how many times the recording events were triggered.
#### Example: Call Recording for one call to many recordings
If the agent initiates a recording and then stops and restarts the recording mul
"AudioChannelsCount": 1
}
```
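The one-call-to-many-recordings relationship described above can be sketched as a simple grouping over exported summary log entries. The field names follow the prose above, and the entries here are made-up examples:

```python
from collections import defaultdict

# Illustrative sketch: group recording summary log entries by call, so that
# one callID maps to every recordingID it produced.
def recordings_per_call(entries):
    grouped = defaultdict(list)
    for entry in entries:
        grouped[entry["callID"]].append(entry["recordingID"])
    return dict(grouped)

entries = [
    {"callID": "c-1", "recordingID": "r-1"},
    {"callID": "c-1", "recordingID": "r-2"},  # recording stopped and restarted on the same call
    {"callID": "c-2", "recordingID": "r-3"},  # agent called back: new call, new recording
]
print(recordings_per_call(entries))
```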
+### ACSCallRecordingIncomingOperations logs
+
+Properties
+
+| Property | Description |
+| -- | -- |
+| `timeGenerated` | Represents the timestamp (UTC) of when the log was generated. |
+| `callConnectionId` | Represents the ID of the call connection/leg, if available. |
+| `callerIpAddress` | Represents the caller IP address, if the operation corresponds to an API call that would come from an entity with a publicly available IP address. |
+| `correlationId` | Represents the ID for correlated events. Can be used to identify correlated events between multiple tables. |
+| `durationMs` | Represents the duration of the operation in milliseconds. |
+| `level` | Represents the severity level of the operation. |
+| `operationName` | Represents the operation associated with log records. |
+| `operationVersion` | Represents the API version associated with the operation, or the version of the operation if there is no API version. |
+| `resourceId` | Represents a unique identifier for the resource that the record is associated with. |
+| `resultSignature` | Represents the substatus of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `resultType` | Represents the status of the operation. |
+| `sdkType` | Represents the SDK type used in the request. |
+| `sdkVersion` | Represents the SDK version. |
+| `serverCallId` | Represents the server call ID. |
+| `URI` | Represents the URI of the request. |
+
+ Sample
+
+```json
+"properties":
+{ "TimeGenerated": "2023-05-09T15:58:30.100Z",
+ "Level": "Informational",
+ "CorrelationId": "a999f996-b4e1-xxxx-ac04-a59test87d97",
+ "OperationName": "ResumeCallRecording",
+ "OperationVersion": "2023-03-06",
+ "URI": "https://acsresource.communication.azure.com/calling/recordings/eyJQbGF0Zm9ybUVuZHBviI0MjFmMTIwMC04MjhiLTRmZGItOTZjYi0...:resume?api-version=2023-03-06",
+ "ResultType": "Succeeded",
+ "ResultSignature": 202,
+ "DurationMs": 130,
+ "CallerIpAddress": "127.0.0.1",
+ "CallConnectionId": "d5596715-ab0b-test-8eee-575c250e4234",
+ "ServerCallId": "aHR0cHM6Ly9hcGk0vjCCCCCCQd2pRP2k9OTMmZT02Mzc5OTQ3xMDAzNDUwMzg...",
+ "SdkVersion": "1.0.0-alpha.20220829.1",
+ "SdkType": "dotnet"
+}
+```
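As an illustration of how you might consume these properties once the logs are exported, the following Python sketch flags failed operations using `resultType` and `resultSignature`. The sample entries are fabricated, and this isn't an official query API:

```python
# Illustrative sketch: once log entries are exported (for example, as JSON via
# diagnostic settings), surface failed Call Recording operations along with
# their HTTP status codes.
def failed_operations(log_entries):
    return [
        (e["OperationName"], e["ResultSignature"])
        for e in log_entries
        if e["ResultType"] != "Succeeded"
    ]

entries = [
    {"OperationName": "StartRecording", "ResultType": "Succeeded", "ResultSignature": 200},
    {"OperationName": "ResumeCallRecording", "ResultType": "Failed", "ResultSignature": 404},
]
print(failed_operations(entries))
```

In practice, you would run an equivalent query in Log Analytics rather than in application code; this only shows which properties carry the success/failure signal.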
+ ## Next steps
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following list presents the set of features that are currently available in
| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
+| | Cancel media operations | ✔️ | ✔️ |
| Query scenarios | Get the call state | ✔️ | ✔️ |
| | Get a participant in a call | ✔️ | ✔️ |
| | List all participants in a call | ✔️ | ✔️ |
When your application has answered a one-to-one call, the hang-up action will re
**Terminate**
Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action removes all participants and ends the call. This operation is triggered by setting the `forEveryOne` property to true in the Hang-Up call action.
+**Cancel media operations**
+Based on business logic, your application may need to cancel ongoing and queued media operations. Depending on the media operation that was canceled and the ones in queue, you receive a webhook event indicating that the action has been canceled.
## Events
The Call Automation events are sent to the web hook callback URI specified when
| AddParticipantSucceeded | Your application added a participant |
| AddParticipantFailed | Your application was unable to add a participant |
| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
-| PlayCompleted| Your application successfully played the audio file provided |
-| PlayFailed| Your application failed to play audio |
+| PlayCompleted | Your application successfully played the audio file provided |
+| PlayFailed | Your application failed to play audio |
+| PlayCanceled | Your application canceled the play operation |
| RecognizeCompleted | Recognition of user input was successfully completed |
| RecognizeFailed | Recognition of user input was unsuccessful <br/>*to learn more about recognize action events view our how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md)*|
+| RecognizeCanceled | Your application canceled the request to recognize user input |
To understand which events are published for different actions, refer to [this guide](../../how-tos/call-automation/actions-for-call-control.md) that provides code samples as well as sequence diagrams for various call control flows.
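A minimal sketch of dispatching these webhook events by type might look like this in Python; the handler names and the event payload shape are hypothetical, not the Call Automation SDK contract:

```python
# Hypothetical callback-handling sketch: route Call Automation webhook events
# (delivered to your callback URI) by their type name. The payload shape and
# handler names are illustrative only.
def route_event(event, handlers):
    handler = handlers.get(event["type"])
    if handler is None:
        return "ignored"  # event types you don't care about
    return handler(event)

handlers = {
    "PlayCompleted": lambda e: "prompt finished",
    "PlayCanceled": lambda e: "prompt canceled",
    "RecognizeCanceled": lambda e: "input gathering canceled",
}
print(route_event({"type": "PlayCanceled"}, handlers))
```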
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md
As part of compliance requirements in various industries, vendors are expected t
## Known limitations

- Play action isn't enabled to work with Teams Interoperability.
-## What's coming up next for Play action
-As we invest more into this functionality, we recommend developers sign up to our TAP program that allows you to get early access to the newest feature releases. Over the coming months the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine tuning Text-to-Speech with SSML. With these capabilities, you can improve customer interactions to create more personalized messages.
## Next Steps

Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users.
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
The recognize action can be used for many reasons, below are a few examples of h
![Recognize Action](./media/recognize-flow.png)
-## What's coming up next for Recognize action
-
-As we invest more into this functionality, we recommend developers sign up to our TAP program that allows you to get early access to the newest feature releases. Over the coming months the recognize action will add in new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Speech-To-Text. With these, you can improve customer interactions and recognize voice inputs from participants on the call.
## Next steps

- Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md).
communication-services Direct Routing Sip Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-sip-specification.md
description: SIP protocol details for Azure Communication Services direct routin
Previously updated : 05/30/2023 Last updated : 06/15/2023 audience: admin
The REFERRED-BY header has a SIP URI with transferor MRI encoded in it and trans
|: |:- |:- |
| x-m | MRI | Full MRI of transferor/transfer target as populated by CC |
| x-t | Tenant ID | x-t resource ID Optional resource ID as populated by CC |
-| x-ti | Transferor Correlation Id | Correlation ID of the call to the transferor |
+| x-ti | Transferor Correlation ID | Correlation ID of the call to the transferor |
| x-tt | Transfer target call URI | Encoded call replacement URI |

The size of the Refer header can be up to 400 symbols in this case. The SBC must support handling Refer messages with size up to 400 symbols.
When the SBC receives a 503 message with a Retry-After header in response to an
## Handling retries (603 response)
-If an end user observes several missed calls for one call after declining the incoming call, it means that the SBC or PSTN trunk provider's retry mechanism is misconfigured. The SBC must be reconfigured to stop the retry efforts on the 603 response.
+If an end user observes several missed calls for one call after declining the incoming call, it means that the SBC or PSTN trunk provider's retry mechanism is misconfigured. The SBC must be reconfigured to stop the retry efforts on the 603 response.
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
-Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Actions are accompanied with sample code on how to invoke the said action and sequence diagrams describing the events expected after invoking an action. These diagrams will help you visualize how to program your service application with Call Automation.
+Call Automation uses a REST API interface to receive requests for actions and provide responses to notify whether the request was successfully submitted or not. Due to the asynchronous nature of calling, most actions have corresponding events that are triggered when the action completes successfully or fails. This guide covers the actions available for steering calls, like CreateCall, Transfer, Redirect, and managing participants. Actions are accompanied with sample code on how to invoke the said action and sequence diagrams describing the events expected after invoking an action. These diagrams help you visualize how to program your service application with Call Automation.
Call Automation supports various other actions to manage call media and recording that aren't included in this guide. > [!NOTE]
-> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making, redirecting a call to a Teams user or adding them to a call using Call Automation isn't supported.
+> Call Automation currently doesn't interoperate with Microsoft Teams. Actions like making or redirecting a call to a Teams user, or adding them to a call using Call Automation, aren't supported.
-As a pre-requisite, we recommend you to read the below articles to make the most of this guide:
-1. Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
-2. Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+As a prerequisite, we recommend you read these articles to make the most of this guide:
+
+1. Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
+2. Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+
+For all the code samples, `client` is a CallAutomationClient object that can be created as shown, and `callConnection` is the CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application.
-For all the code samples, `client` is CallAutomationClient object that can be created as shown and `callConnection` is the CallConnection object obtained from Answer or CreateCall response. You can also obtain it from callback events received by your application.
## [csharp](#tab/csharp)

```csharp
var client = new CallAutomationClient("<resource_connection_string>");
```

## [Java](#tab/java)

```java
CallAutomationClient client = new CallAutomationClientBuilder().connectionString("<resource_connection_string>").buildClient();
```
+## [JavaScript](#tab/javascript)
+
+```javascript
+const client = new CallAutomationClient("<resource_connection_string>");
+```
+
+## [Python](#tab/python)
+
+```python
+call_automation_client = CallAutomationClient.from_connection_string("<resource_connection_string>")
+```
+ --
-## Make an outbound call
-You can place a 1:1 or group call to a communication user or phone number (public or Communication Services owned number). Below sample makes an outbound call from your service application to a phone number.
-callerIdentifier is used by Call Automation as your application's identity when making an outbound a call. When calling a PSTN endpoint, you also need to provide a phone number that will be used as the source caller ID and shown in the call notification to the target PSTN endpoint.
-To place a call to a Communication Services user, you'll need to provide a CommunicationUserIdentifier object instead of PhoneNumberIdentifier.
+## Make an outbound call
+
+You can place a 1:1 or group call to a communication user or phone number (public or Communication Services owned number).
+When calling a PSTN endpoint, you also need to provide a phone number that is used as the source caller ID and shown in the call notification to the target PSTN endpoint.
+To place a call to a Communication Services user, you need to provide a CommunicationUserIdentifier object instead of PhoneNumberIdentifier.
### [csharp](#tab/csharp)

```csharp
-Uri callBackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
-var callerIdentifier = new CommunicationUserIdentifier("<user_id>");
-CallSource callSource = new CallSource(callerIdentifier);
-callSource.CallerId = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
-var callThisPerson = new PhoneNumberIdentifier("+16041234567");
-var listOfPersonToBeCalled = new List<CommunicationIdentifier>();
-listOfPersonToBeCalled.Add(callThisPerson);
-var createCallOptions = new CreateCallOptions(callSource, listOfPersonToBeCalled, callBackUri);
-CreateCallResult response = await client.CreateCallAsync(createCallOptions);
+Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var callThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // person to call
+CreateCallResult response = await client.CreateCallAsync(callThisPerson, callbackUri);
```

### [Java](#tab/java)

```java
String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
-List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567")));
-CommunicationUserIdentifier callerIdentifier = new CommunicationUserIdentifier("<user_id>");
-CreateCallOptions createCallOptions = new CreateCallOptions(callerIdentifier, targets, callbackUri)
- .setSourceCallerId("+18001234567"); // This is the ACS provisioned phone number for the caller
-Response<CreateCallResult> response = client.createCallWithResponse(createCallOptions).block();
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller
+CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16471234567"), callerIdNumber); // person to call
+CreateCallResult response = client.createCall(callInvite, callbackUri).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const callInvite = {
+ targetParticipant: { phoneNumber: "+18008008800" }, // person to call
+ sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the ACS provisioned phone number for the caller
+};
+const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events
+const response = await client.createCall(callInvite, callbackUri);
+```
+
+### [Python](#tab/python)
+
+```python
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+caller_id_number = PhoneNumberIdentifier(
+ "+18001234567"
+) # This is the ACS provisioned phone number for the caller
+call_invite = CallInvite(
+ target=PhoneNumberIdentifier("+16471234567"),
+ source_caller_id_number=caller_id_number,
+)
+call_connection_properties = client.create_call(call_invite, callback_uri)
```
+--
+When making a group call that includes a phone number, you must provide a phone number that is used as a caller ID number to the PSTN endpoint.
+
+### [csharp](#tab/csharp)
+
+```csharp
+Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+var pstnEndpoint = new PhoneNumberIdentifier("+16041234567");
+var voipEndpoint = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+var groupCallOptions = new CreateGroupCallOptions(new List<CommunicationIdentifier>{ pstnEndpoint, voipEndpoint }, callbackUri)
+{
+ SourceCallerIdNumber = new PhoneNumberIdentifier("+16044561234"), // This is the ACS provisioned phone number for the caller
+};
+CreateCallResult response = await client.CreateGroupCallAsync(groupCallOptions);
+```
+
+### [Java](#tab/java)
+
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+18001234567"); // This is the ACS provisioned phone number for the caller
+List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(new PhoneNumberIdentifier("+16471234567"), new CommunicationUserIdentifier("<user_id_of_target>")));
+CreateGroupCallOptions groupCallOptions = new CreateGroupCallOptions(targets, callbackUri);
+groupCallOptions.setSourceCallIdNumber(callerIdNumber);
+Response<CreateCallResult> response = client.createGroupCallWithResponse(groupCallOptions).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events
+const participants = [
+ { phoneNumber: "+18008008800" },
+ { communicationUserId: "<user_id_of_target>" }, //user id looks like 8:a1b1c1-...
+];
+const createCallOptions = {
+ sourceCallIdNumber: { phoneNumber: "+18888888888" }, // This is the ACS provisioned phone number for the caller
+};
+const response = await client.createGroupCall(participants, callbackUri, createCallOptions);
+```
+
+### [Python](#tab/python)
+
+```python
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+caller_id_number = PhoneNumberIdentifier(
+ "+18888888888"
+) # This is the ACS provisioned phone number for the caller
+pstn_endpoint = PhoneNumberIdentifier("+18008008800")
+voip_endpoint = CommunicationUserIdentifier(
+ "<user_id_of_target>"
+) # user id looks like 8:a1b1c1-...
+call_connection_properties = client.create_group_call(
+ target_participants=[voip_endpoint, pstn_endpoint],
+ callback_url=callback_uri,
+ source_caller_id_number=caller_id_number,
+)
+```
+ --
-The response provides you with CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
-1. `CallConnected` event notifying that the call has been established with the callee.
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events are published to the callback endpoint you provided earlier:
+
+1. `CallConnected` event notifying that the call has been established with the callee.
2. `ParticipantsUpdated` event that contains the latest list of participants in the call.

![Sequence diagram for placing an outbound call.](media/make-call-flow.png)

## Answer an incoming call
-Once you've subscribed to receive [incoming call notifications](../../concepts/call-automation/incoming-call-notification.md) to your resource, below is sample code on how to answer that call. When answering a call, it's necessary to provide a callback url. Communication Services will post all subsequent events about this call to that url.
+
+Once you've subscribed to receive [incoming call notifications](../../concepts/call-automation/incoming-call-notification.md) to your resource, you can answer an incoming call. When answering a call, it's necessary to provide a callback URL. Communication Services posts all subsequent events about this call to that URL.
### [csharp](#tab/csharp)

```csharp
var answerCallOptions = new AnswerCallOptions(incomingCallContext, callBackUri);
AnswerCallResult answerResponse = await client.AnswerCallAsync(answerCallOptions);
CallConnection callConnection = answerResponse.CallConnection;
```

### [Java](#tab/java)

```java
String callbackUri = "https://<myendpoint>/Events";
AnswerCallOptions answerCallOptions = new AnswerCallOptions(incomingCallContext, callbackUri);
Response<AnswerCallResult> response = client.answerCallWithResponse(answerCallOptions).block();
```
+### [JavaScript](#tab/javascript)
+
+```javascript
+const incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+const callbackUri = "https://<myendpoint>/Events";
+
+const { callConnection } = await client.answerCall(incomingCallContext, callbackUri);
+```
+
+### [Python](#tab/python)
+
+```python
+incoming_call_context = "<IncomingCallContext_From_IncomingCall_Event>"
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+call_connection_properties = client.answer_call(
+ incoming_call_context=incoming_call_context, callback_url=callback_uri
+)
+```
+ --
-The response provides you with CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events will be published to the callback endpoint you provided earlier:
-1. `CallConnected` event notifying that the call has been established with the caller.
+The response provides you with a CallConnection object that you can use to take further actions on this call once it's connected. Once the call is answered, two events are published to the callback endpoint you provided earlier:
+
+1. `CallConnected` event notifying that the call has been established with the caller.
2. `ParticipantsUpdated` event that contains the latest list of participants in the call.

![Sequence diagram for answering an incoming call.](media/answer-flow.png)
-## Reject a call
-You can choose to reject an incoming call as shown below. You can provide a reject reason: none, busy or forbidden. If nothing is provided, none is chosen by default.
+## Reject a call
+
+You can choose to reject an incoming call as shown below. You can provide a reject reason: none, busy, or forbidden. If nothing is provided, none is chosen by default.
# [csharp](#tab/csharp)

```csharp
string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
var rejectOption = new RejectCallOptions(incomingCallContext);
rejectOption.CallRejectReason = CallRejectReason.Forbidden;
_ = await client.RejectCallAsync(rejectOption);
```

# [Java](#tab/java)

```java
RejectCallOptions rejectCallOptions = new RejectCallOptions(incomingCallContext)
    .setCallRejectReason(CallRejectReason.BUSY);
Response<Void> response = client.rejectCallWithResponse(rejectCallOptions).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+const rejectOptions = {
+ callRejectReason: KnownCallRejectReason.Forbidden,
+};
+await client.rejectCall(incomingCallContext, rejectOptions);
+```
+
+# [Python](#tab/python)
+
+```python
+incoming_call_context = "<IncomingCallContext_From_IncomingCall_Event>"
+client.reject_call(
+ incoming_call_context=incoming_call_context,
+ call_reject_reason=CallRejectReason.FORBIDDEN,
+)
+```
+ --
-No events are published for reject action.
+No events are published for the reject action.
+
+## Redirect a call
+
+You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call removes your application's ability to control the call using Call Automation.
-## Redirect a call
-You can choose to redirect an incoming call to one or more endpoints without answering it. Redirecting a call will remove your application's ability to control the call using Call Automation.
# [csharp](#tab/csharp)

```csharp
string incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
-var target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
-var redirectOption = new RedirectCallOptions(incomingCallContext, target);
-_ = await client.RedirectCallAsync(redirectOption);
+var target = new CallInvite(new CommunicationUserIdentifier("<user_id_of_target>")); //user id looks like 8:a1b1c1-...
+_ = await client.RedirectCallAsync(incomingCallContext, target);
```

# [Java](#tab/java)

```java
String incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
-CommunicationIdentifier target = new CommunicationUserIdentifier("<user_id_of_target>"); //user id looks like 8:a1b1c1-...
+CallInvite target = new CallInvite(new CommunicationUserIdentifier("<user_id_of_target>")); //user id looks like 8:a1b1c1-...
RedirectCallOptions redirectCallOptions = new RedirectCallOptions(incomingCallContext, target);
Response<Void> response = client.redirectCallWithResponse(redirectCallOptions).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const incomingCallContext = "<IncomingCallContext_From_IncomingCall_Event>";
+const target = { targetParticipant: { communicationUserId: "<user_id_of_target>" } }; //user id looks like 8:a1b1c1-...
+await client.redirectCall(incomingCallContext, target);
+```
+
+# [Python](#tab/python)
+
+```python
+incoming_call_context = "<IncomingCallContext_From_IncomingCall_Event>"
+call_invite = CallInvite(
+ CommunicationUserIdentifier("<user_id_of_target>")
+ ) # user id looks like 8:a1b1c1-...
+client.redirect_call(
+ incoming_call_context=incoming_call_context, target_participant=call_invite
+)
+```
+ --
-To redirect the call to a phone number, set the target to be PhoneNumberIdentifier.
+To redirect the call to a phone number, construct the target with a PhoneNumberIdentifier.

# [csharp](#tab/csharp)

```csharp
-var target = new PhoneNumberIdentifier("+16041234567");
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var target = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
```

# [Java](#tab/java)

```java
-CommunicationIdentifier target = new PhoneNumberIdentifier("+18001234567");
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+CallInvite target = new CallInvite(new PhoneNumberIdentifier("+18001234567"), callerIdNumber);
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const callerIdNumber = { phoneNumber: "+16044561234" };
+const target = {
+ targetParticipant: { phoneNumber: "+16041234567" },
+ sourceCallIdNumber: callerIdNumber,
+};
+```
+
+# [Python](#tab/python)
+
+```python
+caller_id_number = PhoneNumberIdentifier(
+ "+18888888888"
+) # This is the ACS provisioned phone number for the caller
+call_invite = CallInvite(
+ target=PhoneNumberIdentifier("+16471234567"),
+ source_caller_id_number=caller_id_number,
+)
+```
+ --
-No events are published for redirect. If the target is a Communication Services user or a phone number owned by your resource, it will generate a new IncomingCall event with 'to' field set to the target you specified.
+No events are published for redirect. If the target is a Communication Services user or a phone number owned by your resource, it generates a new IncomingCall event with the 'to' field set to the target you specified.
+
+## Transfer a 1:1 call
+
+When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call removes your application from the call and hence removes its ability to control the call using Call Automation.
-## Transfer a 1:1 call
-When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application from the call and hence remove its ability to control the call using Call Automation.
# [csharp](#tab/csharp)

```csharp
var transferDestination = new CommunicationUserIdentifier("<user_id>");
var transferOption = new TransferToParticipantOptions(transferDestination);
TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
```

# [Java](#tab/java)

```java
CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination);
Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
```
-When transferring to a phone number, it's mandatory to provide a source caller ID. This ID serves as the identity of your application(the source) for the destination endpoint.
-# [csharp](#tab/csharp)
-```csharp
-var transferDestination = new PhoneNumberIdentifier("+16041234567");
-var transferOption = new TransferToParticipantOptions(transferDestination);
-transferOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
-TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+const transferDestination = { communicationUserId: "<user_id>" };
+const result = await callConnection.transferCallToParticipant(transferDestination);
```
-# [Java](#tab/java)
-```java
-CommunicationIdentifier transferDestination = new PhoneNumberIdentifier("+16471234567");
-TransferToParticipantCallOptions options = new TransferToParticipantCallOptions(transferDestination)
- .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
-Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+
+# [Python](#tab/python)
+
+```python
+transfer_destination = CommunicationUserIdentifier("<user_id>")
+call_connection_client = call_automation_client.get_call_connection("<call_connection_id_from_ongoing_call>")
+result = call_connection_client.transfer_call_to_participant(
+ target_participant=transfer_destination
+)
```

--
-The below sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
+When transferring to a phone number, it's mandatory to provide a source caller ID. This ID serves as the identity of your application (the source) for the destination endpoint.
+
+--
+The sequence diagram shows the expected flow when your application places an outbound 1:1 call and then transfers it to another endpoint.
![Sequence diagram for placing a 1:1 call and then transferring it.](media/transfer-flow.png)
-## Add a participant to a call
-You can add one or more participants (Communication Services users or phone numbers) to an existing call. When adding a phone number, it's mandatory to provide source caller ID. This caller ID will be shown on call notification to the participant being added.
+## Add a participant to a call
+
+You can add a participant (Communication Services user or phone number) to an existing call. When adding a phone number, it's mandatory to provide a source caller ID. This caller ID is shown on the call notification to the participant being added.
# [csharp](#tab/csharp)

```csharp
-var addThisPerson = new PhoneNumberIdentifier("+16041234567");
-var listOfPersonToBeAdded = new List<CommunicationIdentifier>();
-listOfPersonToBeAdded.Add(addThisPerson);
-var addParticipantsOption = new AddParticipantsOptions(listOfPersonToBeAdded);
-addParticipantsOption.SourceCallerId = new PhoneNumberIdentifier("+16044561234");
-AddParticipantsResult result = await callConnection.AddParticipantsAsync(addParticipantsOption);
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+var addThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
+AddParticipantResult result = await callConnection.AddParticipantAsync(addThisPerson);
```

# [Java](#tab/java)

```java
-CommunicationIdentifier target = new PhoneNumberIdentifier("+16041234567");
-List<CommunicationIdentifier> targets = new ArrayList<>(Arrays.asList(target));
-AddParticipantsOptions addParticipantsOptions = new AddParticipantsOptions(targets)
- .setSourceCallerId(new PhoneNumberIdentifier("+18001234567"));
-Response<AddParticipantsResult> addParticipantsResultResponse = callConnectionAsync.addParticipantsWithResponse(addParticipantsOptions).block();
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS provisioned phone number for the caller
+CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite);
+Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const callerIdNumber = { phoneNumber: "+16044561234" }; // This is the ACS provisioned phone number for the caller
+const addThisPerson = {
+ targetParticipant: { phoneNumber: "+16041234567" },
+ sourceCallIdNumber: callerIdNumber,
+};
+const addParticipantResult = await callConnection.addParticipant(addThisPerson);
+```
+
+# [Python](#tab/python)
+
+```python
+caller_id_number = PhoneNumberIdentifier(
+ "+18888888888"
+) # This is the ACS provisioned phone number for the caller
+call_invite = CallInvite(
+ target=PhoneNumberIdentifier("+18008008800"),
+ source_caller_id_number=caller_id_number,
+)
+call_connection_client = call_automation_client.get_call_connection(
+ "<call_connection_id_from_ongoing_call>"
+)
+result = call_connection_client.add_participant(call_invite)
+```
--

To add a Communication Services user, provide a CommunicationUserIdentifier instead of PhoneNumberIdentifier. Source caller ID isn't mandatory in this case.
-AddParticipant will publish a `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantUpdated` providing the latest list of participants in the call.
+AddParticipant publishes an `AddParticipantSucceeded` or `AddParticipantFailed` event, along with a `ParticipantsUpdated` event providing the latest list of participants in the call.
![Sequence diagram for adding a participant to the call.](media/add-participant-flow.png)

## Remove a participant from a call

# [csharp](#tab/csharp)

```csharp
var removeThisUser = new CommunicationUserIdentifier("<user_id>");
-var listOfParticipantsToBeRemoved = new List<CommunicationIdentifier>();
-listOfParticipantsToBeRemoved.Add(removeThisUser);
-var removeOption = new RemoveParticipantsOptions(listOfParticipantsToBeRemoved);
-RemoveParticipantsResult result = await callConnection.RemoveParticipantsAsync(removeOption);
+RemoveParticipantResult result = await callConnection.RemoveParticipantAsync(removeThisUser);
```

# [Java](#tab/java)

```java
CommunicationIdentifier removeThisUser = new CommunicationUserIdentifier("<user_id>");
-RemoveParticipantsOptions removeParticipantsOptions = new RemoveParticipantsOptions(new ArrayList<>(Arrays.asList(removeThisUser)));
-Response<RemoveParticipantsResult> removeParticipantsResultResponse = callConnectionAsync.removeParticipantsWithResponse(removeParticipantsOptions).block();
+RemoveParticipantOptions removeParticipantOptions = new RemoveParticipantOptions(removeThisUser);
+Response<RemoveParticipantResult> removeParticipantResultResponse = callConnectionAsync.removeParticipantWithResponse(removeParticipantOptions).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const removeThisUser = { communicationUserId: "<user_id>" };
+const removeParticipantResult = await callConnection.removeParticipant(removeThisUser);
+```
+
+# [Python](#tab/python)
+
+```python
+remove_this_user = CommunicationUserIdentifier("<user_id>")
+call_connection_client = call_automation_client.get_call_connection(
+ "<call_connection_id_from_ongoing_call>"
+)
+result = call_connection_client.remove_participant(remove_this_user)
+```
+ --
-RemoveParticipant only generates `ParticipantUpdated` event describing the latest list of participants in the call. The removed participant is excluded if remove operation was successful.
+RemoveParticipant publishes a `RemoveParticipantSucceeded` or `RemoveParticipantFailed` event, along with a `ParticipantsUpdated` event providing the latest list of participants in the call. The removed participant is excluded if the remove operation was successful.
![Sequence diagram for removing a participant from the call.](media/remove-participant-flow.png)
-## Hang up on a call
-Hang Up action can be used to remove your application from the call or to terminate a group call by setting forEveryone parameter to true. For a 1:1 call, hang up will terminate the call with the other participant by default.
+## Hang up on a call
+
+The Hang Up action can be used to remove your application from the call or to terminate a group call by setting the forEveryone parameter to true. For a 1:1 call, hang up terminates the call with the other participant by default.
# [csharp](#tab/csharp)

```csharp
-_ = await callConnection.HangUpAsync(true);
+_ = await callConnection.HangUpAsync(forEveryone: true);
```

# [Java](#tab/java)

```java
-Response<Void> response1 = callConnectionAsync.hangUpWithResponse(new HangUpOptions(true)).block();
+Response<Void> response = callConnectionAsync.hangUpWithResponse(true).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+await callConnection.hangUp(true);
+```
+
+# [Python](#tab/python)
+
+```python
+call_connection_client.hang_up(is_for_everyone=True)
+```
+ --
-CallDisconnected event is published once the hangUp action has completed successfully.
+The CallDisconnected event is published once the hangUp action has completed successfully.
+
+## Get information about a call participant
-## Get information about a call participant
# [csharp](#tab/csharp)

```csharp
-CallParticipant participantInfo = await callConnection.GetParticipantAsync("<user_id>")
+CallParticipant participantInfo = await callConnection.GetParticipantAsync(new CommunicationUserIdentifier("<user_id>"));
```

# [Java](#tab/java)

```java
-CallParticipant participantInfo = callConnection.getParticipant("<user_id>").block();
+CallParticipant participantInfo = callConnection.getParticipant(new CommunicationUserIdentifier("<user_id>")).block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const participantInfo = await callConnection.getParticipant({ communicationUserId: "<user_id>" });
+```
+
+# [Python](#tab/python)
+
+```python
+participant_info = call_connection_client.get_participant(
+ CommunicationUserIdentifier("<user_id>")
+)
+```
--

## Get information about all call participants

# [csharp](#tab/csharp)

```csharp
List<CallParticipant> participantList = (await callConnection.GetParticipantsAsync()).Value.ToList();
```

# [Java](#tab/java)

```java
-List<CallParticipant> participantsInfo = Objects.requireNonNull(callConnection.listParticipants().block()).getValues();
+List<CallParticipant> participantList = Objects.requireNonNull(callConnection.listParticipants().block()).getValues();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const participantList = await callConnection.listParticipants();
+```
+
+# [Python](#tab/python)
+
+```python
+participant_list = call_connection_client.list_participants()
+```
+ --
-## Get latest info about a call
+## Get latest info about a call
# [csharp](#tab/csharp)

```csharp
-CallConnectionProperties thisCallsProperties = callConnection.GetCallConnectionProperties();
+CallConnectionProperties callConnectionProperties = await callConnection.GetCallConnectionPropertiesAsync();
```

# [Java](#tab/java)

```java
-CallConnectionProperties thisCallsProperties = callConnection.getCallProperties().block();
+CallConnectionProperties callConnectionProperties = callConnection.getCallProperties().block();
```
+# [JavaScript](#tab/javascript)
+
+```javascript
+const callConnectionProperties = await callConnection.getCallConnectionProperties();
+```
+
+# [Python](#tab/python)
+
+```python
+call_connection_properties = call_connection_client.get_call_properties()
+```
+ --
communication-services Handle Events With Event Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/handle-events-with-event-processor.md
+
+ Title: Azure Communication Services Call Automation Events handling with Event Processor
+
+description: Provides a how-to guide on using Call Automation's Event Processor
+Last updated : 05/31/2023
+# Handling Events with Call Automation's Event Processor

## Overview
+
+Once the call is established with Call Automation, further state updates of the ongoing call are sent as separate events via [Webhook Callback](../../concepts/call-automation/call-automation.md#call-automation-webhook-events). These events carry important information, such as the latest state of the call and the outcome of the request that was sent.
+
+Call Automation's EventProcessor helps you easily process these webhook callback events in your applications. It correlates each event to its respective call, allowing you to build applications with ease.
+
+## Benefits
+
+EventProcessor features allow developers to easily build robust applications that handle Call Automation events:
+
+- Associates events with their respective calls
+- Lets you write code linearly
+- Handles events that can happen anytime during the call (such as CallDisconnected or ParticipantsUpdated)
+- Handles the rare case where an event arrives earlier than the request's response
+- Sets a custom timeout for waiting on events
+
+## Passing events to Call Automation's EventProcessor
+
+Call Automation's EventProcessor first needs to consume the events that the service sends. Once an event arrives at your callback endpoint, pass it to the EventProcessor.
+
+> [!IMPORTANT]
+> Have you established a webhook callback events endpoint? EventProcessor still needs to consume callback events through the webhook callback. See **[this page](../../quickstarts/call-automation/callflows-for-customer-interactions.md)** for further assistance.
+
+```csharp
+using Azure.Communication.CallAutomation;
+
+[HttpPost]
+public IActionResult CallbackEvent([FromBody] CloudEvent[] cloudEvents)
+{
+ // Use your call automation client that established the call
+ CallAutomationEventProcessor eventProcessor = callAutomationClient.GetEventProcessor();
+
+ // Let event be processed in EventProcessor
+ eventProcessor.ProcessEvents(cloudEvents);
+ return Ok();
+}
+```
+
+Now we're ready to use the EventProcessor.
+
+## Using Create Call request's response to wait for Call Connected event
+
+The first scenario is to create an outbound call, then use the EventProcessor to wait until the call is established.
+
+```csharp
+// Creating an outbound call here
+CreateCallResult createCallResult = await callAutomationClient.CreateCallAsync(callInvite, callbackUri);
+CallConnection callConnection = createCallResult.CallConnection;
+
+// Wait for 40 seconds before throwing timeout error.
+var tokenSource = new CancellationTokenSource(TimeSpan.FromSeconds(40));
+
+// We can wait on the EventProcessor for events related to the outbound call here. In this case, we're waiting for CreateCallEventResult, for up to 40 seconds.
+CreateCallEventResult eventResult = await createCallResult.WaitForEventProcessorAsync(tokenSource);
+
+// Once EventResult comes back, we can get SuccessResult of CreateCall - which is, CallConnected event.
+CallConnected returnedEvent = eventResult.SuccessResult;
+```
+
+With EventProcessor, we can easily wait for the CallConnected event until the call is established. If the call is never established (that is, the callee never picked up the phone), a timeout exception is thrown.
+
+> [!NOTE]
+> If no specific timeout is given when waiting on the EventProcessor, it waits until its default timeout elapses. The default timeout is 4 minutes.
+
+## Using Play request's response to wait for Play events
+
+Now that the call is established, let's play some audio in the call, then wait until the media has finished playing.
+
+```csharp
+// play my prompt to everyone
+FileSource fileSource = new FileSource(playPrompt);
+PlayResult playResult = await callConnection.GetCallMedia().PlayToAllAsync(fileSource);
+
+// wait for play to complete
+PlayEventResult playEventResult = await playResult.WaitForEventProcessorAsync();
+
+// check if the play was completed successfully
+if (playEventResult.IsSuccess)
+{
+ // success play!
+ PlayCompleted playCompleted = playEventResult.SuccessResult;
+}
+else
+{
+ // failed to play the audio.
+    PlayFailed playFailed = playEventResult.FailureResult;
+}
+```
+
+> [!WARNING]
+> EventProcessor uses OperationContext to track an event with its related request. If OperationContext wasn't set during the request, EventProcessor sets a generated GUID to track future events for that request. If you set your own OperationContext during a request, EventProcessor still works, but it's advised to use a different value for each request so that EventProcessor can distinguish request 1's events from request 2's events.
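The correlation behavior this warning describes can be sketched in plain Python. This is only an illustration of the idea, not the SDK's implementation; every name below is our own invention.

```python
import uuid

# Minimal sketch of operationContext-based correlation: each request gets a
# context (caller-supplied or a generated GUID), and every incoming webhook
# event is routed back to its request by that context.
class EventCorrelator:
    def __init__(self):
        self.pending = {}  # operation_context -> list of received events

    def new_request(self, operation_context=None):
        # If the caller didn't set an OperationContext, generate a GUID,
        # mirroring the fallback behavior described above.
        context = operation_context or str(uuid.uuid4())
        self.pending[context] = []
        return context

    def process_event(self, event):
        # Route the event to its originating request by operationContext.
        context = event.get("operationContext")
        if context in self.pending:
            self.pending[context].append(event)

    def events_for(self, context):
        return self.pending.get(context, [])
```

Reusing the same context for two concurrent requests would merge their event streams, which is why distinct per-request values are advised.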
+
+## Handling events with Ongoing EventProcessor
+
+Some events can happen at any time during the call, such as CallDisconnected, or ParticipantsUpdated when another caller leaves the call. EventProcessor provides a way to handle these events easily with an ongoing event handler.
+
+```csharp
+// Use your call automation client that established the call
+CallAutomationEventProcessor eventProcessor = callAutomationClient.GetEventProcessor();
+
+// attach ongoing EventProcessor for this particular call,
+// then print the number of participants in the call
+eventProcessor.AttachOngoingEventProcessor<ParticipantsUpdated>(callConnectionId, receivedEvent => {
+    logger.LogInformation($"Number of participants in this Call: [{callConnectionId}], Number Of Participants[{receivedEvent.Participants.Count}]");
+});
+```
+
+With this ongoing EventProcessor attached, we can now print the number of participants in the call whenever people join or leave it.
+
+> [!TIP]
+> You can attach an ongoing handler to any event type! This opens the possibility of building your application with a callback design pattern.
+
+## Advanced: Using predicate to wait for specific event
+
+If you would like to wait for a specific event with a given predicate, without relying on the EventResult returned from a request, you can do that as well. Let's wait for the CallDisconnected event by matching its CallConnectionId and type.
+
+```csharp
+// Use your call automation client that established the call
+CallAutomationEventProcessor eventProcessor = callAutomationClient.GetEventProcessor();
+
+// With the given matching information, wait for this specific event
+CallDisconnected disconnectedEvent = (CallDisconnected)await eventProcessor.WaitForEvent(predicate =>
+    predicate.CallConnectionId == myConnectionId
+    && predicate.GetType() == typeof(CallDisconnected)
+);
+```
+
+## Advanced: Detailed specification
+
+- The default timeout for waiting on the EventProcessor is 4 minutes. After that, a timeout exception is thrown.
+- The same call automation client that made the request must be used to wait on event using EventProcessor.
+- Once the CallDisconnect event is received for the call, all of the call's events are removed from the memory.
+- In some rare cases, an event may arrive earlier than the response of the request. In these cases, the event is saved in a backlog for 5 seconds.
+- Multiple EventProcessors may wait on the same event. Once the matching event arrives, every EventProcessor waiting on that event returns with the arrived event.
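The backlog and multi-waiter points above can be modeled in a few lines of Python. This is an illustrative model with invented names, assuming the 5-second window from the specification; it is not the SDK's code.

```python
import time

# Sketch of the backlog: an event that arrives before the request's response
# is kept for a short window so a late waiter can still find it, and every
# waiter whose predicate matches receives the same event.
BACKLOG_TTL_SECONDS = 5.0

class EventBacklog:
    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable clock for testing
        self.items = []             # list of (arrival_time, event)

    def add(self, event):
        self.items.append((self.clock(), event))

    def take_matching(self, predicate):
        # Drop entries older than the TTL, then return every surviving
        # event the predicate matches (events aren't consumed per waiter).
        now = self.clock()
        self.items = [(t, e) for t, e in self.items
                      if now - t <= BACKLOG_TTL_SECONDS]
        return [e for _, e in self.items if predicate(e)]
```

A waiter arriving 3 seconds after the event still finds it; a waiter arriving after the window finds nothing and must rely on the normal wait path.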
+
+## Next steps
+
+- Learn more about [How to control and steer calls with Call Automation](../call-automation/actions-for-call-control.md).
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md
Last updated 09/06/2022
-zone_pivot_groups: acs-csharp-java
+zone_pivot_groups: acs-js-csharp-java-python
# Customize voice prompts to users with Play action
This guide will help you get started with playing audio files to participants by
[!INCLUDE [Play audio with Java](./includes/play-audio-quickstart-java.md)]
::: zone-end
+## Event codes
+|Status|Code|Subcode|Message|
+|-|--|--|--|
+|PlayCompleted|200|0|Action completed successfully.|
+|PlayFailed|400|8535|Action failed, file format is invalid.|
+|PlayFailed|400|8536|Action failed, file could not be downloaded.|
+|PlayCanceled|400|8508|Action failed, the operation was canceled.|
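As an illustration, an application might map the status/code/subcode triples in the table above to messages when logging callback events. This is a hypothetical helper with invented names, not part of the SDK.

```python
# Lookup table built directly from the event codes documented above.
def describe_play_result(status, code, subcode):
    known = {
        ("PlayCompleted", 200, 0): "Action completed successfully.",
        ("PlayFailed", 400, 8535): "Action failed, file format is invalid.",
        ("PlayFailed", 400, 8536): "Action failed, file could not be downloaded.",
        ("PlayCanceled", 400, 8508): "Action failed, the operation was canceled.",
    }
    return known.get((status, code, subcode), "Unknown play outcome.")
```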
## Clean up resources

If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
Last updated 09/16/2022
-zone_pivot_groups: acs-csharp-java
+zone_pivot_groups: acs-js-csharp-java-python
# Gather user input with Recognize action
This guide will help you get started with recognizing DTMF input provided by par
[!INCLUDE [Recognize action with Java](./includes/recognize-action-quickstart-java.md)]
::: zone-end

## Event codes

|Status|Code|Subcode|Message|
This guide will help you get started with recognizing DTMF input provided by par
|RecognizeFailed|400|8532|Action failed, inter-digit silence timeout reached.| |RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.| |RecognizeFailed|500|8512|Unknown internal server error.|
+|RecognizeCanceled|400|8508|Action failed, the operation was canceled.|
+ ## Clean up resources
communication-services Secure Webhook Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/secure-webhook-endpoint.md
+zone_pivot_groups: acs-js-csharp-java-python
# How to secure webhook endpoint
Securing the delivery of messages from end to end is crucial for ensuring the confidentiality, integrity, and trustworthiness of sensitive information transmitted between systems. Your ability and willingness to trust information received from a remote system relies on the sender providing their identity. Call Automation has two ways of communicating events that can be secured: the shared IncomingCall event sent by Azure Event Grid, and all other mid-call events sent by the Call Automation platform via webhook.

## Incoming Call Event

Azure Communication Services relies on Azure Event Grid subscriptions to deliver the [IncomingCall event](../../concepts/call-automation/incoming-call-notification.md). You can refer to the Azure Event Grid team for their [documentation about how to secure a webhook subscription](../../../event-grid/secure-webhook-delivery.md).

## Call Automation webhook events
Azure Communication Services relies on Azure Event Grid subscriptions to deliver
A common way you can improve this security is by implementing an API KEY mechanism. Your webserver can generate the key at runtime and provide it in the callback URI as a query parameter when you answer or create a call. Your webserver can verify the key in the webhook callback from Call Automation before allowing access. Some customers require more security measures. In these cases, a perimeter network device may verify the inbound webhook, separate from the webserver or application itself. The API key mechanism alone may not be sufficient.
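A minimal sketch of that API key mechanism, assuming a Python webserver: the server generates a key at runtime, appends it to the callback URI as a query parameter, and verifies it on each inbound webhook callback. All function and parameter names here are illustrative, not part of any SDK.

```python
import hmac
import secrets

# Hypothetical helpers sketching the API key flow described above.
def make_callback_uri(base_uri):
    # Generate a random key at runtime and embed it in the callback URI
    # that you pass when answering or creating a call.
    api_key = secrets.token_urlsafe(32)
    return f"{base_uri}?apiKey={api_key}", api_key

def is_authorized(received_key, expected_key):
    # Constant-time comparison avoids leaking the key through timing.
    return hmac.compare_digest(received_key, expected_key)
```

Your webhook handler would extract `apiKey` from the request's query string and reject the callback when `is_authorized` returns false.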
-## Improving Call Automation webhook callback security
-
-Each mid-call webhook callback sent by Call Automation uses a signed JSON Web Token (JWT) in the Authentication header of the inbound HTTPS request. You can use standard Open ID Connect (OIDC) JWT validation techniques to ensure the integrity of the token as follows. The lifetime of the JWT is five (5) minutes and a new token is created for every event sent to the callback URI.
-
-1. Obtain the Open ID configuration URL: https://acscallautomation.communication.azure.com/calling/.well-known/acsopenidconfiguration
-2. Install the [Microsoft.AspNetCore.Authentication.JwtBearer NuGet](https://www.nuget.org/packages/Microsoft.AspNetCore.Authentication.JwtBearer) package.
-3. Configure your application to validate the JWT using the NuGet package and the configuration of your ACS resource. You need the `audience` values as it is present in the JWT payload.
-4. Validate the issuer, audience and the JWT token.
- - The audience is your ACS resource ID you used to setup your Call Automation client. Refer [here](../../quickstarts/voice-video-calling/get-resource-id.md) about how to get it.
- - The JSON Web Key Set (JWKS) endpoint in the OpenId configuration contains the keys used to validate the JWT token. When the signature is valid and the token hasn't expired (within 5 minutes of generation), the client can use the token for authorization.
-
-This sample code demonstrates how to use `Microsoft.IdentityModel.Protocols.OpenIdConnect` to validate webhook payload
-## [csharp](#tab/csharp)
-```csharp
-using Microsoft.AspNetCore.Authentication.JwtBearer;
-using Microsoft.IdentityModel.Protocols;
-using Microsoft.IdentityModel.Protocols.OpenIdConnect;
-using Microsoft.IdentityModel.Tokens;
-
-var builder = WebApplication.CreateBuilder(args);
-
-builder.Services.AddEndpointsApiExplorer();
-builder.Services.AddSwaggerGen();
-
-// Add ACS CallAutomation OpenID configuration
-var configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
- builder.Configuration["OpenIdConfigUrl"],
- new OpenIdConnectConfigurationRetriever());
-var configuration = configurationManager.GetConfigurationAsync().Result;
-builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddJwtBearer(options =>
- {
- options.Configuration = configuration;
- options.TokenValidationParameters = new TokenValidationParameters
- {
- ValidAudience = builder.Configuration["AllowedAudience"]
- };
- });
-builder.Services.AddAuthorization();
-var app = builder.Build();
-
-// Configure the HTTP request pipeline.
-if (app.Environment.IsDevelopment())
-{
- app.UseSwagger();
- app.UseSwaggerUI();
-}
-
-app.UseHttpsRedirection();
-
-app.MapPost("/api/callback", (CloudEvent[] events) =>
-{
- // Your implemenation on the callback event
- return Results.Ok();
-})
-.RequireAuthorization()
-.WithOpenApi();
-
-app.UseAuthentication();
-app.UseAuthorization();
-
-app.Run();
-
-```
## Next steps
-- Learn more about [How to control and steer calls with Call Automation](../call-automation/actions-for-call-control.md).
+- Learn more about [How to control and steer calls with Call Automation](../call-automation/actions-for-call-control.md).
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/callflows-for-customer-interactions.md
Last updated 09/06/2022
-zone_pivot_groups: acs-csharp-java
+zone_pivot_groups: acs-js-csharp-java-python
# Build a customer interaction workflow using Call Automation
In this quickstart, you'll learn how to build an application that uses the Azure
[!INCLUDE [Call flows for customer interactions with Java](./includes/callflow-for-customer-interactions-java.md)]
::: zone-end
+
+
## Subscribe to IncomingCall event

IncomingCall is an Azure Event Grid event for notifying incoming calls to your Communication Services resource. To learn more about it, see [this guide](../../concepts/call-automation/incoming-call-notification.md).
If you want to clean up and remove a Communication Services subscription, you ca
- Learn how to [redirect inbound telephony calls](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md) with Call Automation.
- Learn more about [Play action](../../concepts/call-automation/play-action.md).
- Learn more about [Recognize action](../../concepts/call-automation/recognize-action.md).
+- Learn more about [Handle Call Automation Events with EventProcessor](../../how-tos/call-automation/handle-events-with-event-processor.md).
communication-services Quickstart Make An Outbound Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/call-automation/quickstart-make-an-outbound-call.md
+
+ Title: Quickstart - Make an outbound call using Call Automation
+
+description: In this quickstart, you'll learn how to make an outbound PSTN call using Azure Communication Services Call Automation
+ Last updated : 05/26/2023
+zone_pivot_groups: acs-js-csharp-java-python
++
+# Quickstart: Make an outbound call using Call Automation
+
+Azure Communication Services (ACS) Call Automation APIs are a powerful way to create interactive calling experiences. In this quickstart, you learn how to make an outbound call and recognize various events in the call.
++++
confidential-computing How To Create Custom Image Confidential Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-create-custom-image-confidential-vm.md
+
+ Title: Create a custom image for Azure confidential VMs
+description: Learn how to use the Azure CLI to create a confidential VM custom image from a VHD.
++
+ Last updated : 6/09/2023
+# How to create a custom image for Azure confidential VMs
+
+**Applies to:** :heavy_check_mark: Linux VMs
+
+This article shows you how to use the Azure Command-Line Interface (Azure CLI) to create a custom image for your confidential virtual machine (confidential VM) in Azure. The Azure CLI is used to create and manage Azure resources via either the command line or scripts.
+
+## Prerequisites
+
+If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+### Launch Azure Cloud Shell
+
+Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+To open the Cloud Shell, select **Try it** from the upper-right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste them into the Cloud Shell, and select **Enter** to run them.
+
+If you prefer to install and use the CLI locally, this quickstart requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
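Later commands in this article reference shell variables such as `$resourceGroupName`, `$vmname`, and `$storageAccountName` without defining them. A minimal sketch of definitions to set first — every value here is an illustrative placeholder, not one the article mandates:

```shell
# Illustrative placeholder values; substitute your own names and region.
resourceGroupName="cvm-image-rg"
region="eastus"
vmname="source-vm"
cvmname="my-cvm"
storageAccountName="cvmimages$RANDOM"   # must be globally unique, lowercase alphanumeric
storageContainerName="vhds"
referenceVHD="reference.vhd"
galleryName="cvmgallery"
imageDefinitionName="cvm-image-def"
sigPublisherName="contoso"
sigSkuName="cvm-sku"
galleryImageVersion="1.0.0"
```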
+### Create a resource group
+
+Create a resource group with the [az group create](/cli/azure/group) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+> [!NOTE]
+> Confidential VMs are not available in all locations. For currently supported locations, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
+```azurecli-interactive
+az group create --name $resourceGroupName --location eastus
+```
+## Create custom image for Azure confidential VMs
+
+1. [Create a virtual machine](/azure/virtual-machines/linux/quick-create-cli) with an Ubuntu image of choice from the list of [Azure supported images.](/azure/virtual-machines/linux/cli-ps-findimage)
+
+2. Ensure the kernel version is at least 5.15.0-1037-azure. After connecting to the VM, run `uname -r` to check the kernel version. At this point, you can also make any other changes to the image that you need.
+
+3. Deallocate your virtual machine.
+ ```azurecli
+ az vm deallocate --name $vmname --resource-group $resourceGroupName
+ ```
+4. Create a shared access signature (SAS) for the OS disk and store the URL in a variable. This OS disk doesn't have to be in the same resource group as the confidential VM.
+ ```azurecli
+ disk_name=$(az vm show --name $vmname --resource-group $resourceGroupName | jq -r .storageProfile.osDisk.name)
+ disk_url=$(az disk grant-access --duration-in-seconds 3600 --name $disk_name --resource-group $resourceGroupName | jq -r .accessSas)
+ ```
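The kernel check in step 2 can be scripted rather than eyeballed. A small sketch, assuming GNU coreutils (`sort -V`) and that it runs inside the source VM, which compares the running kernel against the minimum:

```shell
# meets_min VERSION MIN: succeeds when VERSION sorts at or above MIN.
meets_min() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare the running kernel against the minimum required for this image.
if meets_min "$(uname -r)" "5.15.0-1037"; then
  echo "kernel is new enough"
else
  echo "kernel is too old; upgrade before capturing the image" >&2
fi
```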
+
+#### Create a storage account to store the exported disk
+
+1. Create a storage account.
+ ```azurecli
+ az storage account create --resource-group ${resourceGroupName} --name ${storageAccountName} --location $region --sku "Standard_LRS"
+ ```
+2. Create a container within the storage account.
+ ```azurecli
+ az storage container create --name $storageContainerName --account-name $storageAccountName --resource-group $resourceGroupName
+ ```
+3. Generate a shared access signature (SAS) token for the [storage container](/cli/azure/storage/container) and save it in a variable.
+ ```azurecli
+ container_sas=$(az storage container generate-sas --name $storageContainerName --account-name $storageAccountName --auth-mode key --expiry 2024-01-01 --https-only --permissions dlrw -o tsv)
+ ```
+4. Use `azcopy` to copy the OS disk to the storage container.
+ ```azurecli
+ blob_url="https://${storageAccountName}.blob.core.windows.net/$storageContainerName/$referenceVHD"
+ azcopy copy "$disk_url" "${blob_url}?${container_sas}"
+ ```
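The SAS expiry in step 3 is hard-coded to `2024-01-01`. As a sketch (assuming GNU `date`), you could instead compute a short-lived expiry relative to the current time:

```shell
# Compute a SAS expiry 7 days from now in the UTC datetime form the CLI accepts.
sas_expiry=$(date -u -d "+7 days" +%Y-%m-%dT%H:%MZ)
echo "$sas_expiry"
# Then pass it instead of a fixed date:
#   az storage container generate-sas ... --expiry "$sas_expiry" ...
```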
+
+#### Create a confidential supported image
+
+1. Create a Shared Image Gallery.
+ ```azurecli
+ az sig create --resource-group $resourceGroupName --gallery-name $galleryName
+ ```
+2. Create a [shared image gallery (SIG) image definition](/cli/azure/sig/image-definition) that supports confidential VMs. Set new names for the gallery image definition, SIG publisher, and SKU.
+ ```azurecli
+ az sig image-definition create --resource-group $resourceGroupName --location $region --gallery-name $galleryName --gallery-image-definition $imageDefinitionName --publisher $sigPublisherName --offer ubuntu --sku $sigSkuName --os-type Linux --os-state specialized --hyper-v-generation V2 --features SecurityType=ConfidentialVMSupported
+ ```
+3. Get the storage account ID.
+ ```azurecli
+ storageAccountId=$(az storage account show --name $storageAccountName --resource-group $resourceGroupName | jq -r .id)
+ ```
+4. Create a SIG image version.
+ ```azurecli
+ az sig image-version create --resource-group $resourceGroupName --gallery-name $galleryName --gallery-image-definition $imageDefinitionName --gallery-image-version $galleryImageVersion --os-vhd-storage-account $storageAccountId --os-vhd-uri $blob_url
+ ```
+5. Store the ID of the SIG image version created in the previous step.
+ ```azurecli
+ galleryImageId=$(az sig image-version show --gallery-image-definition $imageDefinitionName --gallery-image-version $galleryImageVersion --gallery-name $galleryName --resource-group $resourceGroupName | jq -r .id)
+ ```
+#### Create a confidential VM
+
+1. Create a VM with the [az vm create](/cli/azure/vm) command. For more information, see [secure boot and vTPM](/azure/virtual-machines/trusted-launch). For more information on disk encryption, see [confidential OS disk encryption](confidential-vm-overview.md). Currently, confidential VMs support the [DC series](/azure/virtual-machines/dcasv5-dcadsv5-series) and [EC series](/azure/virtual-machines/ecasv5-ecadsv5-series) VM sizes.
+ ```azurecli-interactive
+ az vm create \
+ --resource-group $resourceGroupName \
+ --name $cvmname \
+ --size Standard_DC4as_v5 \
+ --enable-vtpm true \
+ --enable-secure-boot true \
+ --image $galleryImageId \
+ --public-ip-sku Standard \
+ --security-type ConfidentialVM \
+ --os-disk-security-encryption-type VMGuestStateOnly \
+ --specialized
+ ```
+## Next steps
+> [!div class="nextstepaction"]
+> [Connect and attest the CVM through Microsoft Azure Attestation Sample App](quick-create-confidential-vm-azure-cli-amd.md)
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Scaling is defined by the combination of limits and rules.
| Scale limit | Default value | Min value | Max value | |||||
- | Minimum number of replicas per revision | 0 | 0 | 30 |
- | Maximum number of replicas per revision | 10 | 1 | 30 |
+ | Minimum number of replicas per revision | 0 | 0 | 300 |
+ | Maximum number of replicas per revision | 10 | 1 | 300 |
To request an increase in maximum replica amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
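In a container app's resource definition, these limits map to the `minReplicas` and `maxReplicas` properties in the template's `scale` section — a sketch with illustrative values that stay within the ranges above:

```yaml
properties:
  template:
    scale:
      minReplicas: 1
      maxReplicas: 30
```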
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
+
+ Title: 'Tutorial: Run GitHub Actions runners and Azure Pipelines agents with Azure Container Apps jobs'
+description: Learn to create self-hosted CI/CD runners and agents with jobs in Azure Container Apps
+ Last updated : 06/01/2023
+zone_pivot_groups: container-apps-jobs-self-hosted-ci-cd
++
+# Tutorial: Deploy self-hosted CI/CD runners and agents with Azure Container Apps jobs
+
+GitHub Actions and Azure Pipelines allow you to run CI/CD workflows with self-hosted runners and agents. You can run self-hosted runners and agents using event-driven Azure Container Apps [jobs](./jobs.md).
+
+Self-hosted runners are useful when you need to run workflows that require access to local resources or tools that aren't available to a cloud-hosted runner. For example, a self-hosted runner in a Container Apps job allows your workflow to access resources inside the job's virtual network that aren't accessible to a cloud-hosted runner.
+
+Running self-hosted runners as event-driven jobs allows you to take advantage of the serverless nature of Azure Container Apps. Jobs execute automatically when a workflow is triggered and exit when the job completes.
+
+You only pay for the time that the job is running.
++
+In this tutorial, you learn how to run GitHub Actions runners as an [event-driven Container Apps job](jobs.md#event-driven-jobs).
+
+> [!div class="checklist"]
+> * Create a Container Apps environment to deploy your self-hosted runner
+> * Create a GitHub repository for running a workflow that uses a self-hosted runner
+> * Build a container image that runs a GitHub Actions runner
+> * Deploy the runner as a job to the Container Apps environment
+> * Create a workflow that uses the self-hosted runner and verify that it runs
+
+> [!IMPORTANT]
+> Self-hosted runners are only recommended for *private* repositories. Using them with public repositories can allow dangerous code to execute on your self-hosted runner. For more information, see [Self-hosted runner security](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#self-hosted-runner-security).
+++
+In this tutorial, you learn how to run Azure Pipelines agents as an [event-driven Container Apps job](jobs.md#event-driven-jobs).
+
+> [!div class="checklist"]
+> * Create a Container Apps environment to deploy your self-hosted agent
+> * Create an Azure DevOps organization and project
+> * Build a container image that runs an Azure Pipelines agent
+> * Use a manual job to create a placeholder agent in the Container Apps environment
+> * Deploy the agent as a job to the Container Apps environment
+> * Create a pipeline that uses the self-hosted agent and verify that it runs
+
+> [!IMPORTANT]
+> Self-hosted agents are only recommended for *private* projects. Using them with public projects can allow dangerous code to execute on your self-hosted agent. For more information, see [Self-hosted agent security](/azure/devops/pipelines/agents/linux-agent#permissions).
++
+> [!NOTE]
+> Container apps and jobs don't support running Docker in containers. Any steps in your workflows that use Docker commands will fail when run on a self-hosted runner or agent in a Container Apps job.
+
+## Prerequisites
+
+- **Azure account**: If you don't have one, you [can create one for free](https://azure.microsoft.com/free/).
+
+- **Azure CLI**: Install the [Azure CLI](/cli/azure/install-azure-cli).
+- **Azure DevOps organization**: If you don't have a DevOps organization with an active subscription, you [can create one for free](https://azure.microsoft.com/services/devops/).
+
+See [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a list of limitations.
+
+## Setup
+
+1. To sign in to Azure from the CLI, run the following command and follow the prompts to complete the authentication process.
+
+ ```bash
+ az login
+ ```
+
+1. Ensure you're running the latest version of the CLI via the `upgrade` command.
+
+ ```bash
+ az upgrade
+ ```
+
+1. Install the latest version of the Azure Container Apps CLI extension.
+
+ ```bash
+ az extension add --name containerapp --upgrade
+ ```
+
+1. Register the `Microsoft.App` and `Microsoft.OperationalInsights` namespaces if you haven't already registered them in your Azure subscription.
+
+ ```bash
+ az provider register --namespace Microsoft.App
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+1. Define the environment variables that are used throughout this article.
+
+ ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-github-actions"
+
+ ```bash
+ RESOURCE_GROUP="jobs-sample"
+ LOCATION="northcentralus"
+ ENVIRONMENT="env-jobs-sample"
+ JOB_NAME="github-actions-runner-job"
+ ```
+
+ ::: zone-end
+
+ ::: zone pivot="container-apps-jobs-self-hosted-ci-cd-azure-pipelines"
+
+ ```bash
+ RESOURCE_GROUP="jobs-sample"
+ LOCATION="northcentralus"
+ ENVIRONMENT="env-jobs-sample"
+ JOB_NAME="azure-pipelines-agent-job"
+ PLACEHOLDER_JOB_NAME="placeholder-agent-job"
+ ```
+
+ ::: zone-end
+
+## Create a Container Apps environment
+
+The Azure Container Apps environment acts as a secure boundary around container apps and jobs so they can share the same network and communicate with each other.
+
+1. Create a resource group using the following command.
+
+ ```bash
+ az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+1. Create the Container Apps environment using the following command.
+
+ ```bash
+ az containerapp env create \
+ --name "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
++
+## Create a GitHub repository for running a workflow
+
+To execute a workflow, you need to create a GitHub repository that contains the workflow definition.
+
+1. Navigate to [GitHub](https://github.com/new) and sign in.
+
+1. Create a new repository by entering the following values.
+
+ | Setting | Value |
+ |||
+ | Owner | Select your GitHub username. |
+ | Repository name | Enter a name for your repository. |
+ | Visibility | Select **Private**. |
+ | Initialize this repository with | Select **Add a README file**. |
+
+ Leave the rest of the values as their default selection.
+
+1. Select **Create repository**.
+
+1. In your new repository, select **Actions**.
+
+1. Search for the *Simple workflow* template and select **Configure**.
+
+1. Select **Commit changes** to add the workflow to your repository.
+
+The workflow runs on the `ubuntu-latest` GitHub-hosted runner and prints a message to the console. Later, you replace the GitHub-hosted runner with a self-hosted runner.
+
+## Get a GitHub personal access token
+
+To run a self-hosted runner, you need to create a personal access token (PAT) in GitHub. Each time a runner starts, the PAT is used to generate a token to register the runner with GitHub. The PAT is also used by the GitHub Actions runner scale rule to monitor the repository's workflow queue and start runners as needed.
+
+1. In GitHub, select your profile picture in the upper-right corner and select **Settings**.
+
+1. Select **Developer settings**.
+
+1. Under *Personal access tokens*, select **Fine-grained tokens**.
+
+1. Select **Generate new token**.
+
+1. In the *New fine-grained personal access token* screen, enter the following values.
+
+ | Setting | Value |
+ |||
+ | Token name | Enter a name for your token. |
+ | Expiration | Select **30 days**. |
+ | Repository access | Select **Only select repositories** and select the repository you created. |
+
+ Enter the following values for *Repository permissions*.
+
+ | Setting | Value |
+ |||
+ | Actions | Select **Read-only**. |
+ | Administration | Select **Read and write**. |
+ | Metadata | Select **Read-only**. |
+
+1. Select **Generate token**.
+
+1. Copy the token value.
+
+1. Define variables that are used to configure the runner and scale rule later.
+
+ ```bash
+ GITHUB_PAT="<GITHUB_PAT>"
+ REPO_OWNER="<REPO_OWNER>"
+ REPO_NAME="<REPO_NAME>"
+ ```
+
+ Replace the placeholders with the following values:
+
+ | Placeholder | Value |
+ |||
+ | `<GITHUB_PAT>` | The GitHub PAT you generated. |
+ | `<REPO_OWNER>` | The owner of the repository you created earlier. This value is usually your GitHub username. |
+ | `<REPO_NAME>` | The name of the repository you created earlier. This value is the same name you entered in the *Repository name* field. |
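You can reproduce the queue check that drives scaling by hand. A sketch, assuming `curl` and `jq` are available and using the variables defined above, that counts queued workflow runs — roughly the signal the `github-runner` scale rule acts on:

```shell
# Count workflow runs currently queued in the repository.
queued=$(curl -s \
  -H "Authorization: Bearer $GITHUB_PAT" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/$REPO_OWNER/$REPO_NAME/actions/runs?status=queued" \
  | jq -r '.total_count')
echo "queued workflow runs: $queued"
```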
+
+## Build the GitHub Actions runner container image
+
+To create a self-hosted runner, you need to build a container image that executes the runner. In this section, you build the container image and push it to a container registry.
+
+> [!NOTE]
+> The image you build in this tutorial contains a basic self-hosted runner that's suitable for running as a Container Apps job. You can customize it to include additional tools or dependencies that your workflows require.
+
+1. Define a name for your container image and registry.
+
+ ```bash
+ CONTAINER_IMAGE_NAME="github-actions-runner:1.0"
+ CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>"
+ ```
+
+ Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry. Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only.
+
+1. Create a container registry.
+
+ ```bash
+ az acr create \
+ --name "$CONTAINER_REGISTRY_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION" \
+ --sku Basic \
+ --admin-enabled true
+ ```
+
+1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/github-actions-runner). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command.
+
+ ```bash
+ az acr build \
+ --registry "$CONTAINER_REGISTRY_NAME" \
+ --image "$CONTAINER_IMAGE_NAME" \
+ --file "Dockerfile.github" \
+ "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git"
+ ```
+
+ The image is now available in the container registry.
+
+## Deploy a self-hosted runner as a job
+
+You can now create a job that uses the container image. In this section, you create a job that executes the self-hosted runner and authenticates with GitHub using the PAT you generated earlier. The job uses the [`github-runner` scale rule](https://keda.sh/docs/latest/scalers/github-runner/) to create job executions based on the number of pending workflow runs.
+
+1. Create a job in the Container Apps environment.
+
+ ```bash
+ az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \
+ --trigger-type Event \
+ --replica-timeout 300 \
+ --replica-retry-limit 0 \
+ --replica-completion-count 1 \
+ --parallelism 1 \
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
+ --min-executions 0 \
+ --max-executions 10 \
+ --polling-interval 30 \
+ --scale-rule-name "github-runner" \
+ --scale-rule-type "github-runner" \
+ --scale-rule-metadata "github-runner=https://api.github.com" "owner=$REPO_OWNER" "runnerScope=repo" "repos=$REPO_NAME" "targetWorkflowQueueLength=1" \
+ --scale-rule-auth "personalAccessToken=personal-access-token" \
+ --cpu "2.0" \
+ --memory "4Gi" \
+ --secrets "personal-access-token=$GITHUB_PAT" \
+ --env-vars "GITHUB_PAT=secretref:personal-access-token" "REPO_URL=https://github.com/$REPO_OWNER/$REPO_NAME" "REGISTRATION_TOKEN_API_URL=https://api.github.com/repos/$REPO_OWNER/$REPO_NAME/actions/runners/registration-token" \
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+ ```
+
+ The following table describes the key parameters used in the command.
+
+ | Parameter | Description |
+ | | |
+ | `--replica-timeout` | The maximum duration a replica can execute. |
+ | `--replica-retry-limit` | The number of times to retry a failed replica. |
+ | `--replica-completion-count` | The number of replicas to complete successfully before a job execution is considered successful. |
+ | `--parallelism` | The number of replicas to start per job execution. |
+ | `--min-executions` | The minimum number of job executions to run per polling interval. |
+ | `--max-executions` | The maximum number of job executions to run per polling interval. |
+ | `--polling-interval` | The polling interval at which to evaluate the scale rule. |
+ | `--scale-rule-name` | The name of the scale rule. |
+ | `--scale-rule-type` | The type of scale rule to use. To learn more about the GitHub runner scaler, see the KEDA [documentation](https://keda.sh/docs/latest/scalers/github-runner/). |
+ | `--scale-rule-metadata` | The metadata for the scale rule. |
+ | `--scale-rule-auth` | The authentication for the scale rule. |
+ | `--secrets` | The secrets to use for the job. |
+ | `--env-vars` | The environment variables to use for the job. |
+ | `--registry-server` | The container registry server to use for the job. For an Azure Container Registry, the command automatically configures authentication. |
+
+ The scale rule configuration defines the event source to monitor. It's evaluated on each polling interval and determines how many job executions to trigger. To learn more, see [Set scaling rules](scale-app.md).
+
+The event-driven job is now created in the Container Apps environment.
+
+## Run a workflow and verify the job
+
+The job is configured to evaluate the scale rule every 30 seconds. During each evaluation, it checks the number of pending workflow runs that require a self-hosted runner and starts a new job execution for each pending workflow run, up to a configured maximum of 10 executions.
+
+To verify the job was configured correctly, you modify the workflow to use a self-hosted runner and trigger a workflow run. You can then view the job execution logs to see the workflow run.
+
+1. In the GitHub repository, navigate to the workflow you generated earlier. It's a YAML file in the `.github/workflows` directory.
+
+1. Select **Edit in place**.
+
+1. Update the `runs-on` property to `self-hosted`:
+
+ ```yaml
+ runs-on: self-hosted
+ ```
+
+1. Select **Commit changes...**.
+
+1. Select **Commit changes**.
+
+1. Navigate to the **Actions** tab.
+
+ A new workflow is now queued. Within 30 seconds, the job execution will start and the workflow will complete soon after.
+
+ Wait for the action to complete before going on to the next step.
+
+1. List the executions of the job to confirm a job execution was created and completed successfully.
+
+ ```bash
+ az containerapp job execution list \
+ --name "$JOB_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --output table \
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
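If you script this verification, a small retry loop avoids checking before the execution has started. A bash sketch of a generic helper (the helper itself uses nothing beyond bash built-ins):

```shell
# retry MAX DELAY CMD...: rerun CMD until it succeeds, at most MAX attempts,
# sleeping DELAY seconds between attempts.
retry() {
  local n=0
  until "${@:3}"; do
    n=$((n + 1))
    [ "$n" -ge "$1" ] && return 1
    sleep "$2"
  done
}
```

You could then define a check that compares `az containerapp job execution list --query '[0].properties.status' -o tsv` against `Succeeded` and run it as, for example, `retry 20 15 check_succeeded` to wait up to about five minutes.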
+++
+## Create an Azure DevOps project and repository
+
+To execute a pipeline, you need an Azure DevOps project and repository.
+
+1. Navigate to [Azure DevOps](https://aex.dev.azure.com/) and sign in to your account.
+
+1. Select an existing organization or create a new one.
+
+1. In the organization overview page, select **New project** and enter the following values.
+
+ | Setting | Value |
+ |||
+ | *Project name* | Enter a name for your project. |
+ | *Visibility* | Select **Private**. |
+
+1. Select **Create**.
+
+1. From the side navigation, select **Repos**.
+
+1. Under *Initialize main branch with a README or .gitignore*, select **Add a README**.
+
+1. Leave the rest of the values as defaults and select **Initialize**.
+
+## Create a new agent pool
+
+Create a new agent pool to run the self-hosted runner.
+
+1. In your Azure DevOps project, expand the left navigation bar and select **Project settings**.
+
+ :::image type="content" source="media/runners/azure-devops-project-settings.png" alt-text="Screenshot of the Azure DevOps project settings button.":::
+
+1. Under the *Pipelines* section in the *Project settings* navigation menu, select **Agent pools**.
+
+ :::image type="content" source="media/runners/azure-devops-agent-pools.png" alt-text="Screenshot of Azure DevOps agent pools button.":::
+
+1. Select **Add pool** and enter the following values.
+
+ | Setting | Value |
+ |||
+ | *Pool to link* | Select **New**. |
+ | *Pool type* | Select **Self-hosted**. |
+ | *Name* | Enter **container-apps**. |
+ | *Grant access permission to all pipelines* | Select this checkbox. |
+
+1. Select **Create**.
+
+## Get an Azure DevOps personal access token
+
+To run a self-hosted runner, you need to create a personal access token (PAT) in Azure DevOps. The PAT is used to authenticate the runner with Azure DevOps. It's also used by the scale rule to determine the number of pending pipeline runs and trigger new job executions.
+
+1. In Azure DevOps, select *User settings* next to your profile picture in the upper-right corner.
+
+1. Select **Personal access tokens**.
+
+1. In the *Personal access tokens* page, select **New Token** and enter the following values.
+
+ | Setting | Value |
+ |||
+ | *Name* | Enter a name for your token. |
+ | *Organization* | Select the organization you chose or created earlier. |
+ | *Scopes* | Select **Custom defined**. |
+ | *Show all scopes* | Select **Show all scopes**. |
+ | *Agent Pools (Read & manage)* | Select **Agent Pools (Read & manage)**. |
+
+ Leave all other scopes unselected.
+
+1. Select **Create**.
+
+1. Copy the token value to a secure location.
+
+ You can't retrieve the token after you leave the page.
+
+1. Define variables that are used to configure the Container Apps jobs later.
+
+ ```bash
+ AZP_TOKEN="<AZP_TOKEN>"
+ ORGANIZATION_URL="<ORGANIZATION_URL>"
+ AZP_POOL="container-apps"
+ ```
+
+ Replace the placeholders with the following values:
+
+ | Placeholder | Value | Comments |
+ ||||
+ | `<AZP_TOKEN>` | The Azure DevOps PAT you generated. | |
+ | `<ORGANIZATION_URL>` | The URL of your Azure DevOps organization. | For example, `https://dev.azure.com/myorg` or `https://myorg.visualstudio.com`. |
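Before wiring the PAT and organization URL into a job, you can sanity-check them against the Azure DevOps REST API. A sketch, assuming `curl` and `jq` are available; PAT authentication uses basic auth with an empty username:

```shell
# List agent pools visible to the PAT; the container-apps pool appears here
# once you've created it.
curl -s -u ":$AZP_TOKEN" \
  "$ORGANIZATION_URL/_apis/distributedtask/pools?api-version=7.0" \
  | jq -r '.value[].name'
```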
+
+## Build the Azure Pipelines agent container image
+
+To create a self-hosted agent, you need to build a container image that runs the agent. In this section, you build the container image and push it to a container registry.
+
+> [!NOTE]
+> The image you build in this tutorial contains a basic self-hosted agent that's suitable for running as a Container Apps job. You can customize it to include additional tools or dependencies that your pipelines require.
+
+1. Back in your terminal, define a name for your container image and registry.
+
+ ```bash
+ CONTAINER_IMAGE_NAME="azure-pipelines-agent:1.0"
+ CONTAINER_REGISTRY_NAME="<CONTAINER_REGISTRY_NAME>"
+ ```
+
+ Replace `<CONTAINER_REGISTRY_NAME>` with a unique name for creating a container registry.
+
+ Container registry names must be *unique within Azure* and be from 5 to 50 characters in length containing numbers and lowercase letters only.
+
+1. Create a container registry.
+
+ ```bash
+ az acr create \
+ --name "$CONTAINER_REGISTRY_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION" \
+ --sku Basic \
+ --admin-enabled true
+ ```
+
+1. The Dockerfile for creating the runner image is available on [GitHub](https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial/tree/main/azure-pipelines-agent). Run the following command to clone the repository and build the container image in the cloud using the `az acr build` command.
+
+ ```bash
+ az acr build \
+ --registry "$CONTAINER_REGISTRY_NAME" \
+ --image "$CONTAINER_IMAGE_NAME" \
+ --file "Dockerfile.azure-pipelines" \
+ "https://github.com/Azure-Samples/container-apps-ci-cd-runner-tutorial.git"
+ ```
+
+ The image is now available in the container registry.
+
+## Create a placeholder self-hosted agent
+
+Before you can run a self-hosted agent in your new agent pool, you need to create a placeholder agent. Pipelines that use the agent pool fail when there's no placeholder agent. You can create a placeholder agent by running a job that registers an offline placeholder agent.
+
+1. Create a manual job in the Container Apps environment that creates the placeholder agent.
+
+ ```bash
+ az containerapp job create -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \
+ --trigger-type Manual \
+ --replica-timeout 300 \
+ --replica-retry-limit 0 \
+ --replica-completion-count 1 \
+ --parallelism 1 \
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
+ --cpu "2.0" \
+ --memory "4Gi" \
+ --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" \
+ --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" "AZP_PLACEHOLDER=1" "AZP_AGENT_NAME=placeholder-agent" \
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+ ```
+
+ The following table describes the key parameters used in the command.
+
+ | Parameter | Description |
+ | | |
+ | `--replica-timeout` | The maximum duration a replica can execute. |
+ | `--replica-retry-limit` | The number of times to retry a failed replica. |
+ | `--replica-completion-count` | The number of replicas to complete successfully before a job execution is considered successful. |
+ | `--parallelism` | The number of replicas to start per job execution. |
+ | `--secrets` | The secrets to use for the job. |
+ | `--env-vars` | The environment variables to use for the job. |
+ | `--registry-server` | The container registry server to use for the job. For an Azure Container Registry, the command automatically configures authentication. |
+
+ Setting the `AZP_PLACEHOLDER` environment variable configures the agent container to register as an offline placeholder agent without running a job.
+
+1. Execute the manual job to create the placeholder agent.
+
+ ```bash
+ az containerapp job start -n "$PLACEHOLDER_JOB_NAME" -g "$RESOURCE_GROUP"
+ ```
+
+1. List the executions of the job to confirm a job execution was created and completed successfully.
+
+ ```bash
+ az containerapp job execution list \
+ --name "$PLACEHOLDER_JOB_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --output table \
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
+
+1. Verify the placeholder agent was created in Azure DevOps.
+
+ 1. In Azure DevOps, navigate to your project.
+ 1. Select **Project settings** > **Agent pools** > **container-apps** > **Agents**.
+ 1. Confirm that a placeholder agent named `placeholder-agent` is listed.
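The `AZP_PLACEHOLDER` behavior described above can be sketched in shell. This is only an illustration of the branching logic; the real Azure Pipelines agent start script is different, and the function name and messages here are hypothetical:

```shell
# Illustrative sketch only: shows how an entrypoint can branch on AZP_PLACEHOLDER.
# A placeholder run registers the agent and exits instead of waiting for work.
start_agent() {
  if [ "${AZP_PLACEHOLDER:-0}" = "1" ]; then
    echo "placeholder mode: register agent offline and exit"
  else
    echo "normal mode: run agent and wait for a job"
  fi
}

AZP_PLACEHOLDER=1
start_agent
```

Because the placeholder run exits right after registering, the job execution completes quickly and the agent remains listed as offline in the pool.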
+
+## Create a self-hosted agent as an event-driven job
+
+Now that you have a placeholder agent, you can create a self-hosted agent. In this section, you create an event-driven job that runs a self-hosted agent when a pipeline is triggered.
+
+```bash
+az containerapp job create -n "$JOB_NAME" -g "$RESOURCE_GROUP" --environment "$ENVIRONMENT" \
+ --trigger-type Event \
+ --replica-timeout 300 \
+ --replica-retry-limit 0 \
+ --replica-completion-count 1 \
+ --parallelism 1 \
+ --image "$CONTAINER_REGISTRY_NAME.azurecr.io/$CONTAINER_IMAGE_NAME" \
+ --min-executions 0 \
+ --max-executions 10 \
+ --polling-interval 30 \
+ --scale-rule-name "azure-pipelines" \
+ --scale-rule-type "azure-pipelines" \
+ --scale-rule-metadata "poolName=container-apps" "targetPipelinesQueueLength=1" \
+ --scale-rule-auth "personalAccessToken=personal-access-token" "organizationURL=organization-url" \
+ --cpu "2.0" \
+ --memory "4Gi" \
+ --secrets "personal-access-token=$AZP_TOKEN" "organization-url=$ORGANIZATION_URL" \
+ --env-vars "AZP_TOKEN=secretref:personal-access-token" "AZP_URL=secretref:organization-url" "AZP_POOL=$AZP_POOL" \
+ --registry-server "$CONTAINER_REGISTRY_NAME.azurecr.io"
+```
+
+The following table describes the scale rule parameters used in the command.
+
+| Parameter | Description |
+| | |
+| `--min-executions` | The minimum number of job executions to run per polling interval. |
+| `--max-executions` | The maximum number of job executions to run per polling interval. |
+| `--polling-interval` | The polling interval at which to evaluate the scale rule. |
+| `--scale-rule-name` | The name of the scale rule. |
+| `--scale-rule-type` | The type of scale rule to use. To learn more about the Azure Pipelines scaler, see the KEDA [documentation](https://keda.sh/docs/latest/scalers/azure-pipelines/). |
+| `--scale-rule-metadata` | The metadata for the scale rule. |
+| `--scale-rule-auth` | The authentication for the scale rule. |
+
+The scale rule configuration defines the event source to monitor. It's evaluated on each polling interval and determines how many job executions to trigger. To learn more, see [Set scaling rules](scale-app.md).
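As a rough mental model of the scaling behavior (illustrative only, not KEDA's exact algorithm), on each polling interval the scaler divides the pipeline queue length by `targetPipelinesQueueLength` and caps the result at `--max-executions`:

```shell
# Illustrative scaling math: executions ~= ceil(queue_length / target), capped at max.
desired_executions() {
  queue_length=$1
  target=$2
  max_executions=$3
  wanted=$(( (queue_length + target - 1) / target ))  # ceiling division
  if [ "$wanted" -gt "$max_executions" ]; then
    wanted=$max_executions
  fi
  echo "$wanted"
}

desired_executions 5 1 10   # 5 queued pipelines, target 1: prints 5
desired_executions 25 1 10  # demand exceeds the cap of 10: prints 10
```

With `targetPipelinesQueueLength=1`, each queued pipeline can trigger its own job execution, up to the `--max-executions` limit.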
+
+The event-driven job is now created in the Container Apps environment.
+
+## Run a pipeline and verify the job
+
+Now that you've configured a self-hosted agent job, you can run a pipeline and verify it's working correctly.
+
+1. In the left-hand navigation of your Azure DevOps project, navigate to **Pipelines**.
+
+1. Select **Create pipeline**.
+
+1. Select **Azure Repos Git** as the location of your code.
+
+1. Select the repository you created earlier.
+
+1. Select **Starter pipeline**.
+
+1. In the pipeline YAML, change the `pool` from `vmImage: ubuntu-latest` to `name: container-apps`.
+
+ ```yaml
+ pool:
+ name: container-apps
+ ```
+
+1. Select **Save and run**.
+
+ The pipeline runs and uses the self-hosted agent job you created in the Container Apps environment.
+
+1. List the executions of the job to confirm a job execution was created and completed successfully.
+
+ ```bash
+ az containerapp job execution list \
+ --name "$JOB_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --output table \
+ --query '[].{Status: properties.status, Name: name, StartTime: properties.startTime}'
+ ```
++
+> [!TIP]
+> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
+
+## Clean up resources
+
+Once you're done, run the following command to delete the resource group that contains your Container Apps resources.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+```bash
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
+
+To delete your GitHub repository, see [Deleting a repository](https://docs.github.com/en/github/administering-a-repository/managing-repository-settings/deleting-a-repository).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Container Apps jobs](jobs.md)
container-apps Tutorial Dev Services Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-dev-services-kafka.md
+
+ Title: 'Tutorial: Create and use an Apache Kafka service for development'
+description: Create and use an Apache Kafka service for development
++++ Last updated : 06/06/2023+++
+# Tutorial: Create and use an Apache Kafka service for development
+
+The Azure Container Apps service enables you to provision services like Apache Kafka, Redis, and [PostgreSQL](./tutorial-dev-services-postgresql.md) in the same environment as your applications. These services are deployed as a special type of container app. You can connect other applications to them securely without exporting or sharing secrets anywhere. These services run in the same private network as your applications, so you don't have to set up or manage VNets for simple development workflows. Finally, like other container apps, these services scale to zero when not in use, which cuts development costs.
+
+In this tutorial, you learn how to create and use a development Apache Kafka service. The tutorial includes both step-by-step Azure CLI commands and Bicep template fragments for each step. For Bicep, you can add all the fragments to the same Bicep file and deploy the template all at once or after each incremental update.
+
+> [!div class="checklist"]
+> * Create a Container Apps environment to deploy your service and container apps
+> * Create an Apache Kafka service
+> * Create a command-line test app that uses the dev Apache Kafka service
+> * Deploy a `kafka-ui` app to view the Kafka instance
+> * Create or update a consumer or producer app that uses the dev service
+> * Compile a final Bicep template to deploy all resources using a consistent and predictable template deployment
+> * Use an `azd` template for a one-command deployment of all resources
+
+## Prerequisites
+
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+- Optional: Install the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) if you're following the `azd` instructions.
+
+> [!NOTE]
+> For a one-command deployment, skip to the final `azd` [template step](#final-azd-template-for-all-resource).
+
+## Setup
+
+1. Define some variables to use later for the CLI commands and Bicep resources.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ RESOURCE_GROUP="kafka-dev"
+ LOCATION="northcentralus"
+ ENVIRONMENT="aca-env"
+ KAFKA_SVC="kafka01"
+ KAFKA_CLI_APP="kafka-cli-app"
+ KAFKA_UI_APP="kafka-ui-app"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+    You still need to use the CLI to deploy the Bicep template into a resource group, so define the following variables for the CLI.
+
+ ```bash
+ RESOURCE_GROUP="kafka-dev"
+ LOCATION="northcentralus"
+ ```
+
+    For Bicep, start by creating a file called `kafka-dev.bicep`, then add the following parameters with default values:
+
+ ```bicep
+ targetScope = 'resourceGroup'
+ param location string = resourceGroup().location
+ param appEnvironmentName string = 'aca-env'
+ param kafkaSvcName string = 'kafka01'
+ param kafkaCliAppName string = 'kafka-cli-app'
+ param kafkaUiAppName string = 'kafka-ui'
+ ```
+
+    To deploy the Bicep template at any stage, use:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file kafka-dev.bicep
+ ```
+
+ # [azd](#tab/azd)
+
+ Define a couple of values to use for `azd`
+
+ ```bash
+ AZURE_ENV_NAME="azd-kafka-dev"
+ LOCATION="northcentralus"
+ ```
+
+    Initialize a minimal `azd` template:
+
+ ```bash
+ azd init \
+ --environment "$AZURE_ENV_NAME" \
+ --location "$LOCATION" \
+ --no-prompt
+ ```
+
+ > [!NOTE]
+    > `AZURE_ENV_NAME` is different from the Container Apps environment name. `AZURE_ENV_NAME` in `azd` applies to all resources in a template, including resources that aren't Container Apps resources. We'll use a different name for the Container Apps environment.
+
+    Then open `infra/main.bicep` and define a couple of parameters to use later in the template:
+
+ ```bicep
+ param appEnvironmentName string = 'aca-env'
+ param kafkaSvcName string = 'kafka01'
+ param kafkaCliAppName string = 'kafka-cli-app'
+ param kafkaUiAppName string = 'kafka-ui'
+ ```
+
+ -
+
+1. Log in, upgrade the CLI tools, and register the providers needed for your Azure subscription.
+
+ ```bash
+ az login
+ az upgrade
+ az bicep upgrade
+ az extension add --name containerapp --upgrade
+ az provider register --namespace Microsoft.App
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
+
+## Create a Container App Environment
+
+1. Create a resource group
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ ```bash
+ az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+    To deploy the Bicep template at any stage, use:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file kafka-dev.bicep
+ ```
+
+ # [azd](#tab/azd)
+
+    No special setup is needed for managing resource groups in `azd`. `azd` derives the resource group from the `AZURE_ENV_NAME`/`--environment` value.
+    However, you can test the minimal template with:
+
+ ```bash
+ azd up
+ ```
+
+ That command should create an empty resource group.
+
+ -
+
+1. Create a Container Apps environment
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp env create \
+ --name "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+    Add the following to your `kafka-dev.bicep` file:
+
+ ```bicep
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'azure-monitor'
+ }
+ }
+ }
+ ```
+
+ > [!TIP]
+    > The Azure CLI automatically creates a Log Analytics workspace for each environment. To achieve the same result with a Bicep template, you must explicitly declare the workspace and link it to the environment. Doing so makes your deployment and resources more clear and predictable, at the cost of some verbosity. To do that, update the environment resource in Bicep to:
+
+ ```bicep
+ resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${appEnvironmentName}-log-analytics'
+ location: location
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+ }
+
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+ }
+ ```
+
+ # [azd](#tab/azd)
+
+    `azd` templates must use [Bicep modules](/azure/azure-resource-manager/bicep/modules). First, create a `./infra/core/host` folder. Then create a `./infra/core/host/container-apps-environment.bicep` module with the following content:
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+
+ resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${name}-log-analytics'
+ location: location
+ tags: tags
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+ }
+
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+ }
+
+ output appEnvironmentId string = appEnvironment.id
+ ```
+
+    Then in `./infra/main.bicep`, load the module using:
+
+ ```bicep
+ module appEnvironment './core/host/container-apps-environment.bicep' = {
+ name: 'appEnvironment'
+ scope: rg
+ params: {
+ name: appEnvironmentName
+ location: location
+ tags: tags
+ }
+ }
+ ```
+
+    Run `azd up` to deploy.
+
+ -
+
+## Create an Apache Kafka service
+
+1. Create an Apache Kafka service
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ ENVIRONMENT_ID=$(az containerapp env show \
+ --name "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --output tsv \
+ --query id)
+
+ az rest \
+ --method PUT \
+ --url "/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_SVC?api-version=2023-04-01-preview" \
+ --body "{\"location\": \"$LOCATION\", \"properties\": {\"environmentId\": \"$ENVIRONMENT_ID\", \"configuration\": {\"service\": {\"type\": \"kafka\"}}}}"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ Add the following to `kafka-dev.bicep`
+
+ ```bicep
+ resource kafka 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaSvcName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ service: {
+ type: 'kafka'
+ }
+ }
+ }
+ }
+
+ output kafkaLogs string = 'az containerapp logs show -n ${kafka.name} -g ${resourceGroup().name} --follow --tail 30'
+ ```
+    To deploy the Bicep template, run:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file kafka-dev.bicep
+ ```
+
+ > [!TIP]
+    > The `kafkaLogs` output returns a CLI command to view the logs of the Kafka service after it's deployed. You can run the command to view the initialization logs of the new Kafka service.
+
+ # [azd](#tab/azd)
+
+ Create a `./infra/core/host/container-app-service.bicep` module file with the following content
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+ param environmentId string
+ param serviceType string
+
+
+ resource service 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ environmentId: environmentId
+ configuration: {
+ service: {
+ type: serviceType
+ }
+ }
+ }
+ }
+
+ output serviceId string = service.id
+ ```
+
+ Then update `./infra/main.bicep` to use the module with the following declaration:
+
+ ```bicep
+ module kafka './core/host/container-app-service.bicep' = {
+ name: 'kafka'
+ scope: rg
+ params: {
+ name: kafkaSvcName
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceType: 'kafka'
+ }
+ }
+ ```
+
+ Then deploy the template using `azd up`
+
+ -
+
+1. View log output from the Kafka instance.
+
+ # [Bash](#tab/bash)
+
+    Use the logs command to view the logs:
+
+ ```bash
+ az containerapp logs show \
+ --name $KAFKA_SVC \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ The previous bicep example includes an output for the command to view the logs. For example:
+
+ ```bash
+ [
+ "az containerapp logs show -n kafka01 -g kafka-dev --follow --tail 30"
+ ]
+ ```
+
+    If you don't have the command, you can use the service name to view the logs with the CLI:
+
+ ```bash
+ az containerapp logs show \
+ --name $KAFKA_SVC \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+ # [azd](#tab/azd)
+
+    Use the logs command to view the logs:
+
+ ```bash
+ az containerapp logs show \
+ --name kafka01 \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+ -
+
+ :::image type="content" source="media/tutorial-dev-services-kafka/azure-container-apps-kafka-service-logs.png" alt-text="Screenshot of container app kafka service logs.":::
+
+## Create a command line test app
+
+Start by creating an app that uses `kafka-topics.sh`, `kafka-console-producer.sh`, and `kafka-console-consumer.sh` to connect to the Kafka instance.
+
+1. Create a `kafka-cli-app` app that binds to the Kafka service.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp create \
+ --name "$KAFKA_CLI_APP" \
+ --image mcr.microsoft.com/k8se/services/kafka:3.4 \
+ --environment "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --min-replicas 1 \
+ --max-replicas 1 \
+ --command "/bin/sleep" "infinity"
+
+ az rest \
+ --method PATCH \
+ --headers "Content-Type=application/json" \
+ --url "/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_CLI_APP?api-version=2023-04-01-preview" \
+ --body "{\"properties\": {\"template\": {\"serviceBinds\": [{\"serviceId\": \"/subscriptions/$(az account show --output tsv --query id)/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.App/containerApps/$KAFKA_SVC\"}]}}}"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+    Add the following to `kafka-dev.bicep`:
+
+ ```bicep
+ resource kafkaCli 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaCliAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ template: {
+ serviceBinds: [
+ {
+ serviceId: kafka.id
+ }
+ ]
+ containers: [
+ {
+ name: 'kafka-cli'
+ image: 'mcr.microsoft.com/k8se/services/kafka:3.4'
+ command: [ '/bin/sleep', 'infinity' ]
+ }
+ ]
+ scale: {
+ minReplicas: 1
+ maxReplicas: 1
+ }
+ }
+ }
+ }
+
+ output kafkaCliExec string = 'az containerapp exec -n ${kafkaCli.name} -g ${resourceGroup().name} --command /bin/bash'
+ ```
+
+ > [!TIP]
+    > The `kafkaCliExec` output returns a CLI command you can use to exec into the test app after it's deployed.
+
+ # [azd](#tab/azd)
+
+    Create a module at `./infra/core/host/container-app.bicep` with the following content:
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+
+ param environmentId string
+ param serviceId string = ''
+ param containerName string
+ param containerImage string
+ param containerCommands array = []
+ param containerArgs array = []
+ param minReplicas int
+ param maxReplicas int
+ param targetPort int = 0
+ param externalIngress bool = false
+
+ resource app 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ environmentId: environmentId
+ configuration: {
+ ingress: targetPort > 0 ? {
+ targetPort: targetPort
+ external: externalIngress
+ } : null
+ }
+ template: {
+ serviceBinds: !empty(serviceId) ? [
+ {
+ serviceId: serviceId
+ }
+ ] : null
+ containers: [
+ {
+ name: containerName
+ image: containerImage
+ command: !empty(containerCommands) ? containerCommands : null
+ args: !empty(containerArgs) ? containerArgs : null
+ }
+ ]
+ scale: {
+ minReplicas: minReplicas
+ maxReplicas: maxReplicas
+ }
+ }
+ }
+ }
+ ```
+
+    Then use that module in `./infra/main.bicep`:
+
+ ```bicep
+ module kafkaCli './core/host/container-app.bicep' = {
+ name: 'kafkaCli'
+ scope: rg
+ params: {
+ name: kafkaCliAppName
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceId: kafka.outputs.serviceId
+ containerImage: 'mcr.microsoft.com/k8se/services/kafka:3.4'
+ containerName: 'kafka-cli'
+ maxReplicas: 1
+ minReplicas: 1
+ containerCommands: [ '/bin/sleep', 'infinity' ]
+ }
+ }
+ ```
+
+    Deploy the template with `azd up`.
+
+ -
+
+1. Run the CLI `exec` command to connect to the test app.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp exec \
+ --name $KAFKA_CLI_APP \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+ # [Bicep](#tab/bicep)
+
+    The previous Bicep example includes a second output for the command to exec into the app. For example:
+
+ ```bash
+ [
+ "az containerapp logs show -n kafka01 -g kafka-dev --follow --tail 30",
+ "az containerapp exec -n kafka-cli-app -g kafka-dev --command /bin/bash"
+ ]
+ ```
+
+    If you don't have the command, you can use the app name to exec into the app with the CLI:
+
+ ```bash
+ az containerapp exec \
+ --name $KAFKA_CLI_APP \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+ # [azd](#tab/azd)
+
+ ```bash
+ az containerapp exec \
+ --name kafka-cli-app \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+ -
+
+ Using `--bind` or `serviceBinds` on the test app injects all the connection information into the application environment. Once you connect to the test container, you can inspect the values using
+
+ ```bash
+ env | grep "^KAFKA_"
+
+ KAFKA_SECURITYPROTOCOL=SASL_PLAINTEXT
+ KAFKA_BOOTSTRAPSERVER=kafka01:9092
+ KAFKA_HOME=/opt/kafka
+ KAFKA_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka-user" password="7dw..." user_kafka-user="7dw..." ;
+ KAFKA_BOOTSTRAP_SERVERS=kafka01:9092
+ KAFKA_SASLUSERNAME=kafka-user
+ KAFKA_SASL_USER=kafka-user
+ KAFKA_VERSION=3.4.0
+ KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
+ KAFKA_SASL_PASSWORD=7dw...
+ KAFKA_SASLPASSWORD=7dw...
+ KAFKA_SASL_MECHANISM=PLAIN
+ KAFKA_SASLMECHANISM=PLAIN
+ ```
+
+1. Use `kafka-topics.sh` to create a topic.
+
+ First create a `kafka.props` file
+
+ ```bash
+ echo "security.protocol=$KAFKA_SECURITY_PROTOCOL" >> kafka.props && \
+ echo "sasl.mechanism=$KAFKA_SASL_MECHANISM" >> kafka.props && \
+ echo "sasl.jaas.config=$KAFKA_PROPERTIES_SASL_JAAS_CONFIG" >> kafka.props
+ ```
+
+ Create a `quickstart-events` topic
+
+ ```bash
+ /opt/kafka/bin/kafka-topics.sh \
+ --create --topic quickstart-events \
+ --bootstrap-server $KAFKA_BOOTSTRAP_SERVERS \
+ --command-config kafka.props
+ # Created topic quickstart-events.
+
+ /opt/kafka/bin/kafka-topics.sh \
+ --describe --topic quickstart-events \
+ --bootstrap-server $KAFKA_BOOTSTRAP_SERVERS \
+ --command-config kafka.props
+ # Topic: quickstart-events TopicId: lCkTKmvZSgSUCHozhhvz1Q PartitionCount: 1 ReplicationFactor: 1 Configs: segment.bytes=1073741824
+ # Topic: quickstart-events Partition: 0 Leader: 1 Replicas: 1 Isr: 1
+ ```
+
+1. Use `kafka-console-producer.sh` to write some events to the topic
+
+ ```bash
+ /opt/kafka/bin/kafka-console-producer.sh \
+ --topic quickstart-events \
+ --bootstrap-server $KAFKA_BOOTSTRAP_SERVERS \
+ --producer.config kafka.props
+
+ > this is my first event
+ > this is my second event
+ > this is my third event
+ > CTRL-C
+ ```
+
+ > [!NOTE]
+    > The `kafka-console-producer.sh` command prompts you to write events with `>`. Write some events as shown, then press `CTRL-C` at any time to finish.
+
+1. Use `kafka-console-consumer.sh` to read events from the topic
+
+ ```bash
+ /opt/kafka/bin/kafka-console-consumer.sh \
+ --topic quickstart-events \
+ --bootstrap-server $KAFKA_BOOTSTRAP_SERVERS \
+ --from-beginning \
+ --consumer.config kafka.props
+
+ # this is my first event
+ # this is my second event
+ # this is my third event
+ ```
++
+## Using a dev service with an existing app
+
+If you already have an app that uses Apache Kafka, you can update it to read its Kafka connection information from the following environment variables:
+
+```bash
+KAFKA_HOME=/opt/kafka
+KAFKA_PROPERTIES_SASL_JAAS_CONFIG=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka-user" password="7dw..." user_kafka-user="7dw..." ;
+KAFKA_BOOTSTRAP_SERVERS=kafka01:9092
+KAFKA_SASL_USER=kafka-user
+KAFKA_VERSION=3.4.0
+KAFKA_SECURITY_PROTOCOL=SASL_PLAINTEXT
+KAFKA_SASL_PASSWORD=7dw...
+KAFKA_SASL_MECHANISM=PLAIN
+```
+
+Then, using the CLI (or Bicep), update the app with `--bind $KAFKA_SVC` to use the dev service you created.
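For example, the app's startup logic can read the injected variables and fall back to local defaults when the binding isn't present. A small sketch (the `localhost` fallback values are assumptions for local development, not values the binding provides):

```shell
# Illustrative sketch: read the binding variables with local-development fallbacks.
kafka_connection_summary() {
  bootstrap="${KAFKA_BOOTSTRAP_SERVERS:-localhost:9092}"
  protocol="${KAFKA_SECURITY_PROTOCOL:-PLAINTEXT}"
  echo "bootstrap=$bootstrap protocol=$protocol"
}

kafka_connection_summary
```

When the app runs with the service binding in place, the injected values take precedence over the fallbacks.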
+
+## Deploying `kafka-ui` and binding it to the Kafka service
+
+For example, you can deploy [kafka-ui](https://github.com/provectus/kafka-ui) to view and manage the Kafka instance.
+
+# [Bash](#tab/bash)
+
+See the Bicep or `azd` example.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource kafkaUi 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaUiAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ ingress: {
+ external: true
+ targetPort: 8080
+ }
+ }
+ template: {
+ serviceBinds: [
+ {
+ serviceId: kafka.id
+ name: 'kafka'
+ }
+ ]
+ containers: [
+ {
+ name: 'kafka-ui'
+ image: 'docker.io/provectuslabs/kafka-ui:latest'
+ command: [
+ '/bin/sh'
+ ]
+ args: [
+ '-c'
+ '''export KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="$KAFKA_BOOTSTRAP_SERVERS" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG="$KAFKA_PROPERTIES_SASL_JAAS_CONFIG" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM="$KAFKA_SASL_MECHANISM" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL="$KAFKA_SECURITY_PROTOCOL" && \
+ java $JAVA_OPTS -jar kafka-ui-api.jar'''
+ ]
+ resources: {
+ cpu: json('1.0')
+ memory: '2.0Gi'
+ }
+ }
+ ]
+ }
+ }
+}
+
+output kafkaUiUrl string = 'https://${kafkaUi.properties.configuration.ingress.fqdn}'
+```
+
+Deploy the template, then visit the printed URL.
+
+# [azd](#tab/azd)
+
+Update `./infra/main.bicep` with the following
+
+```bicep
+module kafkaUi './core/host/container-app.bicep' = {
+ name: 'kafka-ui'
+ scope: rg
+ params: {
+ name: kafkaUiAppName
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceId: kafka.outputs.serviceId
+ containerImage: 'docker.io/provectuslabs/kafka-ui:latest'
+ containerName: 'kafka-ui'
+ maxReplicas: 1
+ minReplicas: 1
+ containerCommands: [ '/bin/sh' ]
+ containerArgs: [
+ '-c'
+ '''export KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="$KAFKA_BOOTSTRAP_SERVERS" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG="$KAFKA_PROPERTIES_SASL_JAAS_CONFIG" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM="$KAFKA_SASL_MECHANISM" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL="$KAFKA_SECURITY_PROTOCOL" && \
+ java $JAVA_OPTS -jar kafka-ui-api.jar'''
+ ]
+ targetPort: 8080
+ externalIngress: true
+ }
+}
+```
+
+Then deploy the template with `azd up`.
++++
+## Final Bicep template for deploying all resources
+
+The following Bicep template contains all the resources in this tutorial. You can create a `kafka-dev.bicep` file with this content:
+
+```bicep
+targetScope = 'resourceGroup'
+param location string = resourceGroup().location
+param appEnvironmentName string = 'aca-env'
+param kafkaSvcName string = 'kafka01'
+param kafkaCliAppName string = 'kafka-cli-app'
+param kafkaUiAppName string = 'kafka-ui'
+
+resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${appEnvironmentName}-log-analytics'
+ location: location
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+}
+
+resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+}
+
+resource kafka 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaSvcName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ service: {
+ type: 'kafka'
+ }
+ }
+ }
+}
+
+resource kafkaCli 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaCliAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ template: {
+ serviceBinds: [
+ {
+ serviceId: kafka.id
+ }
+ ]
+ containers: [
+ {
+ name: 'kafka-cli'
+ image: 'mcr.microsoft.com/k8se/services/kafka:3.4'
+ command: [ '/bin/sleep', 'infinity' ]
+ }
+ ]
+ scale: {
+ minReplicas: 1
+ maxReplicas: 1
+ }
+ }
+ }
+}
+
+resource kafkaUi 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: kafkaUiAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ ingress: {
+ external: true
+ targetPort: 8080
+ }
+ }
+ template: {
+ serviceBinds: [
+ {
+ serviceId: kafka.id
+ name: 'kafka'
+ }
+ ]
+ containers: [
+ {
+ name: 'kafka-ui'
+ image: 'docker.io/provectuslabs/kafka-ui:latest'
+ command: [
+ '/bin/sh'
+ ]
+ args: [
+ '-c'
+ '''export KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="$KAFKA_BOOTSTRAP_SERVERS" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_JAAS_CONFIG="$KAFKA_PROPERTIES_SASL_JAAS_CONFIG" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SASL_MECHANISM="$KAFKA_SASL_MECHANISM" && \
+ export KAFKA_CLUSTERS_0_PROPERTIES_SECURITY_PROTOCOL="$KAFKA_SECURITY_PROTOCOL" && \
+ java $JAVA_OPTS -jar kafka-ui-api.jar'''
+ ]
+ resources: {
+ cpu: json('1.0')
+ memory: '2.0Gi'
+ }
+ }
+ ]
+ }
+ }
+}
+
+output kafkaUiUrl string = 'https://${kafkaUi.properties.configuration.ingress.fqdn}'
+
+output kafkaCliExec string = 'az containerapp exec -n ${kafkaCli.name} -g ${resourceGroup().name} --command /bin/bash'
+
+output kafkaLogs string = 'az containerapp logs show -n ${kafka.name} -g ${resourceGroup().name} --follow --tail 30'
+```
+
+Then use the Azure CLI to deploy it
+
+```bash
+RESOURCE_GROUP="kafka-dev"
+LOCATION="northcentralus"
+
+az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+
+az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file kafka-dev.bicep
+```
+
+## Final `azd` template for all resource
+
+You can find the final template in the [aca-dev-service-kafka-azd](https://github.com/Azure-Samples/aca-dev-service-kafka-azd) repository. To deploy it, run the following commands:
+
+```bash
+git clone https://github.com/Azure-Samples/aca-dev-service-kafka-azd
+cd aca-dev-service-kafka-azd
+azd up
+```
+
+## Clean up resources
+
+Once you're done, run the following command to delete the resource group that contains your Container Apps resources.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+```azurecli
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
container-apps Tutorial Dev Services Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-dev-services-postgresql.md
+
+ Title: 'Tutorial: Create and use a PostgreSQL service for development'
+description: Create and use a PostgreSQL service for development
++++ Last updated : 06/06/2023+++
+# Tutorial: Create and use a PostgreSQL service for development
+
+The Azure Container Apps service enables you to provision services like PostgreSQL, Redis, and Apache Kafka in the same environment as your applications. These services are deployed as a special type of container app. You can connect other applications to them securely without exporting or sharing secrets anywhere. These services run in the same private network as your applications, so you don't have to set up or manage VNets for simple development workflows. Finally, like other container apps, these services scale to zero when not in use, which cuts development costs.
+
+In this tutorial, you learn how to create and use a development PostgreSQL service. The tutorial includes both step-by-step Azure CLI commands and Bicep template fragments for each step. For Bicep, you can add all the fragments to the same Bicep file and deploy the template all at once or after each incremental update.
+
+> [!div class="checklist"]
+> * Create a Container Apps environment to deploy your service and container apps
+> * Create a PostgreSQL service
+> * Create a command-line test app that uses the dev PostgreSQL service
+> * Create or update an app that uses the dev service
+> * Create a `pgweb` app
+> * Compile a final bicep template to deploy all resources using a consistent and predictable template deployment
+> * Use an `azd` template for a one command deployment of all resources
+
+## Prerequisites
+
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
+- Optional: Install the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) if you're following the `azd` instructions.
++
+> [!NOTE]
+> For a one command deployment, skip to the last `azd` [template step](#final-azd-template-for-all-resource).
+
+## Setup
+
+1. Define some variables to use later in commands and Bicep resources.
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ RESOURCE_GROUP="postgres-dev"
+ LOCATION="northcentralus"
+ ENVIRONMENT="aca-env"
+ PG_SVC="postgres01"
+ PSQL_CLI_APP="psql-cloud-cli-app"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ You still need to use the CLI to deploy the Bicep template into a resource group, so define the following variables for the CLI:
+
+ ```bash
+ RESOURCE_GROUP="postgres-dev"
+ LOCATION="northcentralus"
+ ```
+
+ For Bicep, start by creating a file called `postgres-dev.bicep`, and then add some parameters with default values to it:
+
+ ```bicep
+ targetScope = 'resourceGroup'
+ param location string = resourceGroup().location
+ param appEnvironmentName string = 'aca-env'
+ param pgSvcName string = 'postgres01'
+ param pgsqlCliAppName string = 'psql-cloud-cli-app'
+ ```
+
+ To deploy the Bicep template at any stage, use:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file postgres-dev.bicep
+ ```
+
+ # [azd](#tab/azd)
+
+ Define a couple of values to use for `azd`:
+
+ ```bash
+ AZURE_ENV_NAME="azd-postgres-dev"
+ LOCATION="northcentralus"
+ ```
+
+ Initialize a minimal `azd` template:
+
+ ```bash
+ azd init \
+ --environment "$AZURE_ENV_NAME" \
+ --location "$LOCATION" \
+ --no-prompt
+ ```
+
+ > [!NOTE]
+ > `AZURE_ENV_NAME` is different from the Container Apps environment name. In `azd`, `AZURE_ENV_NAME` applies to all resources in a template, including resources that aren't container apps. You'll create a different name for the Container Apps environment.
+
+ Then open `infra/main.bicep` and define a couple of parameters to use later in the template:
+
+ ```bicep
+ param appEnvironmentName string = 'aca-env'
+ param pgSvcName string = 'postgres01'
+ param pgsqlCliAppName string = 'psql-cloud-cli-app'
+ ```
+
+
+
+2. Sign in to Azure, upgrade the Azure CLI and Bicep, add the Container Apps extension, and register the resource providers that your Azure subscription needs:
+
+ ```bash
+ az login
+ az upgrade
+ az bicep upgrade
+ az extension add --name containerapp --upgrade
+ az provider register --namespace Microsoft.App
+ az provider register --namespace Microsoft.OperationalInsights
+ ```
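
Provider registration can take a few minutes. As an optional check (not part of the original steps), you can query the registration state until it completes:

```shell
# Check the registration state of the Microsoft.App resource provider.
# Prints "Registered" once registration completes.
az provider show --namespace Microsoft.App --query registrationState -o tsv
```

The same check works for `Microsoft.OperationalInsights` by swapping the `--namespace` value.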
+
+## Create a Container Apps environment
+
+1. Create a resource group
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ ```bash
+ az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+ To deploy the Bicep template at any stage, use:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file postgres-dev.bicep
+ ```
+
+ # [azd](#tab/azd)
+
+ No special setup is needed for managing resource groups in `azd`. `azd` derives the resource group name from the `AZURE_ENV_NAME`/`--environment` value.
+ You can, however, test the minimal template with:
+
+ ```bash
+ azd up
+ ```
+
+ That command should create an empty resource group.
+
+
+
+1. Create a Container Apps environment
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp env create \
+ --name "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ Add the following to your `postgres-dev.bicep` file:
+
+ ```bicep
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'azure-monitor'
+ }
+ }
+ }
+ ```
+
+ > [!TIP]
+ > The Azure CLI automatically creates a Log Analytics workspace for each environment. To achieve the same result with a Bicep template, you must explicitly declare the workspace and link it. Doing so makes your deployment and resources clearer and more predictable, at the cost of some verbosity. To do that in Bicep, update the environment resource to:
+
+ ```bicep
+ resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${appEnvironmentName}-log-analytics'
+ location: location
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+ }
+
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+ }
+ ```
+
+ # [azd](#tab/azd)
+
+ `azd` templates must use [Bicep modules](/azure/azure-resource-manager/bicep/modules). First, create a `./infra/core/host` folder. Then create a `./infra/core/host/container-apps-environment.bicep` module with the following content:
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+
+ resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${name}-log-analytics'
+ location: location
+ tags: tags
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+ }
+
+ resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+ }
+
+ output appEnvironmentId string = appEnvironment.id
+ ```
+
+ Then, in `./infra/main.bicep`, load the module:
+
+ ```bicep
+ module appEnvironment './core/host/container-apps-environment.bicep' = {
+ name: 'appEnvironment'
+ scope: rg
+ params: {
+ name: appEnvironmentName
+ location: location
+ tags: tags
+ }
+ }
+ ```
+
+ Run `azd up` to deploy.
+
+
+
+## Create a PostgreSQL service
+
+1. Create a PostgreSQL service
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp service postgres create \
+ --name "$PG_SVC" \
+ --resource-group "$RESOURCE_GROUP" \
+ --environment "$ENVIRONMENT"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ Add the following to `postgres-dev.bicep`
+
+ ```bicep
+ resource postgres 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: pgSvcName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ service: {
+ type: 'postgres'
+ }
+ }
+ }
+ }
+
+ output postgresLogs string = 'az containerapp logs show -n ${postgres.name} -g ${resourceGroup().name} --follow --tail 30'
+ ```
+ To deploy the Bicep template, run:
+
+ ```bash
+ az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file postgres-dev.bicep
+ ```
+
+ > [!TIP]
+ > The `postgresLogs` output contains a CLI command to view the logs of the PostgreSQL service after it's deployed. You can run the command to view the initialization logs of the new service.
+
+ # [azd](#tab/azd)
+
+ Create a `./infra/core/host/container-app-service.bicep` module file with the following content:
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+ param environmentId string
+ param serviceType string
+
+
+ resource service 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ environmentId: environmentId
+ configuration: {
+ service: {
+ type: serviceType
+ }
+ }
+ }
+ }
+
+ output serviceId string = service.id
+ ```
+
+ Then update `./infra/main.bicep` to use the module with the following declaration:
+
+ ```bicep
+ module postgres './core/host/container-app-service.bicep' = {
+ name: 'postgres'
+ scope: rg
+ params: {
+ name: pgSvcName
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceType: 'postgres'
+ }
+ }
+ ```
+
+ Then deploy the template by using `azd up`.
+
+
+
+1. View the log output from the PostgreSQL instance
+
+ # [Bash](#tab/bash)
+
+ Use the `az containerapp logs show` command to view the logs:
+
+ ```bash
+ az containerapp logs show \
+ --name $PG_SVC \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ The previous Bicep example includes an output for the command to view the logs. For example:
+
+ ```bash
+ [
+ "az containerapp logs show -n postgres01 -g postgres-dev --follow --tail 30"
+ ]
+ ```
+
+ If you don't have the command, you can use the service name to get the logs by using the CLI:
+
+ ```bash
+ az containerapp logs show \
+ --name $PG_SVC \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+ # [azd](#tab/azd)
+
+ Use the `az containerapp logs show` command to view the logs:
+
+ ```bash
+ az containerapp logs show \
+ --name postgres01 \
+ --resource-group $RESOURCE_GROUP \
+ --follow --tail 30
+ ```
+
+
+
+ :::image type="content" source="media/tutorial-dev-services-postgresql/azure-container-apps-postgresql-service-logs.png" alt-text="Screenshot of container app PostgreSQL service logs.":::
+
+## Create a command-line test app to view and connect to the service
+
+Start by creating a debug app that uses the `psql` CLI to connect to the PostgreSQL instance.
+
+1. Create a `psql` app that binds to the PostgreSQL service
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp create \
+ --name "$PSQL_CLI_APP" \
+ --image mcr.microsoft.com/k8se/services/postgres:14 \
+ --bind "$PG_SVC" \
+ --environment "$ENVIRONMENT" \
+ --resource-group "$RESOURCE_GROUP" \
+ --min-replicas 1 \
+ --max-replicas 1 \
+ --command "/bin/sleep" "infinity"
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ Add the following to `postgres-dev.bicep`
+
+ ```bicep
+ resource pgsqlCli 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: pgsqlCliAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ template: {
+ serviceBinds: [
+ {
+ serviceId: postgres.id
+ }
+ ]
+ containers: [
+ {
+ name: 'psql'
+ image: 'mcr.microsoft.com/k8se/services/postgres:14'
+ command: [ '/bin/sleep', 'infinity' ]
+ }
+ ]
+ scale: {
+ minReplicas: 1
+ maxReplicas: 1
+ }
+ }
+ }
+ }
+
+ output pgsqlCliExec string = 'az containerapp exec -n ${pgsqlCli.name} -g ${resourceGroup().name} --revision ${pgsqlCli.properties.latestRevisionName} --command /bin/bash'
+ ```
+
+ > [!TIP]
+ > The `pgsqlCliExec` output contains a CLI command to open a shell in the test app after it's deployed.
+
+ # [azd](#tab/azd)
+
+ Create a module in `./infra/core/host/container-app.bicep` and add the following content:
+
+ ```bicep
+ param name string
+ param location string = resourceGroup().location
+ param tags object = {}
+
+ param environmentId string
+ param serviceId string = ''
+ param containerName string
+ param containerImage string
+ param containerCommands array = []
+ param containerArgs array = []
+ param minReplicas int
+ param maxReplicas int
+ param targetPort int = 0
+ param externalIngress bool = false
+
+ resource app 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: name
+ location: location
+ tags: tags
+ properties: {
+ environmentId: environmentId
+ configuration: {
+ ingress: targetPort > 0 ? {
+ targetPort: targetPort
+ external: externalIngress
+ } : null
+ }
+ template: {
+ serviceBinds: !empty(serviceId) ? [
+ {
+ serviceId: serviceId
+ }
+ ] : null
+ containers: [
+ {
+ name: containerName
+ image: containerImage
+ command: !empty(containerCommands) ? containerCommands : null
+ args: !empty(containerArgs) ? containerArgs : null
+ }
+ ]
+ scale: {
+ minReplicas: minReplicas
+ maxReplicas: maxReplicas
+ }
+ }
+ }
+ }
+ ```
+
+ Then use that module in `./infra/main.bicep`:
+
+ ```bicep
+ module psqlCli './core/host/container-app.bicep' = {
+ name: 'psqlCli'
+ scope: rg
+ params: {
+ name: pgsqlCliAppName
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceId: postgres.outputs.serviceId
+ containerImage: 'mcr.microsoft.com/k8se/services/postgres:14'
+ containerName: 'psql'
+ maxReplicas: 1
+ minReplicas: 1
+ containerCommands: [ '/bin/sleep', 'infinity' ]
+ }
+ }
+ ```
+
+ Deploy the template with `azd up`.
+
+
+
+1. Run the CLI `exec` command to connect to the test app
+
+ # [Bash](#tab/bash)
+
+ ```bash
+ az containerapp exec \
+ --name $PSQL_CLI_APP \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+ # [Bicep](#tab/bicep)
+
+ The previous Bicep example includes a second output for the command to exec into the app. For example:
+
+ ```bash
+ [
+ "az containerapp logs show -n postgres01 -g postgres-dev --follow --tail 30",
+ "az containerapp exec -n psql-cloud-cli-app -g postgres-dev --command /bin/bash"
+ ]
+ ```
+
+ If you don't have the command, you can use the app name to exec into the app by using the CLI:
+
+ ```bash
+ az containerapp exec \
+ --name $PSQL_CLI_APP \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+ # [azd](#tab/azd)
+
+ ```bash
+ az containerapp exec \
+ --name psql-cloud-cli-app \
+ --resource-group $RESOURCE_GROUP \
+ --command /bin/bash
+ ```
+
+
+
+ Using `--bind` or `serviceBinds` on the test app injects all the connection information into the application environment. After you connect to the test container, you can inspect the values by using:
+
+ ```bash
+ env | grep "^POSTGRES_"
+
+ POSTGRES_HOST=postgres01
+ POSTGRES_PASSWORD=AiSf...
+ POSTGRES_SSL=disable
+ POSTGRES_URL=postgres://postgres:AiSf...@postgres01:5432/postgres?sslmode=disable
+ POSTGRES_DATABASE=postgres
+ POSTGRES_PORT=5432
+ POSTGRES_USERNAME=postgres
+ POSTGRES_CONNECTION_STRING=host=postgres01 database=postgres user=postgres password=AiSf...
+ ```
+
+1. Use `psql` to connect to the service
+
+ ```bash
+ psql $POSTGRES_URL
+ ```
+
+ :::image type="content" source="media/tutorial-dev-services-postgresql/azure-container-apps-postgresql-psql.png" alt-text="Screenshot of container app using pgsql to connect to a PostgreSQL service.":::
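
Because `psql` is built on libpq, it also accepts key/value connection strings, so the injected `POSTGRES_CONNECTION_STRING` should work as an alternative to the URL (a sketch, assuming the standard libpq behavior):

```shell
# Alternative: connect by using the injected key/value connection string.
# Quoting keeps the space-separated "host=... user=..." pairs as one argument.
psql "$POSTGRES_CONNECTION_STRING"
```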
+
+1. Create a table named `accounts` and insert some data
+
+ ```sql
+ postgres=# CREATE TABLE accounts (
+ user_id serial PRIMARY KEY,
+ username VARCHAR ( 50 ) UNIQUE NOT NULL,
+ email VARCHAR ( 255 ) UNIQUE NOT NULL,
+ created_on TIMESTAMP NOT NULL,
+ last_login TIMESTAMP
+ );
+
+ postgres=# INSERT INTO accounts (username, email, created_on)
+ VALUES
+ ('user1', 'user1@example.com', current_timestamp),
+ ('user2', 'user2@example.com', current_timestamp),
+ ('user3', 'user3@example.com', current_timestamp);
+
+ postgres=# SELECT * FROM accounts;
+ ```
+
+ :::image type="content" source="media/tutorial-dev-services-postgresql/azure-container-apps-postgresql-psql-data.png" alt-text="Screenshot of container app using pgsql connect to PostgreSQL and create a table and seed some data.":::
+
+## Using a dev service with an existing app
+
+If you already have an app that uses PostgreSQL, you can change where the app reads its connection information so that it uses the following environment variables:
+
+```bash
+POSTGRES_HOST=postgres01
+POSTGRES_PASSWORD=AiSf...
+POSTGRES_SSL=disable
+POSTGRES_URL=postgres://postgres:AiSf...@postgres01:5432/postgres?sslmode=disable
+POSTGRES_DATABASE=postgres
+POSTGRES_PORT=5432
+POSTGRES_USERNAME=postgres
+POSTGRES_CONNECTION_STRING=host=postgres01 database=postgres user=postgres password=AiSf...
+```
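
The discrete variables and the composite `POSTGRES_URL` carry the same information. As a sketch (using the placeholder values from the example output above), an app's startup script could rebuild the URL from the discrete parts:

```shell
# Placeholder values matching the tutorial's example output; in a bound app,
# these variables are injected automatically.
POSTGRES_HOST=postgres01
POSTGRES_PORT=5432
POSTGRES_USERNAME=postgres
POSTGRES_PASSWORD='AiSf...'
POSTGRES_DATABASE=postgres

# Rebuild the libpq-style URL from the discrete parts.
PG_URL="postgres://${POSTGRES_USERNAME}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DATABASE}?sslmode=disable"
echo "$PG_URL"
```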
+
+Then, by using the CLI (or Bicep), you can update the app to add `--bind $PG_SVC` so it uses the dev service that you created.
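
For example, binding a hypothetical existing app named `my-api` (the app name is an assumption for illustration) might look like the following, assuming your Container Apps CLI extension supports `--bind` on the update command:

```shell
# Hypothetical: bind an existing container app to the dev PostgreSQL service.
# $RESOURCE_GROUP and $PG_SVC are the variables defined in the Setup section.
az containerapp update \
  --name my-api \
  --resource-group "$RESOURCE_GROUP" \
  --bind "$PG_SVC"
```

Check `az containerapp update --help` to confirm the flag is available in your extension version.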
+
+## Deploying `pgweb` and binding it to the PostgreSQL service
+
+As an example, you can deploy [pgweb](https://github.com/sosedoff/pgweb) to view and manage the PostgreSQL instance.
+
+# [Bash](#tab/bash)
+
+See the Bicep or `azd` example.
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource pgweb 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: 'pgweb'
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ ingress: {
+ external: true
+ targetPort: 8081
+ }
+ }
+ template: {
+ serviceBinds: [
+ {
+ serviceId: postgres.id
+ name: 'postgres'
+ }
+ ]
+ containers: [
+ {
+ name: 'pgweb'
+ image: 'docker.io/sosedoff/pgweb:latest'
+ command: [
+ '/bin/sh'
+ ]
+ args: [
+ '-c'
+ 'PGWEB_DATABASE_URL=$POSTGRES_URL /usr/bin/pgweb --bind=0.0.0.0 --listen=8081'
+ ]
+ }
+ ]
+ }
+ }
+}
+
+output pgwebUrl string = 'https://${pgweb.properties.configuration.ingress.fqdn}'
+```
+
+Deploy the Bicep template with the same command:
+
+```bash
+az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file postgres-dev.bicep
+```
+
+Then visit the printed URL.
+
+# [azd](#tab/azd)
+
+Update `./infra/main.bicep` with the following
+
+```bicep
+module pgweb './core/host/container-app.bicep' = {
+ name: 'pgweb'
+ scope: rg
+ params: {
+ name: 'pgweb'
+ location: location
+ tags: tags
+ environmentId: appEnvironment.outputs.appEnvironmentId
+ serviceId: postgres.outputs.serviceId
+ containerImage: 'docker.io/sosedoff/pgweb:latest'
+ containerName: 'pgweb'
+ maxReplicas: 1
+ minReplicas: 1
+ containerCommands: [ '/bin/sh' ]
+ containerArgs: [
+ '-c'
+ 'PGWEB_DATABASE_URL=$POSTGRES_URL /usr/bin/pgweb --bind=0.0.0.0 --listen=8081'
+ ]
+ targetPort: 8081
+ externalIngress: true
+ }
+}
+```
+
+Then deploy the template with `azd up`.
++++++
+## Final Bicep template for deploying all resources
+
+The following Bicep template contains all the resources in this tutorial. Create a `postgres-dev.bicep` file with this content:
+
+```bicep
+targetScope = 'resourceGroup'
+param location string = resourceGroup().location
+param appEnvironmentName string = 'aca-env'
+param pgSvcName string = 'postgres01'
+param pgsqlCliAppName string = 'psql-cloud-cli-app'
+
+resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2022-10-01' = {
+ name: '${appEnvironmentName}-log-analytics'
+ location: location
+ properties: {
+ sku: {
+ name: 'PerGB2018'
+ }
+ }
+}
+
+resource appEnvironment 'Microsoft.App/managedEnvironments@2023-04-01-preview' = {
+ name: appEnvironmentName
+ location: location
+ properties: {
+ appLogsConfiguration: {
+ destination: 'log-analytics'
+ logAnalyticsConfiguration: {
+ customerId: logAnalytics.properties.customerId
+ sharedKey: logAnalytics.listKeys().primarySharedKey
+ }
+ }
+ }
+}
+
+resource postgres 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: pgSvcName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ service: {
+ type: 'postgres'
+ }
+ }
+ }
+}
+
+resource pgsqlCli 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: pgsqlCliAppName
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ template: {
+ serviceBinds: [
+ {
+ serviceId: postgres.id
+ }
+ ]
+ containers: [
+ {
+ name: 'psql'
+ image: 'mcr.microsoft.com/k8se/services/postgres:14'
+ command: [ '/bin/sleep', 'infinity' ]
+ }
+ ]
+ scale: {
+ minReplicas: 1
+ maxReplicas: 1
+ }
+ }
+ }
+}
+
+resource pgweb 'Microsoft.App/containerApps@2023-04-01-preview' = {
+ name: 'pgweb'
+ location: location
+ properties: {
+ environmentId: appEnvironment.id
+ configuration: {
+ ingress: {
+ external: true
+ targetPort: 8081
+ }
+ }
+ template: {
+ serviceBinds: [
+ {
+ serviceId: postgres.id
+ name: 'postgres'
+ }
+ ]
+ containers: [
+ {
+ name: 'pgweb'
+ image: 'docker.io/sosedoff/pgweb:latest'
+ command: [
+ '/bin/sh'
+ ]
+ args: [
+ '-c'
+ 'PGWEB_DATABASE_URL=$POSTGRES_URL /usr/bin/pgweb --bind=0.0.0.0 --listen=8081'
+ ]
+ }
+ ]
+ }
+ }
+}
+
+output pgsqlCliExec string = 'az containerapp exec -n ${pgsqlCli.name} -g ${resourceGroup().name} --revision ${pgsqlCli.properties.latestRevisionName} --command /bin/bash'
+
+output postgresLogs string = 'az containerapp logs show -n ${postgres.name} -g ${resourceGroup().name} --follow --tail 30'
+
+output pgwebUrl string = 'https://${pgweb.properties.configuration.ingress.fqdn}'
+```
+
+Then use the Azure CLI to deploy it:
+
+```bash
+RESOURCE_GROUP="postgres-dev"
+LOCATION="northcentralus"
+
+az group create \
+ --name "$RESOURCE_GROUP" \
+ --location "$LOCATION"
+
+az deployment group create -g $RESOURCE_GROUP \
+ --query 'properties.outputs.*.value' \
+ --template-file postgres-dev.bicep
+```
+
+## Final `azd` template for all resource
+
+A final template is available in the [aca-dev-service-postgres-azd](https://github.com/ahmelsayed/aca-dev-service-postgres-azd) repository. To deploy it, run:
+
+```bash
+git clone https://github.com/Azure-Samples/aca-dev-service-postgres-azd
+cd aca-dev-service-postgres-azd
+azd up
+```
+
+## Clean up resources
+
+Once you're done, run the following command to delete the resource group that contains your Container Apps resources.
+
+>[!CAUTION]
+> The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this tutorial exist in the specified resource group, they will also be deleted.
+
+```azurecli
+az group delete \
+ --resource-group $RESOURCE_GROUP
+```
container-registry Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md
description: Lists Azure Policy Regulatory Compliance controls available for Azu
Previously updated : 02/14/2023 Last updated : 06/12/2023
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
The constraints on computed property names are:
### Query constraints
-Queries in the computed property definition must be valid syntactically and semantically, otherwise the create or update operation will fail. Queries should evaluate to a deterministic value for all items in a container.
+Queries in the computed property definition must be valid syntactically and semantically, otherwise the create or update operation will fail. Queries should evaluate to a deterministic value for all items in a container. Queries may evaluate to undefined or null for some items, and computed properties with undefined or null values behave the same as persisted properties with undefined or null values when used in queries.
The constraints on computed property query definitions are:
SELECT c.cp_lowerName FROM c
### WHERE clause
-Computed properties can be referenced in filter predicates like any persisted properties.
+Computed properties can be referenced in filter predicates like any persisted properties. It's recommended to add any relevant single or composite indexes when using computed properties in filters.
Let's take an example computed property definition to calculate a 20 percent price discount.
SELECT c.price - c.cp_20PercentDiscount as discountedPrice, c.name FROM c WHERE
### GROUP BY clause
-As with persisted properties, computed properties can be referenced in the GROUP BY clause and use the index whenever possible.
+As with persisted properties, computed properties can be referenced in the GROUP BY clause and use the index whenever possible. For the best performance, add any relevant single or composite indexes.
Let's take an example computed property definition that finds the primary category for each item from the `categoryName` property.
Add a composite index on two properties where one is computed, `cp_myComputedPro
} ```
+## RU consumption
+
+Adding computed properties to a container doesn't consume RUs. Write operations on containers that have computed properties defined might see a slight RU increase. If a computed property is indexed, RUs on write operations will increase to reflect the costs of indexing and evaluating the computed property. While the feature is in preview, RU charges related to computed properties are subject to change.
+ ## Next steps - [Getting started with queries](./getting-started.md)
cosmos-db Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
Here's an explanation of how the preceding template is constructed, broken down
### Triggers
-* Under `typeProperties`, two properties are parameterized. The first one is `maxConcurrency`, which is specified to have a default value and is of type`string`. It has the default parameter name `<entityName>_properties_typeProperties_maxConcurrency`.
+* Under `typeProperties`, two properties are parameterized. The first one is `maxConcurrency`, which is specified to have a default value and is of type `string`. It has the default parameter name `<entityName>_properties_typeProperties_maxConcurrency`.
* The `recurrence` property also is parameterized. Under it, all properties at that level are specified to be parameterized as strings, with default values and parameter names. An exception is the `interval` property, which is parameterized as type `int`. The parameter name is suffixed with `<entityName>_properties_typeProperties_recurrence_triggerSuffix`. Similarly, the `freq` property is a string and is parameterized as a string. However, the `freq` property is parameterized without a default value. The name is shortened and suffixed. For example, `<entityName>_freq`. ### LinkedServices
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
Title: How to schedule Azure-SSIS Integration Runtime
-description: This article describes how to schedule the starting and stopping of Azure-SSIS Integration Runtime by using Azure Data Factory.
+ Title: Schedule an Azure-SSIS integration runtime
+description: This article describes how to schedule starting and stopping an Azure-SSIS integration runtime by using Azure Data Factory.
ms.devlang: powershell
-# How to start and stop Azure-SSIS Integration Runtime on a schedule
+# Start and stop an Azure-SSIS integration runtime on a schedule
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article describes how to schedule the starting and stopping of Azure-SSIS Integration Runtime (IR) by using Azure Data Factory (ADF) and Azure Synapse. Azure-SSIS IR is compute resource dedicated for executing SQL Server Integration Services (SSIS) packages. Running Azure-SSIS IR has a cost associated with it. Therefore, you typically want to run your IR only when you need to execute SSIS packages in Azure and stop your IR when you do not need it anymore. You can use ADF or Synapse Pipelines portal or Azure PowerShell to [manually start or stop your IR](manage-azure-ssis-integration-runtime.md)).
+This article describes how to schedule the starting and stopping of an Azure-SQL Server Integration Services (SSIS) integration runtime (IR) by using Azure Data Factory and Azure Synapse Analytics. An Azure-SSIS IR is a compute resource that's dedicated for running SSIS packages.
-Alternatively, you can create Web activities in ADF or Synapse pipelines to start/stop your IR on schedule, e.g. starting it in the morning before executing your daily ETL workloads and stopping it in the afternoon after they are done. You can also chain an Execute SSIS Package activity between two Web activities that start and stop your IR, so your IR will start/stop on demand, just in time before/after your package execution. For more info about Execute SSIS Package activity, see [Run an SSIS package using Execute SSIS Package activity in ADF pipeline](how-to-invoke-ssis-package-ssis-activity.md) article.
+A cost is associated with running an Azure-SSIS IR. You typically want to run your IR only when you need to run SSIS packages in Azure and stop your IR when you don't need it anymore. You can use Data Factory, the Azure portal page for Azure Synapse Analytics pipelines, or Azure PowerShell to [manually start or stop your IR](manage-azure-ssis-integration-runtime.md).
+
+Alternatively, you can create web activities in Data Factory or Azure Synapse Analytics pipelines to start and stop your IR on a schedule. For example, you can start it in the morning before running your daily ETL workloads and stop it in the afternoon after the workloads are done.
+
+You can also chain an Execute SSIS Package activity between two web activities that start and stop your IR. Your IR will then start and stop on demand, before or after your package execution. For more information about the Execute SSIS Package activity, see [Run an SSIS package with the Execute SSIS Package activity in the Azure portal](how-to-invoke-ssis-package-ssis-activity.md).
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Prerequisites
-### Data Factory
-You will need an instance of Azure Data Factory to implement this walk through. If you do not have one already provisioned, you can follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
+To implement this walkthrough, you need:
-### Azure-SSIS Integration Runtime (IR)
-If you have not provisioned your Azure-SSIS IR already, provision it by following instructions in the [tutorial](./tutorial-deploy-ssis-packages-azure.md).
+- An instance of Azure Data Factory. If you don't have one provisioned, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md).
-## Create and schedule ADF pipelines that start and or stop Azure-SSIS IR
-> [!NOTE]
-> This section is not supported for Azure-SSIS in **Azure Synapse** with [data exfiltration protection](/azure/synapse-analytics/security/workspace-data-exfiltration-protection) enabled.
+- An Azure-SSIS IR. If you don't have one provisioned, follow the instructions in [Provision the Azure-SSIS integration runtime in Azure Data Factory](./tutorial-deploy-ssis-packages-azure.md).
+
+## Create and schedule Data Factory pipelines that start and stop an Azure-SSIS IR
-This section shows you how to use Web activities in ADF pipelines to start/stop your Azure-SSIS IR on schedule or start & stop it on demand. We will guide you to create three pipelines:
+> [!NOTE]
+> This section is not supported for Azure-SSIS in Azure Synapse Analytics with [data exfiltration protection](/azure/synapse-analytics/security/workspace-data-exfiltration-protection) enabled.
-1. The first pipeline contains a Web activity that starts your Azure-SSIS IR.
-2. The second pipeline contains a Web activity that stops your Azure-SSIS IR.
-3. The third pipeline contains an Execute SSIS Package activity chained between two Web activities that start/stop your Azure-SSIS IR.
+This section shows you how to use web activities in Data Factory pipelines to start and stop your Azure-SSIS IR on a schedule, or to start and stop it on demand. You'll create three pipelines:
-After you create and test those pipelines, you can create a schedule trigger and associate it with any pipeline. The schedule trigger defines a schedule for running the associated pipeline.
+- The first pipeline contains a web activity that starts your Azure-SSIS IR.
+- The second pipeline contains a web activity that stops your Azure-SSIS IR.
+- The third pipeline contains an Execute SSIS Package activity chained between two web activities that start and stop your Azure-SSIS IR.
-For example, you can create two triggers, the first one is scheduled to run daily at 6 AM and associated with the first pipeline, while the second one is scheduled to run daily at 6 PM and associated with the second pipeline. In this way, you have a period between 6 AM to 6 PM every day when your IR is running, ready to execute your daily ETL workloads.
+After you create and test those pipelines, you can create a trigger that defines a schedule for running a pipeline. For example, you can create two triggers. The first one is scheduled to run daily at 6 AM and is associated with the first pipeline. The second one is scheduled to run daily at 6 PM and is associated with the second pipeline. In this way, you have a period from 6 AM to 6 PM every day when your IR is running, ready to run your daily ETL workloads.
-If you create a third trigger that is scheduled to run daily at midnight and associated with the third pipeline, that pipeline will run at midnight every day, starting your IR just before package execution, subsequently executing your package, and immediately stopping your IR just after package execution, so your IR will not be running idly.
+If you create a third trigger that's scheduled to run daily at midnight and is associated with the third pipeline, that pipeline will run at midnight every day. It will start your IR just before package execution, and then run your package. It will immediately stop your IR just after package execution, so your IR won't run idly.
### Create your pipelines
-1. In the home page, select **Orchestrate**.
+1. On the Azure Data Factory home page, select **Orchestrate**.
:::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/orchestrate-button.png" alt-text="Screenshot that shows the Orchestrate button on the Azure Data Factory home page.":::
-
-2. In the **Activities** toolbox, expand **General** menu, and drag & drop a **Web** activity onto the pipeline designer surface. In **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to **Settings** tab, and do the following actions:
- > [!NOTE]
- > For Azure-SSIS in **Azure Synapse**, use corresponding Azure Synapse REST API to [Get Integration Runtime status](/rest/api/synapse/integration-runtimes/get), [Start Integration Runtime](/rest/api/synapse/integration-runtimes/start) and [Stop Integration Runtime](/rest/api/synapse/integration-runtimes/stop).
- 1. For **URL**, enter the following URL for REST API that starts Azure-SSIS IR, replacing `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/start?api-version=2018-06-01`. Alternatively, you can also copy & paste the resource ID of your IR from its monitoring page on ADF UI/app to replace the following part of the above URL: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}`
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-ssis-ir-resource-id.png" alt-text="ADF SSIS IR Resource ID":::
+2. In the **Activities** toolbox, expand the **General** menu and drag a web activity onto the pipeline designer surface. On the **General** tab of the activity properties window, change the activity name to **startMyIR**. Switch to the **Settings** tab, and then do the following actions.
+
+ > [!NOTE]
+ > For Azure-SSIS in Azure Synapse Analytics, use the corresponding Azure Synapse Analytics REST API to [get the integration runtime status](/rest/api/synapse/integration-runtimes/get), [start the integration runtime](/rest/api/synapse/integration-runtimes/start), and [stop the integration runtime](/rest/api/synapse/integration-runtimes/stop).
+
+ 1. For **URL**, enter the following URL for the REST API that starts the Azure-SSIS IR. Replace `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR.
+
+ `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/start?api-version=2018-06-01`
+
+ Alternatively, you can copy and paste the resource ID of your IR from its monitoring page on the Data Factory UI or app to replace the following part of the preceding URL: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}`.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-ssis-ir-resource-id.png" alt-text="Screenshot that shows selections for finding the Azure Data Factory SSIS IR resource ID.":::
- 2. For **Method**, select **POST**.
+ 2. For **Method**, select **POST**.
3. For **Body**, enter `{"message":"Start my IR"}`.
- 4. For **Authentication**, select **Managed Identity** to use the specified system managed identity for your ADF, see [Managed identity for Data Factory](./data-factory-service-identity.md) article for more info.
+ 4. For **Authentication**, select **Managed Identity** to use the specified system-managed identity for your data factory. For more information, see [Managed identity for Azure Data Factory](./data-factory-service-identity.md).
5. For **Resource**, enter `https://management.azure.com/`.
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-web-activity-schedule-ssis-ir.png" alt-text="ADF Web Activity Schedule SSIS IR":::
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-web-activity-schedule-ssis-ir.png" alt-text="Screenshot that shows settings for an Azure Data Factory SSIS IR web activity schedule.":::
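
   As a rough sketch, the settings above correspond to a web activity definition along these lines in the pipeline JSON. This isn't necessarily the exact JSON that Data Factory generates; the placeholder segments in the URL are the same ones you replace with your own values:

   ```json
   {
       "name": "startMyIR",
       "type": "WebActivity",
       "typeProperties": {
           "url": "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/start?api-version=2018-06-01",
           "method": "POST",
           "body": "{\"message\":\"Start my IR\"}",
           "authentication": {
               "type": "MSI",
               "resource": "https://management.azure.com/"
           }
       }
   }
   ```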
-3. Clone the first pipeline to create a second one, changing the activity name to **stopMyIR** and replacing the following properties.
+3. Clone the first pipeline to create a second one. Change the activity name to **stopMyIR**, and replace the following properties:
- 1. For **URL**, enter the following URL for REST API that stops Azure-SSIS IR, replacing `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/stop?api-version=2018-06-01`.
- 2. For **Body**, enter `{"message":"Stop my IR"}`.
+ 1. For **URL**, enter the following URL for the REST API that stops the Azure-SSIS IR. Replace `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR.
-4. Create a third pipeline, drag & drop an **Execute SSIS Package** activity from **Activities** toolbox onto the pipeline designer surface, and configure it following the instructions in [Invoke an SSIS package using Execute SSIS Package activity in ADF](how-to-invoke-ssis-package-ssis-activity.md) article. Next, chain the Execute SSIS Package activity between two Web activities that start/stop your IR, similar to those Web activities in the first/second pipelines.
+      `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}/stop?api-version=2018-06-01`
+ 1. For **Body**, enter `{"message":"Stop my IR"}`.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-web-activity-on-demand-ssis-ir.png" alt-text="ADF Web Activity On-Demand SSIS IR":::
+4. Create a third pipeline. Drag an **Execute SSIS Package** activity from the **Activities** toolbox onto the pipeline designer surface. Then, configure the activity by following the instructions in [Run an SSIS package with the Execute SSIS Package activity in the Azure portal](how-to-invoke-ssis-package-ssis-activity.md).
-5. Instead of manually creating the third pipeline, you can also automatically create it from a template. To do so, select the **...** symbol next to **Pipeline** to drop down a menu of pipeline actions, select the **Pipeline from template** action, select the **SSIS** check box under **Category**, select the **Schedule ADF pipeline to start and stop Azure-SSIS IR just in time before and after running SSIS package** template, select your IR in the **Azure-SSIS Integration Runtime** drop down menu, and finally select the **Use this template** button. Your pipeline will be automatically created with only SSIS package left for you to assign to the Execute SSIS Package activity.
+ Chain the Execute SSIS Package activity between two web activities that start and stop your IR, similar to those web activities in the first and second pipelines.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-on-demand-ssis-ir-template.png" alt-text="ADF On-Demand SSIS IR Template":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-web-activity-on-demand-ssis-ir.png" alt-text="Screenshot that shows chaining a package between web activities on a pipeline designer.":::
-6. To make the third pipeline more robust, you can ensure that the Web activities to start/stop your IR are retried if there are any transient errors due to network connectivity or other issues and only completed when your IR is actually started/stopped. To do so, you can replace each Web activity with an Until activity, which in turn contains two Web activities, one to start/stop your IR and another to check your IR status. Let's call the Until activities *Start SSIS IR* and *Stop SSIS IR*. The *Start SSIS IR* Until activity contains *Try Start SSIS IR* and *Get SSIS IR Status* Web activities. The *Stop SSIS IR* Until activity contains *Try Stop SSIS IR* and *Get SSIS IR Status* Web activities. On the **Settings** tab of *Start SSIS IR*/*Stop SSIS IR* Until activities, for **Expression**, enter `@equals('Started', activity('Get SSIS IR Status').output.properties.state)`/`@equals('Stopped', activity('Get SSIS IR Status').output.properties.state)`, respectively.
+ Instead of manually creating the third pipeline, you can also automatically create it from a template:
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-until-activity-on-demand-ssis-ir.png" alt-text="ADF Until Activity On-Demand SSIS IR":::
+ 1. Select the ellipsis (**...**) next to **Pipeline** to open a dropdown menu of pipeline actions. Then select the **Pipeline from template** action.
+ 1. Select the **SSIS** checkbox under **Category**.
+ 1. Select the **Schedule ADF pipeline to start and stop Azure-SSIS IR just in time before and after running SSIS package** template.
+ 1. On the **Azure-SSIS Integration Runtime** dropdown menu, select your IR.
+ 1. Select the **Use this template** button.
- Within both Until activities, the *Try Start SSIS IR*/*Try Stop SSIS IR* Web activities are similar to those Web activities in the first/second pipelines. On the **Settings** tab of *Get SSIS IR Status* Web activities, do the following actions:
+ After you create your pipeline automatically, only the SSIS package is left for you to assign to the Execute SSIS Package activity.
- 1. For **URL**, enter the following URL for REST API that gets Azure-SSIS IR status, replacing `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR: `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}?api-version=2018-06-01`.
- 2. For **Method**, select **GET**.
- 3. For **Authentication**, select **Managed Identity** to use the specified system managed identity for your ADF, see [Managed identity for Data Factory](./data-factory-service-identity.md) article for more info.
- 4. For **Resource**, enter `https://management.azure.com/`.
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-until-activity-on-demand-ssis-ir-open.png" alt-text="ADF Until Activity On-Demand SSIS IR Open":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-on-demand-ssis-ir-template.png" alt-text="Screenshot that shows selections for creating a pipeline from a template.":::
+
+5. To make the third pipeline more robust, you can ensure that the web activities to start and stop your IR are retried if there are any transient errors (for example, due to network connectivity). You can also ensure that those web activities are completed only when your IR is actually started or stopped.
-7. Assign the managed identity for your ADF a **Contributor** role to itself, so Web activities in its pipelines can call REST API to start/stop Azure-SSIS IRs provisioned in it:
+   To do so, you can replace each web activity with an Until activity. Each Until activity contains two web activities: one to start or stop your IR, and another to check your IR status. Let's call the Until activities *Start SSIS IR* and *Stop SSIS IR*. The *Start SSIS IR* Until activity contains *Try Start SSIS IR* and *Get SSIS IR Status* web activities. The *Stop SSIS IR* Until activity contains *Try Stop SSIS IR* and *Get SSIS IR Status* web activities.
- 1. On your ADF page in the Azure portal, select **Access control (IAM)**.
+ On the **Settings** tab of the *Start SSIS IR* Until activity, for **Expression**, enter `@equals('Started', activity('Get SSIS IR Status').output.properties.state)`. On the **Settings** tab of the *Stop SSIS IR* Until activity, for **Expression**, enter `@equals('Stopped', activity('Get SSIS IR Status').output.properties.state)`.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-until-activity-on-demand-ssis-ir.png" alt-text="Screenshot that shows web activities to start and stop an SSIS IR.":::
+
+ Within both Until activities, the *Try Start SSIS IR* and *Try Stop SSIS IR* web activities are similar to those web activities in the first and second pipelines. On the **Settings** tab for the *Get SSIS IR Status* web activities, do the following actions:
+
+ 1. For **URL**, enter the following URL for the REST API that gets the Azure-SSIS IR status. Replace `{subscriptionId}`, `{resourceGroupName}`, `{factoryName}`, and `{integrationRuntimeName}` with the actual values for your IR.
+
+ `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/integrationRuntimes/{integrationRuntimeName}?api-version=2018-06-01`
+ 1. For **Method**, select **GET**.
+ 1. For **Authentication**, select **Managed Identity** to use the specified system-managed identity for your data factory. For more information, see [Managed identity for Azure Data Factory](./data-factory-service-identity.md).
+ 1. For **Resource**, enter `https://management.azure.com/`.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/adf-until-activity-on-demand-ssis-ir-open.png" alt-text="Screenshot that shows settings for Get SSIS IR Status web activities.":::
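
   As an illustration of how these pieces fit together, the *Start SSIS IR* Until activity might look like the following in the pipeline JSON. The activity names and the inner web activities are the ones described above; the inner web activities' own settings are elided, and this is a sketch rather than the exact generated definition:

   ```json
   {
       "name": "Start SSIS IR",
       "type": "Until",
       "typeProperties": {
           "expression": {
               "value": "@equals('Started', activity('Get SSIS IR Status').output.properties.state)",
               "type": "Expression"
           },
           "activities": [
               { "name": "Try Start SSIS IR", "type": "WebActivity" },
               {
                   "name": "Get SSIS IR Status",
                   "type": "WebActivity",
                   "dependsOn": [
                       { "activity": "Try Start SSIS IR", "dependencyConditions": [ "Completed" ] }
                   ]
               }
           ]
       }
   }
   ```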
+
+6. Assign your data factory's managed identity the **Contributor** role on the data factory itself, so that web activities in its pipelines can call the REST API to start and stop Azure-SSIS IRs provisioned in it:
+
+ 1. On your Data Factory page in the Azure portal, select **Access control (IAM)**.
   1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
   1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
If you create a third trigger that is scheduled to run daily at midnight and ass
      | Setting | Value |
      | --- | --- |
      | Role | Contributor |
      | Assign access to | User, group, or service principal |
- | Members | Your ADF username |
+ | Members | Your Data Factory username |
- :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot that shows Add role assignment page in Azure portal.":::
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot that shows the page for adding a role assignment in the Azure portal.":::
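
   If you prefer scripting to the portal, the equivalent role assignment can be created with the Azure CLI. This is a sketch; `$principalId` (the object ID of your data factory's managed identity) and `$factoryId` (the resource ID of your data factory) are placeholders that you supply:

   ```azurecli
   az role assignment create --assignee "$principalId" --role "Contributor" --scope "$factoryId"
   ```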
-8. Validate your ADF and all pipeline settings by clicking **Validate all/Validate** on the factory/pipeline toolbar. Close **Factory/Pipeline Validation Output** by clicking **>>** button.
+7. Validate your data factory and all pipeline settings by selecting **Validate all** or **Validate** on the factory or pipeline toolbar. Close **Factory Validation Output** or **Pipeline Validation Output** by selecting the double arrow (**>>**) button.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/validate-pipeline.png" alt-text="Validate pipeline":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/validate-pipeline.png" alt-text="Screenshot that shows the button for validating a pipeline.":::
### Test run your pipelines
-1. Select **Test Run** on the toolbar for each pipeline and see **Output** window in the bottom pane.
+1. Select **Test Run** on the toolbar for each pipeline. On the bottom pane, the **Output** tab lists pipeline runs.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/test-run-output.png" alt-text="Test Run":::
-
-2. To test the third pipeline, if you store your SSIS package in SSIS catalog (SSISDB), you can launch SQL Server Management Studio (SSMS) to check its execution. In **Connect to Server** window, do the following actions.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/test-run-output.png" alt-text="Screenshot that shows the button for running a test and the list of pipeline runs.":::
+
+2. To test the third pipeline, you can use SQL Server Management Studio if you store your SSIS package in the SSIS catalog (SSISDB). In the **Connect to Server** window, do the following actions:
   1. For **Server name**, enter **&lt;your server name&gt;.database.windows.net**.
   2. Select **Options >>**.
   3. For **Connect to database**, select **SSISDB**.
- 4. Select **Connect**.
- 5. Expand **Integration Services Catalogs** -> **SSISDB** -> Your folder -> **Projects** -> Your SSIS project -> **Packages**.
- 6. Right-click the specified SSIS package to run and select **Reports** -> **Standard Reports** -> **All Executions**.
- 7. Verify that it ran.
+ 4. Select **Connect**.
+ 5. Expand **Integration Services Catalogs** > **SSISDB** > your folder > **Projects** > your SSIS project > **Packages**.
+ 6. Right-click the specified SSIS package to run, and then select **Reports** > **Standard Reports** > **All Executions**.
+ 7. Verify that the package ran.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/verify-ssis-package-run.png" alt-text="Verify SSIS package run":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/verify-ssis-package-run.png" alt-text="Screenshot that shows verification of an SSIS package run.":::
### Schedule your pipelines
-Now that your pipelines work as you expected, you can create triggers to run them at specified cadences. For details about associating triggers with pipelines, see [Trigger the pipeline on a schedule](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule) article.
+Now that your pipelines work as you expected, you can create triggers to run them at specified cadences. For details about associating triggers with pipelines, see [Trigger the pipeline on a schedule](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule).
+
+1. On the pipeline toolbar, select **Trigger**, and then select **New/Edit**.
-1. On the pipeline toolbar, select **Trigger** and select **New/Edit**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/trigger-new-menu.png" alt-text="Screenshot that shows the menu option for creating or editing a trigger.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/trigger-new-menu.png" alt-text="Screenshot that highlights the Trigger -> New/Edit menu option.":::
+2. On the **Add Triggers** pane, select **+ New**.
-2. In **Add Triggers** pane, select **+ New**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-triggers-new.png" alt-text="Screenshot that shows the pane for adding a trigger.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-triggers-new.png" alt-text="Add Triggers - New":::
+3. On the **New Trigger** pane, do the following actions:
-3. In **New Trigger** pane, do the following actions:
+ 1. For **Name**, enter a name for the trigger. In the following example, **trigger2** is the trigger name.
+ 2. For **Type**, select **Schedule**.
+ 3. For **Start date**, enter a start date and time in UTC.
+ 4. For **Recurrence**, enter a cadence for the trigger. In the following example, it's once every day.
+ 5. If you want the trigger to have an end date, select **Specify an end date**, and then select a date and time.
+ 6. Select **Start trigger on creation** to activate the trigger immediately after you publish all the Data Factory settings.
+ 7. Select **OK**.
- 1. For **Name**, enter a name for the trigger. In the following example, **Run daily** is the trigger name.
- 2. For **Type**, select **Schedule**.
- 3. For **Start Date (UTC)**, enter a start date and time in UTC.
- 4. For **Recurrence**, enter a cadence for the trigger. In the following example, it is **Daily** once.
- 5. For **End**, select **No End** or enter an end date and time after selecting **On Date**.
- 6. Select **Activated** to activate the trigger immediately after you publish the whole ADF settings.
- 7. Select **Next**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-trigger-window.png" alt-text="Screenshot that shows the pane for creating a new trigger.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-trigger-window.png" alt-text="Trigger -> New/Edit":::
-
-4. In **Trigger Run Parameters** page, review any warning, and select **Finish**.
-5. Publish the whole ADF settings by selecting **Publish All** in the factory toolbar.
+4. On the **Trigger Run Parameters** page, review any warnings, and then select **Finish**.
+5. Publish all the Data Factory settings by selecting **Publish all** on the factory toolbar.
- :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/publish-all-button.png" alt-text="Screenshot that shows the Publish All button.":::
+ :::image type="content" source="./media/how-to-invoke-ssis-package-stored-procedure-activity/publish-all-button.png" alt-text="Screenshot that shows the button for publishing all Data Factory settings.":::
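
   For reference, a daily schedule trigger like the one above corresponds to a JSON definition along these lines. The trigger name, pipeline name, and start time are placeholders, and the recurrence values mirror the choices made in the portal; treat this as a sketch rather than the exact generated definition:

   ```json
   {
       "name": "trigger2",
       "properties": {
           "type": "ScheduleTrigger",
           "typeProperties": {
               "recurrence": {
                   "frequency": "Day",
                   "interval": 1,
                   "startTime": "<start date and time in UTC>",
                   "timeZone": "UTC"
               }
           },
           "pipelines": [
               {
                   "pipelineReference": {
                       "referenceName": "<your pipeline name>",
                       "type": "PipelineReference"
                   }
               }
           ]
       }
   }
   ```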
-### Monitor your pipelines and triggers in Azure portal
+### Monitor your pipelines and triggers in the Azure portal
-1. To monitor trigger runs and pipeline runs, use **Monitor** tab on the left of ADF UI/app. For detailed steps, see [Monitor the pipeline](quickstart-create-data-factory-portal.md#monitor-the-pipeline) article.
+- To monitor trigger runs and pipeline runs, use the **Monitor** tab on the left side of the Data Factory UI or app. For detailed steps, see [Visually monitor Azure Data Factory](monitor-visually.md).
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/pipeline-runs.png" alt-text="Pipeline runs":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/pipeline-runs.png" alt-text="Screenshot that shows the pane for monitoring pipeline runs.":::
-2. To view the activity runs associated with a pipeline run, select the first link (**View Activity Runs**) in **Actions** column. For the third pipeline, you will see three activity runs, one for each chained activity in the pipeline (Web activity to start your IR, Execute SSIS Package activity to execute your package, and Web activity to stop your IR). To view the pipeline runs again, select **Pipelines** link at the top.
+- To view the activity runs associated with a pipeline run, select the first link (**View Activity Runs**) in the **Actions** column. For the third pipeline, three activity runs appear: one for each chained activity in the pipeline (web activity to start your IR, Execute SSIS Package activity to run your package, and web activity to stop your IR). To view the pipeline runs again, select the **Pipelines** link at the top.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/activity-runs.png" alt-text="Activity runs":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/activity-runs.png" alt-text="Screenshot that shows activity runs.":::
-3. To view the trigger runs, select **Trigger Runs** from the drop-down list under **Pipeline Runs** at the top.
+- To view the trigger runs, select **Trigger Runs** from the dropdown list under **Pipeline Runs** at the top.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/trigger-runs.png" alt-text="Trigger runs":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/trigger-runs.png" alt-text="Screenshot that shows trigger runs.":::
-### Monitor your pipelines and triggers with PowerShell
+### Monitor your pipelines and triggers by using PowerShell
-Use scripts like the following examples to monitor your pipelines and triggers.
+Use scripts like the following examples to monitor your pipelines and triggers:
-1. Get the status of a pipeline run.
+- Get the status of a pipeline run:
  ```powershell
  Get-AzDataFactoryV2PipelineRun -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -PipelineRunId $myPipelineRun
  ```
-2. Get info about a trigger.
+- Get info about a trigger:
  ```powershell
  Get-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name "myTrigger"
  ```
-3. Get the status of a trigger run.
+- Get the status of a trigger run:
  ```powershell
  Get-AzDataFactoryV2TriggerRun -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -TriggerName "myTrigger" -TriggerRunStartedAfter "2018-07-15" -TriggerRunStartedBefore "2018-07-16"
  ```
-## Create and schedule Azure Automation runbook that starts/stops Azure-SSIS IR
+## Create and schedule an Azure Automation runbook that starts and stops an Azure-SSIS IR
-In this section, you will learn to create Azure Automation runbook that executes PowerShell script, starting/stopping your Azure-SSIS IR on a schedule. This is useful when you want to execute additional scripts before/after starting/stopping your IR for pre/post-processing.
+In this section, you learn how to create an Azure Automation runbook that runs a PowerShell script to start and stop your Azure-SSIS IR on a schedule. This approach is useful when you want to run additional scripts before or after starting and stopping your IR for pre-processing and post-processing.
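
As a sketch of what such a runbook's script can do, the **Az.DataFactory** module that you import in the following steps provides cmdlets to start and stop an Azure-SSIS IR directly. The resource group, data factory, and IR names below are placeholders, and authentication through the Run As account is omitted for brevity:

```powershell
# Start the Azure-SSIS IR (-Force skips the confirmation prompt).
Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "<resource group>" -DataFactoryName "<data factory>" -Name "<Azure-SSIS IR>" -Force

# Run any additional pre-processing scripts or SSIS workloads here.

# Stop the Azure-SSIS IR when your workloads finish.
Stop-AzDataFactoryV2IntegrationRuntime -ResourceGroupName "<resource group>" -DataFactoryName "<data factory>" -Name "<Azure-SSIS IR>" -Force
```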
### Create your Azure Automation account
-If you do not have an Azure Automation account already, create one by following the instructions in this step. For detailed steps, see [Create an Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md) article. As part of this step, you create an **Azure Run As** account (a service principal in your Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it is the same subscription that contains your ADF with Azure SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources.
+If you don't have an Azure Automation account, create one by following the instructions in this section. For detailed steps, see [Create an Azure Automation account](../automation/quickstarts/create-azure-automation-account-portal.md).
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, ADF UI/app is only supported in Microsoft Edge and Google Chrome web browsers.
-2. Sign in to [Azure portal](https://portal.azure.com/).
-3. Select **New** on the left menu, select **Monitoring + Management**, and select **Automation**.
+As part of this process, you create an **Azure Run As** account (a service principal in Azure Active Directory) and assign it a **Contributor** role in your Azure subscription. Ensure that it's the same subscription that contains your data factory with the Azure-SSIS IR. Azure Automation will use this account to authenticate to Azure Resource Manager and operate on your resources.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-automation.png" alt-text="Screenshot that highlights the Monitoring + Management > Automation option.":::
-
-2. In **Add Automation Account** pane, do the following actions.
+1. Open the Microsoft Edge or Google Chrome web browser. Currently, the Data Factory UI is supported only in these browsers.
+2. Sign in to the [Azure portal](https://portal.azure.com/).
+3. Select **New** on the left menu, select **Monitoring + Management**, and then select **Automation**.
- 1. For **Name**, enter a name for your Azure Automation account.
- 2. For **Subscription**, select the subscription that has your ADF with Azure-SSIS IR.
- 3. For **Resource group**, select **Create new** to create a new resource group or **Use existing** to select an existing one.
- 4. For **Location**, select a location for your Azure Automation account.
- 5. Confirm **Create Azure Run As account** as **Yes**. A service principal will be created in your Azure Active Directory and assigned a **Contributor** role in your Azure subscription.
- 6. Select **Pin to dashboard** to display it permanently in Azure dashboard.
- 7. Select **Create**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-automation.png" alt-text="Screenshot that shows selections for opening Azure Automation in Azure Marketplace.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-automation-account-window.png" alt-text="New -> Monitoring + Management -> Automation":::
-
-3. You will see the deployment status of your Azure Automation account in Azure dashboard and notifications.
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/deploying-automation.png" alt-text="Deploying automation":::
-
-4. You will see the homepage of your Azure Automation account after it is created successfully.
+4. On the **Add Automation Account** pane, do the following actions:
+
+ 1. For **Name**, enter a name for your Azure Automation account.
+ 2. For **Subscription**, select the subscription that has your data factory with the Azure-SSIS IR.
+ 3. For **Resource group**, select **Create new** to create a new resource group, or select **Use existing** to use an existing one.
+ 4. For **Location**, select a location for your Azure Automation account.
+ 5. For **Create Azure Run As account**, select **Yes**. A service principal will be created in your Azure Active Directory instance and assigned a **Contributor** role in your Azure subscription.
+ 6. Select **Pin to dashboard** to display the account permanently on the Azure dashboard.
+ 7. Select **Create**.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/automation-home-page.png" alt-text="Automation home page":::
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-automation-account-window.png" alt-text="Screenshot that shows selections for adding an Azure Automation account.":::
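If you prefer scripting over the portal, the account creation above can be sketched with the Az PowerShell module. This is a sketch only: the resource names and location are placeholders, and the `-AssignSystemIdentity` switch assumes a recent Az.Automation version.

```powershell
# Sketch only: creates an Automation account equivalent to the portal steps above.
# All names and the location are placeholder values -- substitute your own.
Connect-AzAccount
Select-AzSubscription -SubscriptionName "MySubscription"

New-AzResourceGroup -Name "ssis-automation-rg" -Location "EastUS"

# -AssignSystemIdentity enables a system-assigned managed identity, which the
# runbook later in this article signs in with (assumes a recent Az.Automation).
New-AzAutomationAccount `
    -ResourceGroupName "ssis-automation-rg" `
    -Name "ssis-ir-automation" `
    -Location "EastUS" `
    -AssignSystemIdentity
```

Note that the portal's **Create Azure Run As account** option has no direct equivalent here; Run As accounts are being retired in favor of the managed identity that this sketch enables.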
-### Import ADF modules
+5. Monitor the deployment status of your Azure Automation account on the Azure dashboard and in notifications.
-1. Select **Modules** in **SHARED RESOURCES** section on the left menu and verify whether you have **Az.DataFactory** + **Az.Profile** in the list of modules.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/deploying-automation.png" alt-text="Screenshot of an indicator that shows Azure Automation deployment in progress.":::
- :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image1.png" alt-text="Verify the required modules":::
+6. Confirm that the home page of your Azure Automation account appears. It means you created the account successfully.
-2. If you do not have **Az.DataFactory**, go to the PowerShell Gallery for [Az.DataFactory module](https://www.powershellgallery.com/packages/Az.DataFactory/), select **Deploy to Azure Automation**, select your Azure Automation account, and then select **OK**. Go back to view **Modules** in **SHARED RESOURCES** section on the left menu and wait until you see **STATUS** of **Az.DataFactory** module changed to **Available**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/automation-home-page.png" alt-text="Screenshot that shows the Azure Automation home page.":::
- :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image2.png" alt-text="Verify the Data Factory module":::
+### Import Data Factory modules
-3. If you do not have **Az.Profile**, go to the PowerShell Gallery for [Az.Profile module](https://www.powershellgallery.com/packages/Az.profile/), select **Deploy to Azure Automation**, select your Azure Automation account, and then select **OK**. Go back to view **Modules** in **SHARED RESOURCES** section on the left menu and wait until you see **STATUS** of the **Az.Profile** module changed to **Available**.
+On the left menu, in the **SHARED RESOURCES** section, select **Modules**. Verify that you have **Az.DataFactory** and **Az.Profile** in the list of modules. They're both required.
- :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image3.png" alt-text="Verify the Profile module":::
+ :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image1.png" alt-text="Screenshot that shows a list of modules in Azure Automation.":::
+
+If you don't have **Az.DataFactory**:
+
+1. Go to the [Az.DataFactory module](https://www.powershellgallery.com/packages/Az.DataFactory/) in the PowerShell Gallery.
+1. Select **Deploy to Azure Automation**, select your Azure Automation account, and then select **OK**.
+1. Go back to view **Modules** in the **SHARED RESOURCES** section on the left menu. Wait until **STATUS** for the **Az.DataFactory** module changes to **Available**.
+
+ :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image2.png" alt-text="Screenshot that shows verification that the Data Factory module appears in the module list.":::
+
+If you don't have **Az.Profile**:
+
+1. Go to the [Az.Profile module](https://www.powershellgallery.com/packages/Az.profile/) in the PowerShell Gallery.
+1. Select **Deploy to Azure Automation**, select your Azure Automation account, and then select **OK**.
+1. Go back to view **Modules** in the **SHARED RESOURCES** section on the left menu. Wait until **STATUS** for the **Az.Profile** module changes to **Available**.
+
+ :::image type="content" source="media/how-to-schedule-azure-ssis-integration-runtime/automation-fix-image3.png" alt-text="Screenshot that shows verification that the profile module appears in the module list.":::
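As an alternative to the **Deploy to Azure Automation** button, the required modules can be imported from the PowerShell Gallery with `New-AzAutomationModule`. A sketch with placeholder account names; the gallery package URI pattern is an assumption to verify against the module versions you need.

```powershell
# Sketch: import the required modules into the Automation account from the
# PowerShell Gallery. Account and resource group names are placeholders.
foreach ($module in "Az.Profile", "Az.DataFactory") {
    New-AzAutomationModule `
        -ResourceGroupName "ssis-automation-rg" `
        -AutomationAccountName "ssis-ir-automation" `
        -Name $module `
        -ContentLinkUri "https://www.powershellgallery.com/api/v2/package/$module"
}

# The import runs asynchronously. Poll until ProvisioningState reports
# Succeeded (shown as "Available" in the portal module list).
Get-AzAutomationModule `
    -ResourceGroupName "ssis-automation-rg" `
    -AutomationAccountName "ssis-ir-automation" `
    -Name "Az.DataFactory"
```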
### Create your PowerShell runbook
-The following section provides steps for creating a PowerShell runbook. The script associated with your runbook either starts/stops Azure-SSIS IR based on the command you specify for **OPERATION** parameter. This section does not provide the complete details for creating a runbook. For more information, see [Create a runbook](../automation/learn/powershell-runbook-managed-identity.md) article.
+This section provides steps for creating a PowerShell runbook. The script associated with your runbook either starts or stops the Azure-SSIS IR, based on the command that you specify for the **OPERATION** parameter.
-1. Switch to **Runbooks** tab and select **+ Add a runbook** from the toolbar.
+The following steps don't provide the complete details for creating a runbook. For more information, see [Create a runbook](../automation/learn/powershell-runbook-managed-identity.md).
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/runbooks-window.png" alt-text="Screenshot that highlights the +Add a runbook button.":::
-
-2. Select **Create a new runbook** and do the following actions:
+1. Switch to the **Runbooks** tab and select **+ Add a runbook** from the toolbar.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/runbooks-window.png" alt-text="Screenshot that shows the button for adding a runbook.":::
+
+2. Select **Create a new runbook**, and then do the following actions:
   1. For **Name**, enter **StartStopAzureSsisRuntime**.
   2. For **Runbook type**, select **PowerShell**.
   3. Select **Create**.
-
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-runbook-window.png" alt-text="Add a runbook button":::
-
-3. Copy & paste the following PowerShell script to your runbook script window. Save and then publish your runbook by using **Save** and **Publish** buttons on the toolbar.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-runbook-window.png" alt-text="Screenshot that shows details for creating a runbook.":::
+
+3. Copy and paste the following PowerShell script into your runbook script window. Save and then publish your runbook by using the **Save** and **Publish** buttons on the toolbar.
>[!NOTE]
- > This example uses System-assigned managed identity. If you are using Run As account (service principal) or User-assigned managed identity, refer to [Azure Automation Sample scripts](../automation/migrate-run-as-accounts-managed-identity.md?tabs=ua-managed-identity#sample-scripts) for login part.
+ > This example uses a system-assigned managed identity. If you're using a Run As account (service principal) or a user-assigned managed identity, refer to [Azure Automation sample scripts](../automation/migrate-run-as-accounts-managed-identity.md?tabs=ua-managed-identity#sample-scripts) for the login part.
>
- > Enable appropriate RBAC permissions for the managed identity of this Automation account. Refer to [Roles and permissions for Azure Data Factory](concepts-roles-permissions.md).
+ > Enable appropriate role-based access control (RBAC) permissions for the managed identity of this Automation account. For more information, see [Roles and permissions for Azure Data Factory](concepts-roles-permissions.md).
```powershell Param
The following section provides steps for creating a PowerShell runbook. The scri
"##### Completed #####" ```
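The full script body isn't shown above. Its core logic is roughly the following sketch (not the published script verbatim): sign in with the system-assigned managed identity, then start or stop the IR based on the **OPERATION** parameter.

```powershell
Param
(
    [Parameter(Mandatory = $true)] [String] $ResourceGroupName,
    [Parameter(Mandatory = $true)] [String] $DataFactoryName,
    [Parameter(Mandatory = $true)] [String] $AzureSSISName,
    [Parameter(Mandatory = $true)] [String] $Operation
)

# Sign in by using the Automation account's system-assigned managed identity.
Connect-AzAccount -Identity

if ($Operation.ToUpper() -eq "START") {
    "##### Starting #####"
    Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
        -DataFactoryName $DataFactoryName -Name $AzureSSISName -Force
}
elseif ($Operation.ToUpper() -eq "STOP") {
    "##### Stopping #####"
    Stop-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
        -DataFactoryName $DataFactoryName -Name $AzureSSISName -Force
}

"##### Completed #####"
```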
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/edit-powershell-runbook.png" alt-text="Edit PowerShell runbook":::
-
-4. Test your runbook by selecting **Start** button on the toolbar.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/edit-powershell-runbook.png" alt-text="Screenshot of the interface for editing a PowerShell runbook.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-runbook-button.png" alt-text="Start runbook button":::
-
-5. In **Start Runbook** pane, do the following actions:
+4. Test your runbook by selecting the **Start** button on the toolbar.
- 1. For **RESOURCE GROUP NAME**, enter the name of resource group that has your ADF with Azure-SSIS IR.
- 2. For **DATA FACTORY NAME**, enter the name of your ADF with Azure-SSIS IR.
- 3. For **AZURESSISNAME**, enter the name of Azure-SSIS IR.
- 4. For **OPERATION**, enter **START**.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-runbook-button.png" alt-text="Screenshot that shows the button for starting a runbook.":::
+
+5. On the **Start Runbook** pane, do the following actions:
+
    1. For **RESOURCEGROUPNAME**, enter the name of the resource group that has your data factory with the Azure-SSIS IR.
+ 2. For **DATAFACTORYNAME**, enter the name of your data factory with the Azure-SSIS IR.
+ 3. For **AZURESSISNAME**, enter the name of the Azure-SSIS IR.
+ 4. For **OPERATION**, enter **START**.
5. Select **OK**.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-runbook-window.png" alt-text="Start runbook window":::
-
-6. In the job window, select **Output** tile. In the output window, wait for the message **##### Completed #####** after you see **##### Starting #####**. Starting Azure-SSIS IR takes approximately 20 minutes. Close **Job** window and get back to **Runbook** window.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-runbook-window.png" alt-text="Screenshot of the pane for parameters in starting a runbook.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-completed.png" alt-text="Screenshot that highlights the Output tile.":::
-
-7. Repeat the previous two steps using **STOP** as the value for **OPERATION**. Start your runbook again by selecting **Start** button on the toolbar. Enter your resource group, ADF, and Azure-SSIS IR names. For **OPERATION**, enter **STOP**. In the output window, wait for the message **##### Completed #####** after you see **##### Stopping #####**. Stopping Azure-SSIS IR does not take as long as starting it. Close **Job** window and get back to **Runbook** window.
+6. On the **Job** pane, select the **Output** tile. On the **Output** pane, wait for the message **##### Completed #####** after you see **##### Starting #####**. Starting an Azure-SSIS IR takes about 20 minutes. Close the **Job** pane and get back to the **Runbook** page.
-8. You can also trigger your runbook via a webhook that can be created by selecting the **Webhooks** menu item or on a schedule that can be created by selecting the **Schedules** menu item as specified below.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-completed.png" alt-text="Screenshot that shows the output pane.":::
-## Create schedules for your runbook to start/stop Azure-SSIS IR
+7. Repeat the previous two steps by using **STOP** as the value for **OPERATION**. Start your runbook again by selecting the **Start** button on the toolbar. Enter your resource group, data factory, and Azure-SSIS IR names. For **OPERATION**, enter **STOP**. On the **Output** pane, wait for the message **##### Completed #####** after you see **##### Stopping #####**. Stopping an Azure-SSIS IR does not take as long as starting it. Close the **Job** pane and get back to the **Runbook** page.
-In the previous section, you have created your Azure Automation runbook that can either start or stop Azure-SSIS IR. In this section, you will create two schedules for your runbook. When configuring the first schedule, you specify **START** for **OPERATION**. Similarly, when configuring the second one, you specify **STOP** for **OPERATION**. For detailed steps to create schedules, see [Create a schedule](../automation/shared-resources/schedules.md#create-a-schedule) article.
+8. You can also trigger your runbook via a webhook. To create a webhook, select the **Webhooks** menu item. Or you can create the webhook on a schedule by selecting the **Schedules** menu item, as specified in the next section.
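Triggering the runbook through a webhook amounts to an HTTP POST to the one-time URL that Azure Automation returns when the webhook is created. A sketch with a placeholder URL; the `JobIds` field is the documented response shape for Automation webhooks, but verify it against your own response.

```powershell
# Sketch: start the runbook through its webhook. The URL is a placeholder --
# Azure Automation shows the real one-time URL only when you create the webhook.
$webhookUrl = "https://<region>.webhook.azure-automation.net/webhooks?token=<token>"

$response = Invoke-RestMethod -Method Post -Uri $webhookUrl
$response.JobIds    # IDs of the runbook jobs queued by this call
```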
-1. In **Runbook** window, select **Schedules**, and select **+ Add a schedule** on the toolbar.
+## Create schedules for your runbook to start and stop an Azure-SSIS IR
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-schedules-button.png" alt-text="Azure SSIS IR - started":::
-
-2. In **Schedule Runbook** pane, do the following actions:
+In the previous section, you created an Azure Automation runbook that can either start or stop an Azure-SSIS IR. In this section, you create two schedules for your runbook. When you're configuring the first schedule, you specify **START** for **OPERATION**. When you're configuring the second one, you specify **STOP** for **OPERATION**. For detailed steps to create schedules, see [Create a schedule](../automation/shared-resources/schedules.md#create-a-schedule).
+
+1. On the **Runbook** page, select **Schedules**, and then select **+ Add a schedule** on the toolbar.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/add-schedules-button.png" alt-text="Screenshot that shows the button for adding a schedule.":::
+
+2. On the **Schedule Runbook** pane, do the following actions:
- 1. Select **Link a schedule to your runbook**.
+ 1. Select **Link a schedule to your runbook**.
2. Select **Create a new schedule**.
- 3. In **New Schedule** pane, enter **Start IR daily** for **Name**.
- 4. For **Starts**, enter a time that is a few minutes past the current time.
- 5. For **Recurrence**, select **Recurring**.
- 6. For **Recur every**, enter **1** and select **Day**.
- 7. Select **Create**.
+ 3. On the **New Schedule** pane, enter **Start IR daily** for **Name**.
+ 4. For **Starts**, enter a time that's a few minutes past the current time.
+ 5. For **Recurrence**, select **Recurring**.
+ 6. For **Recur every**, enter **1** and select **Day**.
+ 7. Select **Create**.
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-schedule-start.png" alt-text="Schedule for Azure SSIS IR start":::
-
-3. Switch to **Parameters and run settings** tab. Specify your resource group, ADF, and Azure-SSIS IR names. For **OPERATION**, enter **START** and select **OK**. Select **OK** again to see the schedule on **Schedules** page of your runbook.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/new-schedule-start.png" alt-text="Screenshot that shows selections for scheduling the start of an Azure-SSIS IR.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-schedule.png" alt-text="Screenshot that highlights the Operation field.":::
-
-4. Repeat the previous two steps to create a schedule named **Stop IR daily**. Enter a time that is at least 30 minutes after the time you specified for **Start IR daily** schedule. For **OPERATION**, enter **STOP** and select **OK**. Select **OK** again to see the schedule on **Schedules** page of your runbook.
+3. Switch to the **Parameters and run settings** tab. Specify your resource group, data factory, and Azure-SSIS IR names. For **OPERATION**, enter **START**, and then select **OK**. Select **OK** again to see the schedule on the **Schedules** page of your runbook.
-5. In **Runbook** window, select **Jobs** on the left menu. You should see the jobs created by your schedules at the specified times and their statuses. You can see the job details, such as its output, similar to what you have seen after you tested your runbook.
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/start-schedule.png" alt-text="Screenshot that highlights the value for the operation parameter in scheduling the start of a runbook.":::
- :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/schedule-jobs.png" alt-text="Schedule for starting the Azure SSIS IR":::
-
-6. After you are done testing, disable your schedules by editing them. Select **Schedules** on the left menu, select **Start IR daily/Stop IR daily**, and select **No** for **Enabled**.
+4. Repeat the previous two steps to create a schedule named **Stop IR daily**. Enter a time that's at least 30 minutes after the time that you specified for the **Start IR daily** schedule. For **OPERATION**, enter **STOP**, and then select **OK**. Select **OK** again to see the schedule on the **Schedules** page of your runbook.
+
+5. On the **Runbook** page, select **Jobs** on the left menu. The page that opens lists the jobs created by your schedules at the specified times, along with their statuses. You can see the job details, such as its output, similar to what appeared after you tested your runbook.
+
+ :::image type="content" source="./media/how-to-schedule-azure-ssis-integration-runtime/schedule-jobs.png" alt-text="Screenshot that shows the schedule for starting an Azure-SSIS IR.":::
+
+6. When you finish testing, disable your schedules by editing them. Select **Schedules** on the left menu, select **Start IR daily/Stop IR daily**, and then select **No** for **Enabled**.
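Steps 1 through 4 above can also be scripted. The following sketch creates the daily **START** schedule and links it to the runbook; values in angle brackets are placeholders, and the parameter names match the runbook parameters used earlier in this article.

```powershell
# Sketch: create the "Start IR daily" schedule and link it to the runbook.
# Resource names are placeholders -- substitute your own values.
New-AzAutomationSchedule `
    -ResourceGroupName "ssis-automation-rg" `
    -AutomationAccountName "ssis-ir-automation" `
    -Name "Start IR daily" `
    -StartTime (Get-Date).AddMinutes(10) `
    -DayInterval 1

Register-AzAutomationScheduledRunbook `
    -ResourceGroupName "ssis-automation-rg" `
    -AutomationAccountName "ssis-ir-automation" `
    -RunbookName "StartStopAzureSsisRuntime" `
    -ScheduleName "Start IR daily" `
    -Parameters @{
        RESOURCEGROUPNAME = "<your-adf-resource-group>"
        DATAFACTORYNAME   = "<your-data-factory>"
        AZURESSISNAME     = "<your-azure-ssis-ir>"
        OPERATION         = "START"
    }

# Repeat with a "Stop IR daily" schedule, a StartTime at least 30 minutes
# later, and OPERATION = "STOP".
```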
## Next steps
+
See the following blog post:
-- [Modernize and extend your ETL/ELT workflows with SSIS activities in ADF pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
-See the following articles from SSIS documentation:
+- [Modernize and extend your ETL/ELT workflows with SSIS activities in Azure Data Factory pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
+
+See the following articles from SSIS documentation:
- [Deploy, run, and monitor an SSIS package on Azure](/sql/integration-services/lift-shift/ssis-azure-deploy-run-monitor-tutorial)
-- [Connect to SSIS catalog on Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database)
-- [Schedule package execution on Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)
+- [Connect to the SSIS catalog in Azure](/sql/integration-services/lift-shift/ssis-azure-connect-to-catalog-database)
+- [Schedule package execution in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)
- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
Previously updated : 09/22/2022 Last updated : 06/12/2023 # Pricing example: Data integration in Azure Data Factory Managed VNET
To accomplish the scenario, you need to create two pipelines with the following
| **Operations** | **Types and Units** |
| --- | --- |
| Run Pipeline | 6 Activity runs **per execution** (2 for trigger runs, 4 for activity runs) = 1440, rounded up since the calculator only allows increments of 1000.|
-| Execute Delete Activity: pipeline execution time **per execution** = 7 min. If the Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC. | Total 7 min / 60 min \* 240 montly executions = 28 pipeline activity execution hours in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60 minutes Time To Live (TTL) for pipeline activity. |
-| Copy Data Assumption: DIU execution time **per execution** = 10 min if the Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC and the Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC. | 10 min \ 60 min * 4 Azure Integration Runtime (default DIU setting = 4) For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
+| Execute Delete Activity: pipeline execution time **per execution** = 7 min. If the Delete Activity execution in the first pipeline is from 10:00 AM UTC to 10:05 AM UTC and the Delete Activity execution in second pipeline is from 10:02 AM UTC to 10:07 AM UTC. | Total (7 min + 60 min) / 60 min * 30 monthly executions = 33.5 pipeline activity execution hours in Managed VNET. Pipeline activity supports up to 50 concurrent executions in Managed VNET. There's a 60 minutes Time To Live (TTL) for pipeline activity. |
+| Copy Data Assumption: DIU execution time **per execution** = 10 min if the Copy execution in first pipeline is from 10:06 AM UTC to 10:15 AM UTC and the Copy Activity execution in second pipeline is from 10:08 AM UTC to 10:17 AM UTC.| [(10 min + 2 min (queue time charges up to 2 minutes)) / 60 min * 4 Azure Managed VNET Integration Runtime (default DIU setting = 4)] * 2 = 1.6 daily data movement activity execution hours in Managed VNET. For more information on data integration units and optimizing copy performance, see [this article](copy-activity-performance.md) |
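The execution-hour figures in the updated rows follow from simple arithmetic, sketched below; this reproduces the 33.5 monthly pipeline-activity hours and the 1.6 daily data-movement hours quoted above.

```powershell
# Pipeline (Delete) activity in Managed VNET: each execution is billed for its
# run time plus the 60-minute Time To Live, across 30 monthly executions.
$pipelineActivityHours = (7 + 60) / 60 * 30      # = 33.5 hours per month

# Copy activity: (10 min run + up to 2 min queue time) at 4 DIUs, for 2 pipelines.
$copyActivityHours = (10 + 2) / 60 * 4 * 2       # = 1.6 hours per day

$pipelineActivityHours
$copyActivityHours
```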
## Pricing calculator example
-**Total scenario pricing for 30 days: $42.14**
+**Total scenario pricing for 30 days: $83.50**
:::image type="content" source="media/pricing-concepts/scenario-5-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for data integration with Managed VNET." lightbox="media/pricing-concepts/scenario-5-pricing-calculator.png":::
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
data-lake-store Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
databox Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 02/14/2023 Last updated : 06/12/2023
defender-for-cloud How To Enable Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-enable-agentless-containers.md
We do not support or charge stopped clusters. To get the value of agentless capa
We suggest that you unlock the locked resource group/subscription/cluster, make the relevant requests manually, and then re-lock the resource group/subscription/cluster by doing the following:
-1. Enable the feature flag manually via CLI:
+1. Enable the feature flag manually via CLI by using [Trusted Access](/azure/aks/trusted-access-feature).
+ ``` CLI
We suggest that you unlock the locked resource group/subscription/cluster, make
```
-1. Perform the bind operation in the CLI:
+2. Perform the bind operation in the CLI:
``` CLI
For locked clusters, you can also do one of the following:
Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
+## Are you using an updated version of AKS?
+
+Learn more about [supported Kubernetes versions in Azure Kubernetes Service (AKS)](/azure/aks/supported-kubernetes-versions?tabs=azure-cli).
+
## Support for exemptions

You can customize your vulnerability assessment experience by exempting management groups, subscriptions, or specific resources from your secure score. Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
You can customize your vulnerability assessment experience by exempting manageme
- Learn how to [view and remediate vulnerability assessment findings for registry images and running images](view-and-remediate-vulnerability-assessment-findings.md).
- Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
- Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/11/2023 Last updated : 06/15/2023 # What's new in Microsoft Defender for Cloud?
Updates in June include:
|Date |Update |
|--|--|
+|June 15 | [Control updates were made to the NIST 800-53 standards in regulatory compliance](#control-updates-were-made-to-the-nist-800-53-standards-in-regulatory-compliance) |
|June 11 | [Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud](#planning-of-cloud-migration-with-an-azure-migrate-business-case-now-includes-defender-for-cloud) |
|June 7 | [Express configuration for vulnerability assessments in Defender for SQL is now Generally Available](#express-configuration-for-vulnerability-assessments-in-defender-for-sql-is-now-generally-available) |
|June 6 | [More scopes added to existing Azure DevOps Connectors](#more-scopes-added-to-existing-azure-devops-connectors) |
|June 5 | [Onboarding directly (without Azure Arc) to Defender for Servers is now Generally Available](#onboarding-directly-without-azure-arc-to-defender-for-servers-is-now-generally-available) |
|June 4 | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) |
+### Control updates were made to the NIST 800-53 standards in regulatory compliance
+
+June 15, 2023
+
+The NIST 800-53 standards (both R4 and R5) have recently been updated with control changes in Microsoft Defender for Cloud regulatory compliance. The Microsoft-managed controls have been removed from the standard, and the information on the Microsoft responsibility implementation (as part of the cloud shared responsibility model) is now available only in the control details pane under **Microsoft Actions**.
+
+These controls were previously calculated as passed controls, so you may see a significant dip in your compliance score for NIST standards between April 2023 and May 2023.
+
+For more information on compliance controls, see [Tutorial: Regulatory compliance checks - Microsoft Defender for Cloud](regulatory-compliance-dashboard.md#investigate-regulatory-compliance-issues).
+
### Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud

June 11, 2023
defender-for-iot Plan Network Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/best-practices/plan-network-monitoring.md
- Title: Plan your sensor connections for OT monitoring - Microsoft Defender for IoT
-description: Learn about best practices for planning your OT network monitoring with Microsoft Defender for IoT.
- Previously updated : 06/02/2022--
-# Plan your sensor connections for OT monitoring
-
-After you've [understood your network's OT architecture and how the Purdue module applies](understand-network-architecture.md), start planning your sensor connections in a Microsoft Defender for IoT deployment.
--
-## Sensor placement considerations
-
-We recommend that Defender for IoT monitors traffic from Purdue layers 1 and 2. For some architectures, if OT traffic exists on layer 3, Defender for IoT will also monitor layer 3 traffic.
-
-Review your OT and ICS network diagram together with your site engineers to define the best place to connect to Defender for IoT, and where you can get the most relevant traffic for monitoring. We recommend that you meet with the local network and operational teams to clarify expectations. Create lists of the following data about your network:
--- Known devices-- Estimated number of devices-- Vendors and industrial protocols-- Switch models and whether they support port mirroring-- Switch managers, including external resources-- OT networks on your site-
-## Multi-sensor deployments
-
-The following table lists best practices when deploying multiple Defender for IoT sensors:
-
-| **Number** | **Meters** | **Dependency** | **Number of sensors** |
-|--|--|--|--|
-| The maximum distance between switches | 80 meters | Prepared Ethernet cable | More than 1 |
-| Number of OT networks | More than 1 | No physical connectivity | More than 1 |
-| Number of switches | Can use RSPAN configuration | Up to eight switches with local span close to the sensor by cabling distance | More than 1 |
--
-## Questions for planning your network connections
-
-While you're reviewing your site architecture to determine whether or not to monitor a specific switch, considering the following questions:
--- What is the cost/benefit versus the importance of monitoring this switch?--- If a switch is unmanaged, can you monitor the traffic from a higher-level switch? If the ICS architecture is a [ring topology](sample-connectivity-models.md#sample-ring-topology), only one switch in the ring needs monitoring.--- What is the security or operational risk in the network?--- Can you monitor the switch's VLAN? Is the VLAN visible in another switch that you can monitor?-
-Other common questions to consider when planning your network connections to Defender for IoT include:
--- What are the overall goals of the implementation? Are a complete inventory and accurate network map important?--- Are there multiple or redundant networks in the ICS? Are all the networks being monitored?--- Are there communications between the ICS and the enterprise (business) network? Are these communications being monitored?--- Are VLANs configured in the network design?--- How is maintenance of the ICS performed, with fixed or transient devices?--- Where are firewalls installed in the monitored networks?--- Is there any routing in the monitored networks?--- What OT protocols are active on the monitored networks?--- If we connect to this switch, will we see communication between the HMI and the PLCs?--- What is the physical distance between the ICS switches and the enterprise firewall?--- Can unmanaged switches be replaced with managed switches, or is the use of network TAPs an option?--- Is there any serial communication in the network? If yes, show it on the network diagram.--- If the Defender for IoT appliance should be connected to that switch, is there physical available rack space in that cabinet?-
-## Next steps
-
-After you've understood your own network's OT architecture and planned out your deployment, learn more about methods for traffic mirroring and passive or active monitoring, and browse sample connectivity methods.
-
-For more information, see:
--- [Traffic mirroring methods for OT monitoring](traffic-mirroring-methods.md)-- [Sample OT network connectivity models](sample-connectivity-models.md)-
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
Use the following hardware profiles for production line monitoring, typically in
|L60 | Up to 10 Mbps | 100 |Physical / Virtual|

> [!IMPORTANT]
-> <a name="l60"></a>Defender for IoT software versions later than 23.1 are planned to require a minimum disk size of 100 GB. Therefore, the L60 hardware profile, which supports 60 GB of hard disk, will be deprecated in versions later than 23.1.
+> <a name="l60"></a>Defender for IoT software versions later than 23.2 are planned to require a minimum disk size of 100 GB. Therefore, the L60 hardware profile, which supports 60 GB of hard disk, will be deprecated in versions later than 23.2.
>
> We recommend that you plan any new deployments accordingly, using hardware profiles that support at least 100 GB. Migration steps from the L60 hardware profile will be provided together with the L60 deprecation.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Check out our new structure to follow through viewing devices and assets, managi
- [Quickstart: Get started with Defender for IoT](getting-started.md)
- [Tutorial: Microsoft Defender for IoT trial setup](tutorial-onboarding.md)
- [Tutorial: Get started with Enterprise IoT](tutorial-getting-started-eiot-sensor.md)
-- [Plan your sensor connections for OT monitoring](best-practices/plan-network-monitoring.md)

> [!NOTE]
> To send feedback on docs via GitHub, scroll to the bottom of the page and select the **Feedback** option for **This page**. We'd be glad to hear from you!
devtest-labs Connect Virtual Machine Through Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/connect-virtual-machine-through-browser.md
description: Learn how to connect to lab virtual machines (VMs) through a browse
Previously updated : 03/14/2022
Last updated : 06/14/2023

# Connect to DevTest Labs VMs through a browser with Azure Bastion
devtest-labs Devtest Lab Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-concepts.md
description: Learn definitions of some basic DevTest Labs concepts related to la
Previously updated : 03/03/2022
Last updated : 06/14/2023

# DevTest Labs concepts
For more information about the differences between custom images and formulas, s
In DevTest Labs, an environment is a collection of Azure platform-as-a-service (PaaS) resources, such as an Azure Web App or a SharePoint farm. You can create environments in labs by using ARM templates. For more information, see [Use ARM templates to create DevTest Labs environments](devtest-lab-create-environment-from-arm.md). For more information about ARM template structure and properties, see [Template format](../azure-resource-manager/templates/syntax.md#template-format).
event-grid Custom Azure Policies For Security Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-azure-policies-for-security-control.md
Last updated 06/24/2021
This article provides you with sample custom Azure policies to control the destinations that can be configured in Event Grid's event subscriptions. [Azure Policy](../governance/policy/overview.md) helps you enforce organizational standards and regulatory compliance for different concerns such as security, cost, resource consistency, and management. Prominent among those concerns are security compliance standards that help maintain a security posture for your organization. To help you with your security controls, the policies presented in this article help you prevent data exfiltration or the delivery of events to unauthorized endpoints or Azure services.

> [!NOTE]
-> Azure Event Grid provides built-in policies for compliance domains and security controls related to several compliance standards. You can find those built-in policies in Event Grid's [Azure Security Benchmark](security-controls-policy.md#azure-security-benchmark).
+> Azure Event Grid provides built-in policies for compliance domains and security controls related to several compliance standards. You can find those built-in policies in Event Grid's [Microsoft Cloud Security Benchmark](security-controls-policy.md#microsoft-cloud-security-benchmark).
To prevent data exfiltration, organizations may want to limit the destinations to which Event Grid can deliver events. You can do this by assigning policies that allow the creation or update of [event subscriptions](concepts.md#event-subscriptions) only when the destination is one of the sanctioned destinations in the policy. The policy effect used to prevent a resource request from succeeding is [deny](../governance/policy/concepts/effects.md#deny).
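As an illustrative sketch (not the article's own sample), a custom policy rule along these lines denies any event subscription whose destination endpoint type isn't on a sanctioned list. The `destination.endpointType` alias and the sanctioned values shown here are assumptions; verify the available aliases and adapt the list to your environment:

```json
{
  "mode": "All",
  "policyRule": {
    "if": {
      "allOf": [
        {
          "field": "type",
          "equals": "Microsoft.EventGrid/eventSubscriptions"
        },
        {
          "field": "Microsoft.EventGrid/eventSubscriptions/destination.endpointType",
          "notIn": [ "EventHub", "ServiceBusQueue" ]
        }
      ]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

Assigning a definition like this at the subscription or management-group scope blocks creation or update of event subscriptions that target any other destination type.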
event-grid Event Schema Azure Health Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-azure-health-data-services.md
These Events are triggered when a DICOM image is created or deleted by calling t
|-|--|
|**DicomImageCreated** |The event emitted after a DICOM image is created successfully.|
|**DicomImageDeleted** |The event emitted after a DICOM image is deleted successfully.|
+ |**DicomImageUpdated** |The event emitted after a DICOM image is updated successfully.|
## Example events

This section contains examples of what Azure Health Data Services Events message data would look like for each FHIR Observation and DICOM image event.
This section contains examples of what Azure Health Data Services Events message
}
```
+### DicomImageUpdated
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+```json
+{
+ "id": "83cb0f51-af41-e58c-3c6c-46344b349bc5",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{dicom-account}.dicom.azurehealthcareapis.com/v1/partitions/Microsoft.Default/studies/1.2.3.4.3/series/1.2.3.4.3.9423673/instances/1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442",
+ "data": {
+ "partitionName": "Microsoft.Default",
+ "imageStudyInstanceUid": "1.2.3.4.3",
+ "imageSeriesInstanceUid": "1.2.3.4.3.9423673",
+ "imageSopInstanceUid": "1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442",
+ "serviceHostName": "{dicom-account}.dicom.azurehealthcareapis.com",
+ "sequenceNumber": 2
+ },
+ "eventType": "Microsoft.HealthcareApis.DicomImageUpdated",
+ "dataVersion": "1",
+ "metadataVersion": "1",
+ "eventTime": "2023-06-09T16:55:44.7197137Z"
+}
+```
+# [CloudEvent schema](#tab/cloud-event-schema)
+```json
+{
+ "source": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.HealthcareApis/workspaces/{workspace-name}",
+ "subject": "{dicom-account}.dicom.azurehealthcareapis.com/v1/partitions/Microsoft.Default/studies/1.2.3.4.3/series/1.2.3.4.3.9423673/instances/1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442",
+ "type": "Microsoft.HealthcareApis.DicomImageUpdated",
+ "time": "2022-09-15T01:14:04.5613214Z",
+ "id": "7e8aca04-e815-4387-82a8-9fcf15a3114b",
+ "data": {
+ "partitionName": "Microsoft.Default",
+ "imageStudyInstanceUid": "1.2.3.4.3",
+ "imageSeriesInstanceUid": "1.2.3.4.3.9423673",
+ "imageSopInstanceUid": "1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442",
+ "serviceHostName": "{dicom-account}.dicom.azurehealthcareapis.com",
+ "sequenceNumber": 1
+ },
+ "specversion": "1.0"
+}
+```
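The payloads above can be consumed programmatically. As a quick illustration (a sketch, not part of any Azure SDK), a handler might route on the event type and assemble a DICOMweb-style instance path from the identifiers in the `data` object; the `SAMPLE` string and `dicom_instance_path` helper below are hypothetical names for this example:

```python
import json

# Sample CloudEvent payload matching the DicomImageUpdated example above (trimmed).
SAMPLE = """
{
  "type": "Microsoft.HealthcareApis.DicomImageUpdated",
  "data": {
    "partitionName": "Microsoft.Default",
    "imageStudyInstanceUid": "1.2.3.4.3",
    "imageSeriesInstanceUid": "1.2.3.4.3.9423673",
    "imageSopInstanceUid": "1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442",
    "serviceHostName": "{dicom-account}.dicom.azurehealthcareapis.com",
    "sequenceNumber": 1
  },
  "specversion": "1.0"
}
"""

def dicom_instance_path(event: dict) -> str:
    """Build a DICOMweb instance path from a DICOM image event's data section."""
    data = event["data"]
    return (
        f"/studies/{data['imageStudyInstanceUid']}"
        f"/series/{data['imageSeriesInstanceUid']}"
        f"/instances/{data['imageSopInstanceUid']}"
    )

event = json.loads(SAMPLE)
# Route on the event type before touching DICOM-specific fields.
if event["type"].startswith("Microsoft.HealthcareApis.DicomImage"):
    print(dicom_instance_path(event))
# → /studies/1.2.3.4.3/series/1.2.3.4.3.9423673/instances/1.3.6.1.4.1.45096.2.296485376.2210.1633373143.864442
```

The same three UIDs also appear in the event's `subject`, so either field can be used to locate the affected instance.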
+
## Next steps

* For an overview of the Azure Health Data Services Events feature, see [What are Events?](../healthcare-apis/events/events-overview.md).
event-grid Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Grid
description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 02/14/2023
Last updated : 06/12/2023
event-hubs Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs
description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Previously updated : 02/14/2023
Last updated : 06/12/2023
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 05/29/2023
Last updated : 06/14/2023
The tables in this article provide information on ExpressRoute geographical coverage and locations, ExpressRoute connectivity providers, and ExpressRoute System Integrators (SIs).
-> [!Note]
+> [!NOTE]
> Azure regions and ExpressRoute locations are two distinct and different concepts. Understanding the difference between the two is critical to exploring Azure hybrid networking connectivity.
>

## Azure regions
-Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in.
+
+Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When you create an Azure resource, you need to select a resource location. The resource location determines which Azure datacenter or availability zone the resource gets created in.
## ExpressRoute locations
-ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location doesn't need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
+ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are colocation facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location doesn't need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
-You'll have access to Azure services across all regions within a geopolitical region if you connected to at least one ExpressRoute location within the geopolitical region.
+You have access to Azure services across all regions within a geopolitical region if you connect to at least one ExpressRoute location within the geopolitical region.
[!INCLUDE [expressroute-azure-regions-geopolitical-region](../../includes/expressroute-azure-regions-geopolitical-region.md)]
The following table shows connectivity locations and the service providers for e
* **ER Direct** refers to [ExpressRoute Direct](expressroute-erdirect-about.md) support at each peering location. If you want to view the available bandwidth at a location, see [Determine available bandwidth](expressroute-howto-erdirect.md#resources).

### Global commercial Azure
+
| Location | Address | Zone | Local Azure regions | ER Direct | Service providers |
-| | | | | | |
-| **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | Supported | |
-| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks, AT&T NetBond, British Telecom, Colt, Equinix, euNetworks, GÉANT, InterCloud, Interxion, KPN, IX Reach, Level 3 Communications, Megaport, NTT Communications, Orange, Tata Communications, Telefonica, Telenor, Telia Carrier, Verizon, Zayo |
-| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported| BICS, British Telecom, CenturyLink Cloud Connect, Colt, DE-CIX, Equinix, euNetworks, GÉANT, Interxion, NL-IX, NOS, NTT Global DataCenters EMEA, Orange, Vodafone |
-| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | Supported | Equinix, Megaport |
-| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli, Kordia, Megaport, REANNZ, Spark NZ, Vocus Group NZ |
-| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS, National Telecom UIH |
-| **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | Supported | Colt, Equinix, NTT Global DataCenters EMEA|
-| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect, Equinix |
+|--|--|--|--|--|--|
+| **Abu Dhabi** | Etisalat KDC | 3 | UAE Central | Supported | |
+| **Amsterdam** | [Equinix AM5](https://www.equinix.com/locations/europe-colocation/netherlands-colocation/amsterdam-data-centers/am5/) | 1 | West Europe | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>Colt<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>InterCloud<br/>Interxion<br/>KPN<br/>IX Reach<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>Tata Communications<br/>Telefonica<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Amsterdam2** | [Interxion AMS8](https://www.interxion.com/Locations/amsterdam/schiphol/) | 1 | West Europe | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GÉANT<br/>Interxion<br/>NL-IX<br/>NOS<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Vodafone |
+| **Atlanta** | [Equinix AT2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at2/) | 1 | n/a | Supported | Equinix<br/>Megaport |
+| **Auckland** | [Vocus Group NZ Albany](https://www.vocus.co.nz/business/cloud-data-centres) | 2 | n/a | Supported | Devoli<br/>Kordia<br/>Megaport<br/>REANNZ<br/>Spark NZ<br/>Vocus Group NZ |
+| **Bangkok** | [AIS](https://business.ais.co.th/solution/en/azure-expressroute.html) | 2 | n/a | Supported | AIS<br/>National Telecom UIH |
+| **Berlin** | [NTT GDC](https://services.global.ntt/en-us/newsroom/ntt-ltd-announces-access-to-microsoft-azure-expressroute-at-ntts-berlin-1-data-center) | 1 | Germany North | Supported | Colt<br/>Equinix<br/>NTT Global DataCenters EMEA |
+| **Bogota** | [Equinix BG1](https://www.equinix.com/locations/americas-colocation/colombia-colocation/bogota-data-centers/bg1/) | 4 | n/a | Supported | CenturyLink Cloud Connect<br/>Equinix |
| **Busan** | [LG CNS](https://www.lgcns.com/business/cloud/datacenter/) | 2 | Korea South | n/a | LG CNS |
| **Campinas** | [Ascenty](https://www.ascenty.com/en/data-centers-en/campinas/) | 3 | Brazil South | Supported | Ascenty |
| **Canberra** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central | Supported | CDC |
-| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2| Supported | CDC, Equinix |
-| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | Supported | BCX, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Teraco, Vodacom |
-| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), Lightstorm, SIFY, Tata Communications, VodafoneIdea |
+| **Canberra2** | [CDC](https://cdcdatacentres.com.au/about-us/) | 1 | Australia Central 2 | Supported | CDC<br/>Equinix |
+| **Cape Town** | [Teraco CT1](https://www.teraco.co.za/data-centre-locations/cape-town/) | 3 | South Africa West | Supported | BCX<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Global Connect<br/>Teraco<br/>Vodacom |
+| **Chennai** | Tata Communications | 2 | South India | Supported | BSNL<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>Lightstorm<br/>SIFY<br/>Tata Communications<br/>VodafoneIdea |
| **Chennai2** | Airtel | 2 | South India | Supported | Airtel |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks, AT&T Dynamic Exchange, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Equinix, InterCloud, Internet2, Level 3 Communications, Megaport, PacketFabric, PCCW Global Limited, Sprint, Tata Communications, Telia Carrier, Verizon, Vodafone, Zayo |
-| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite, DE-CIX |
-| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | GlobalConnect, Interxion |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks, AT&T Dynamic Exchange, AT&T NetBond, Cologix, Cox Business Cloud Port, Equinix, Intercloud, Internet2, Level 3 Communications, Megaport, Neutrona Networks, Orange, PacketFabric, Telmex Uninet, Telia Carrier, Transtelco, Verizon, Vodafone, Zayo|
-| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite, Megaport, PacketFabric, Zayo |
-| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect, Vodafone |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | 1 | North Central US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Equinix<br/>InterCloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>PCCW Global Limited<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Chicago2** | [CoreSite CH1](https://www.coresite.com/data-center/ch1-chicago-il) | 1 | North Central US | Supported | CoreSite<br/>DE-CIX |
+| **Copenhagen** | [Interxion CPH1](https://www.interxion.com/Locations/copenhagen/) | 1 | n/a | Supported | GlobalConnect<br/>Interxion |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | 1 | n/a | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>Cologix<br/>Cox Business Cloud Port<br/>Equinix<br/>Intercloud<br/>Internet2<br/>Level 3 Communications<br/>Megaport<br/>Neutrona Networks<br/>Orange<br/>PacketFabric<br/>Telmex Uninet<br/>Telia Carrier<br/>Transtelco<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Denver** | [CoreSite DE1](https://www.coresite.com/data-centers/locations/denver/de1) | 1 | West Central US | Supported | CoreSite<br/>Megaport<br/>PacketFabric<br/>Zayo |
+| **Doha** | [MEEZA MV2](https://www.meeza.net/services/data-centre-services/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect<br/>Vodafone |
| **Doha2** | [Ooredoo](https://www.ooredoo.qa/) | 3 | Qatar Central | Supported | Ooredoo Cloud Connect |
| **Dubai** | [PCCS](http://www.pacificcontrols.net/cloudservices/) | 3 | UAE North | Supported | Etisalat UAE |
-| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX, du datamena, Equinix, GBI, Megaport, Orange, Orixcom |
-| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect, Colt, eir, Equinix, GEANT, euNetworks, Interxion, Megaport, Zayo|
-| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion, KPN, Orange |
-| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond, British Telecom, CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Equinix, euNetworks, GBI, GEANT, InterCloud, Interxion, Megaport, NTT Global DataCenters EMEA, Orange, Telia Carrier, T-Systems, Verizon, Zayo |
-| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX, Deutsche Telekom AG, Equinix, InterCloud |
-| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt, Equinix, InterCloud, Megaport, Swisscom |
-| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks, British Telecom, CenturyLink Cloud Connect, Chief Telecom, China Telecom Global, China Unicom, Colt, Equinix, InterCloud, Megaport, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International, China Telecom Global, Deutsche Telekom AG, Equinix, iAdvantage, Megaport, PCCW Global Limited, SingTel, Vodafone |
-| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications, Telin, XL Axiata |
-| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX, British Telecom, Internet Solutions - Cloud Connect, Liquid Telecom, MTN Global Connect, Orange, Teraco, Vodacom |
-| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | DE-CIX, TIME dotCom |
-| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect, Megaport, PacketFabric |
-| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond, Bezeq International, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo |
-| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Ooredoo Cloud Connect, Orange, SES, Sohonet, Telehouse - KDDI, Zayo, Vodafone |
-| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | AT&T Dynamic Exchange, CoreSite, Cloudflare, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
-| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix, PacketFabric |
-| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX, Interxion, Megaport, Telefonica |
-| **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
-| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet, Devoli, Equinix, Megaport, NETSG, NEXTDC, Optus, Orange, Telstra Corporation, TPG Telecom |
-| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange, Claro, C3ntro, Equinix, Megaport, Neutrona Networks |
-| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | Supported | Colt, Equinix, Fastweb, IRIDEOS, Retelit, Vodafone |
-| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | n/a | Supported | Cologix, Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada, CenturyLink Cloud Connect, Cologix, Fibrenoire, Megaport, Telus, Zayo |
-| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL, DE-CIX, Global CloudXchange (GCX), Reliance Jio, Sify, Tata Communications, Verizon |
-| **Mumbai2** | Airtel | 2 | West India | Supported | Airtel, Sify, Orange, Vodafone Idea |
-| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt, DE-CIX, Megaport |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect, Coresite, Crown Castle, DE-CIX, Equinix, InterCloud, Lightpath, Megaport, NTT Communications, Packet, Zayo |
-| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom, Colt, Jisc, Level 3 Communications, Next Generation Data |
-| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO, BBIX, Colt, Equinix, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT SmartConnect, Softbank, Tokai Communications |
-| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported| GlobalConnect, Megaport, Telenor, Telia Carrier |
-| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Interxion, Jaguar Network, Megaport, Orange, Telia Carrier, Zayo, Verizon|
+| **Dubai2** | [du datamena](http://datamena.com/solutions/data-centre) | 3 | UAE North | n/a | DE-CIX<br/>du datamena<br/>Equinix<br/>GBI<br/>Megaport<br/>Orange<br/>Orixcom |
+| **Dublin** | [Equinix DB3](https://www.equinix.com/locations/europe-colocation/ireland-colocation/dublin-data-centers/db3/) | 1 | North Europe | Supported | CenturyLink Cloud Connect<br/>Colt<br/>eir<br/>Equinix<br/>GEANT<br/>euNetworks<br/>Interxion<br/>Megaport<br/>Zayo |
+| **Dublin2** | [Interxion DUB2](https://www.interxion.com/locations/europe/dublin) | 1 | North Europe | Supported | Interxion<br/>KPN<br/>Orange |
+| **Frankfurt** | [Interxion FRA11](https://www.digitalrealty.com/data-centers/emea/frankfurt) | 1 | Germany West Central | Supported | AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Equinix<br/>euNetworks<br/>GBI<br/>GEANT<br/>InterCloud<br/>Interxion<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Orange<br/>Telia Carrier<br/>T-Systems<br/>Verizon<br/>Zayo |
+| **Frankfurt2** | [Equinix FR7](https://www.equinix.com/locations/europe-colocation/germany-colocation/frankfurt-data-centers/fr7/) | 1 | Germany West Central | Supported | DE-CIX<br/>Deutsche Telekom AG<br/>Equinix<br/>InterCloud |
+| **Geneva** | [Equinix GV2](https://www.equinix.com/locations/europe-colocation/switzerland-colocation/geneva-data-centers/gv2/) | 1 | Switzerland West | Supported | Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>Swisscom |
+| **Hong Kong** | [Equinix HK1](https://www.equinix.com/data-centers/asia-pacific-colocation/hong-kong-colocation/hong-kong-data-centers/hk1) | 2 | East Asia | Supported | Aryaka Networks<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Chief Telecom<br/>China Telecom Global<br/>China Unicom<br/>Colt<br/>Equinix<br/>InterCloud<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Hong Kong2** | [iAdvantage MEGA-i](https://www.iadvantage.net/index.php/locations/mega-i) | 2 | East Asia | Supported | China Mobile International<br/>China Telecom Global<br/>Deutsche Telekom AG<br/>Equinix<br/>iAdvantage<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Vodafone |
+| **Jakarta** | [Telin](https://www.telin.net/) | 4 | n/a | Supported | NTT Communications<br/>Telin<br/>XL Axiata |
+| **Johannesburg** | [Teraco JB1](https://www.teraco.co.za/data-centre-locations/johannesburg/#jb1) | 3 | South Africa North | Supported | BCX<br/>British Telecom<br/>Internet Solutions - Cloud Connect<br/>Liquid Telecom<br/>MTN Global Connect<br/>Orange<br/>Teraco<br/>Vodacom |
+| **Kuala Lumpur** | [TIME dotCom Menara AIMS](https://www.time.com.my/enterprise/connectivity/direct-cloud) | 2 | n/a | n/a | DE-CIX<br/>TIME dotCom |
+| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Megaport<br/>PacketFabric |
+| **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond<br/>Bezeq International<br/>British Telecom<br/>CenturyLink<br/>Colt<br/>Equinix<br/>euNetworks<br/>Intelsat<br/>InterCloud<br/>Internet Solutions - Cloud Connect<br/>Interxion<br/>Jisc<br/>Level 3 Communications<br/>Megaport<br/>MTN<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>Tata Communications<br/>Telehouse - KDDI<br/>Telenor<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>GTT<br/>Interxion<br/>IX Reach<br/>JISC<br/>Megaport<br/>NTT Global DataCenters EMEA<br/>Ooredoo Cloud Connect<br/>Orange<br/>SES<br/>Sohonet<br/>Telehouse - KDDI<br/>Zayo<br/>Vodafone |
+| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>CoreSite<br/>Cloudflare<br/>Equinix*<br/>Megaport<br/>Neutrona Networks<br/>NTT<br/>Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix<br/>PacketFabric |
+| **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX<br/>Interxion<br/>Megaport<br/>Telefonica |
+| **Marseille** | [Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt<br/>DE-CIX<br/>GEANT<br/>Interxion<br/>Jaguar Network<br/>Ooredoo Cloud Connect |
+| **Melbourne** | [NextDC M1](https://www.nextdc.com/data-centres/m1-melbourne-data-centre) | 2 | Australia Southeast | Supported | AARNet<br/>Devoli<br/>Equinix<br/>Megaport<br/>NETSG<br/>NEXTDC<br/>Optus<br/>Orange<br/>Telstra Corporation<br/>TPG Telecom |
+| **Miami** | [Equinix MI1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/miami-data-centers/mi1/) | 1 | n/a | Supported | AT&T Dynamic Exchange<br/>Claro<br/>C3ntro<br/>Equinix<br/>Megaport<br/>Neutrona Networks |
+| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | n/a | Supported | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Retelit<br/>Vodafone |
+| **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | n/a | Supported | Cologix<br/>Megaport |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>Telus<br/>Zayo |
+| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon |
+| **Mumbai2** | Airtel | 2 | West India | Supported | Airtel<br/>Sify<br/>Orange<br/>Vodafone Idea |
+| **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt<br/>DE-CIX<br/>Megaport |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | 1 | n/a | Supported | CenturyLink Cloud Connect<br/>Coresite<br/>Crown Castle<br/>DE-CIX<br/>Equinix<br/>InterCloud<br/>Lightpath<br/>Megaport<br/>NTT Communications<br/>Packet<br/>Zayo |
+| **Newport(Wales)** | [Next Generation Data](https://www.nextgenerationdata.co.uk) | 1 | UK West | Supported | British Telecom<br/>Colt<br/>Jisc<br/>Level 3 Communications<br/>Next Generation Data |
+| **Osaka** | [Equinix OS1](https://www.equinix.com/locations/asia-colocation/japan-colocation/osaka-data-centers/os1/) | 2 | Japan West | Supported | AT TOKYO<br/>BBIX<br/>Colt<br/>Equinix<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT SmartConnect<br/>Softbank<br/>Tokai Communications |
+| **Oslo** | [DigiPlex Ulven](https://www.digiplex.com/locations/oslo-datacentre) | 1 | Norway East | Supported | GlobalConnect<br/>Megaport<br/>Telenor<br/>Telia Carrier |
+| **Paris** | [Interxion PAR5](https://www.interxion.com/Locations/paris/) | 1 | France Central | Supported | British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Jaguar Network<br/>Megaport<br/>Orange<br/>Telia Carrier<br/>Zayo<br/>Verizon |
| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix |
-| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix, Megaport, NextDC |
-| **Phoenix** | [EdgeConneX PHX01](https://www.edgeconnex.com/locations/north-america/phoenix-az/) | 1 | West US 3 | Supported | Cox Business Cloud Port, CenturyLink Cloud Connect, DE-CIX, Megaport, Zayo |
-| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
-| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India| Supported | Lightstorm, Tata Communications |
-| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada, Equinix, Megaport, Telus |
-| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Cirion Technologies, Megaport, Transtelco|
-| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | |
-| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Cirion Technologies, Equinix |
-| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect, Megaport, Zayo |
+| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix<br/>Megaport<br/>NextDC |
+| **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | Supported | Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo |
+| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
+| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Lightstorm<br/>Tata Communications |
+| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>Telus |
+| **Queretaro (Mexico)** | [KIO Networks QR01](https://www.kionetworks.com/es-mx/) | 4 | n/a | Supported | Cirion Technologies<br/>Megaport<br/>Transtelco |
+| **Quincy** | [Sabey Datacenter - Building A](https://sabeydatacenters.com/data-center-locations/central-washington-data-centers/quincy-data-center) | 1 | West US 2 | Supported | |
+| **Rio de Janeiro** | [Equinix-RJ2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/rio-de-janeiro-data-centers/rj2/) | 3 | Brazil Southeast | Supported | Cirion Technologies<br/>Equinix |
+| **San Antonio** | [CyrusOne SA1](https://cyrusone.com/locations/texas/san-antonio-texas/) | 1 | South Central US | Supported | CenturyLink Cloud Connect<br/>Megaport<br/>Zayo |
| **Santiago** | [EdgeConnex SCL](https://www.edgeconnex.com/locations/south-america/santiago/) | 3 | n/a | Supported | PitChile |
-| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks, Ascenty Data Centers, British Telecom, Equinix, InterCloud, Level 3 Communications, Neutrona Networks, Orange, RedCLARA, Tata Communications, Telefonica, UOLDIVEO |
-| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers, Tivit |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, PacketFabric, Telus, Zayo |
-| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX, KT, LG CNS, LGUplus, Equinix, Sejong Telecom, SK Telecom |
+| **Sao Paulo** | [Equinix SP2](https://www.equinix.com/locations/americas-colocation/brazil-colocation/sao-paulo-data-centers/sp2/) | 3 | Brazil South | Supported | Aryaka Networks<br/>Ascenty Data Centers<br/>British Telecom<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Neutrona Networks<br/>Orange<br/>RedCLARA<br/>Tata Communications<br/>Telefonica<br/>UOLDIVEO |
+| **Sao Paulo2** | [TIVIT TSM](https://www.tivit.com/en/tivit/) | 3 | Brazil South | Supported | Ascenty Data Centers<br/>Tivit |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | 1 | West US 2 | Supported | Aryaka Networks<br/>CenturyLink Cloud Connect<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>PacketFabric<br/>Telus<br/>Zayo |
+| **Seoul** | [KINX Gasan IDC](https://www.kinx.net/?lang=en) | 2 | Korea Central | Supported | KINX<br/>KT<br/>LG CNS<br/>LGUplus<br/>Equinix<br/>Sejong Telecom<br/>SK Telecom |
| **Seoul2** | [KT IDC](https://www.kt-idc.com/eng/introduce/sub1_4_10.jsp#tab) | 2 | Korea Central | n/a | KT |
-| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks, AT&T Dynamic Exchange, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Colt, Comcast, Coresite, Cox Business Cloud Port, Equinix, InterCloud, Internet2, IX Reach, Packet, PacketFabric, Level 3 Communications, Megaport, Orange, Sprint, Tata Communications, Telia Carrier, Verizon, Vodafone, Zayo |
-| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt, Coresite |
-| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks, AT&T NetBond, British Telecom, China Mobile International, Epsilon Global Communications, Equinix, InterCloud, Level 3 Communications, Megaport, NTT Communications, Orange, PCCW Global Limited, SingTel, Tata Communications, Telstra Corporation, Verizon, Vodafone |
-| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect, China Unicom Global, Colt, DE-CIX, Epsilon Global Communications, Equinix, Megaport, PCCW Global Limited, SingTel, Telehouse - KDDI |
-| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported | GlobalConnect, Megaport, Telenor |
-| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Equinix, GlobalConnect, Interxion, Megaport, Telia Carrier |
-| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet, AT&T NetBond, British Telecom, Devoli, Equinix, Kordia, Megaport, NEXTDC, NTT Communications, Optus, Orange, Spark NZ, Telstra Corporation, TPG Telecom, Verizon, Vocus Group NZ |
-| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport, NETSG, NextDC |
-| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone |
-| **Tel Aviv** | Bezeq International | 2 | n/a | Supported | |
-| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Equinix, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
-| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | NEC, SCSK |
-| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Orange, Telus, Verizon, Zayo |
+| **Silicon Valley** | [Equinix SV1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv1/) | 1 | West US | Supported | Aryaka Networks<br/>AT&T Dynamic Exchange<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Equinix<br/>InterCloud<br/>Internet2<br/>IX Reach<br/>Packet<br/>PacketFabric<br/>Level 3 Communications<br/>Megaport<br/>Orange<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Vodafone<br/>Zayo |
+| **Silicon Valley2** | [Coresite SV7](https://www.coresite.com/data-centers/locations/silicon-valley/sv7) | 1 | West US | Supported | Colt<br/>Coresite |
+| **Singapore** | [Equinix SG1](https://www.equinix.com/data-centers/asia-pacific-colocation/singapore-colocation/singapore-data-center/sg1) | 2 | Southeast Asia | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>China Mobile International<br/>Epsilon Global Communications<br/>Equinix<br/>InterCloud<br/>Level 3 Communications<br/>Megaport<br/>NTT Communications<br/>Orange<br/>PCCW Global Limited<br/>SingTel<br/>Tata Communications<br/>Telstra Corporation<br/>Verizon<br/>Vodafone |
+| **Singapore2** | [Global Switch Tai Seng](https://www.globalswitch.com/locations/singapore-data-centres/) | 2 | Southeast Asia | Supported | CenturyLink Cloud Connect<br/>China Unicom Global<br/>Colt<br/>DE-CIX<br/>Epsilon Global Communications<br/>Equinix<br/>Megaport<br/>PCCW Global Limited<br/>SingTel<br/>Telehouse - KDDI |
+| **Stavanger** | [Green Mountain DC1](https://greenmountain.no/dc1-stavanger/) | 1 | Norway West | Supported | GlobalConnect<br/>Megaport<br/>Telenor |
+| **Stockholm** | [Equinix SK1](https://www.equinix.com/locations/europe-colocation/sweden-colocation/stockholm-data-centers/sk1/) | 1 | Sweden Central | Supported | Equinix<br/>GlobalConnect<br/>Interxion<br/>Megaport<br/>Telia Carrier |
+| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Devoli<br/>Equinix<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ |
+| **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport<br/>NETSG<br/>NextDC |
+| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom<br/>Chunghwa Telecom<br/>FarEasTone |
+| **Tel Aviv** | Bezeq International | 2 | n/a | Supported | |
+| **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO<br/>China Unicom Global<br/>Colt<br/>Equinix<br/>IX Reach<br/>Megaport<br/>PCCW Global Limited<br/>Tokai Communications |
+| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | NEC<br/>SCSK |
+| **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach<br/>Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo |
| **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire |
-| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada, Cologix, Megaport, Telus, Zayo |
-| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | n/a | Supported | Equinix |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/), [Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US, East US 2 | Supported | Aryaka Networks, AT&T NetBond, British Telecom, CenturyLink Cloud Connect, Cologix, Colt, Comcast, Coresite, Cox Business Cloud Port, Crown Castle, Equinix, Internet2, InterCloud, Iron Mountain, IX Reach, Level 3 Communications, Lightpath, Megaport, Neutrona Networks, NTT Communications, Orange, PacketFabric, SES, Sprint, Tata Communications, Telia Carrier, Verizon, Zayo |
-| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | n/a | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo |
-| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
+| **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |
+| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo |
+| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Viasat<br/>Zayo |
+| **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Megaport<br/>Swisscom<br/>Zayo |
### National cloud environments
The following table shows connectivity locations and the service providers for e
Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.

### US Government cloud

| Location | Address | Local Azure regions | ER Direct | Service providers |
-| | | | | |
+|--|--|--|--|--|
| **Atlanta** | [Equinix AT1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/atlanta-data-centers/at1/) | n/a | Supported | Equinix |
-| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond, British Telecom, Equinix, Level 3 Communications, Verizon |
-| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix, Internet2, Megaport, Verizon |
-| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix, CenturyLink Cloud Connect, Verizon |
-| **Phoenix** | [CyrusOne Chandler](https://cyrusone.com/locations/arizona/phoenix-arizona-chandler/) | US Gov Arizona | Supported | AT&T NetBond, CenturyLink Cloud Connect, Megaport |
-| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect, Megaport |
-| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T, Equinix, Level 3 Communications, Verizon |
-| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix, Megaport |
-| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East, US Gov Virginia | Supported | AT&T NetBond, CenturyLink Cloud Connect, Equinix, Level 3 Communications, Megaport, Verizon |
+| **Chicago** | [Equinix CH1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/chicago-data-centers/ch1/) | n/a | Supported | AT&T NetBond<br/>British Telecom<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
+| **Dallas** | [Equinix DA3](https://www.equinix.com/locations/americas-colocation/united-states-colocation/dallas-data-centers/da3/) | n/a | Supported | Equinix<br/>Internet2<br/>Megaport<br/>Verizon |
+| **New York** | [Equinix NY5](https://www.equinix.com/locations/americas-colocation/united-states-colocation/new-york-data-centers/ny5/) | n/a | Supported | Equinix<br/>CenturyLink Cloud Connect<br/>Verizon |
+| **Phoenix** | [CyrusOne Chandler](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | US Gov Arizona | Supported | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Megaport |
+| **San Antonio** | [CyrusOne SA2](https://cyrusone.com/locations/texas/san-antonio-texas-ii/) | US Gov Texas | Supported | CenturyLink Cloud Connect<br/>Megaport |
+| **Silicon Valley** | [Equinix SV4](https://www.equinix.com/locations/americas-colocation/united-states-colocation/silicon-valley-data-centers/sv4/) | n/a | Supported | AT&T<br/>Equinix<br/>Level 3 Communications<br/>Verizon |
+| **Seattle** | [Equinix SE2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/seattle-data-centers/se2/) | n/a | Supported | Equinix<br/>Megaport |
+| **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/) | US DoD East<br/>US Gov Virginia | Supported | AT&T NetBond<br/>CenturyLink Cloud Connect<br/>Equinix<br/>Level 3 Communications<br/>Megaport<br/>Verizon |
### China

| Location | Address | Local Azure regions | ER Direct | Service providers |
-| | | | | |
+|--|--|--|--|--|
| **Beijing** | China Telecom | n/a | Supported | China Telecom |
-| **Beijing2** | GDS | n/a | Supported | China Telecom, China Unicom, GDS |
+| **Beijing2** | GDS | n/a | Supported | China Telecom<br/>China Unicom<br/>GDS |
| **Shanghai** | China Telecom | n/a | Supported | China Telecom |
-| **Shanghai2** | GDS | n/a | Supported | China Telecom, China Unicom, GDS |
+| **Shanghai2** | GDS | n/a | Supported | China Telecom<br/>China Unicom<br/>GDS |
To learn more, see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
If your connectivity provider isn't listed in previous sections, you can still create a connection.
-* Check with your connectivity provider to see if they're connected to any of the exchanges in the table above. You can check the following links to gather more information about services offered by exchange providers. Several connectivity providers are already connected to Ethernet exchanges.
+* Check with your connectivity provider to see if they're connected to any of the exchanges in the table. You can check the following links to gather more information about services offered by exchange providers. Several connectivity providers are already connected to Ethernet exchanges.
* [Cologix](https://www.cologix.com/)
* [CoreSite](https://www.coresite.com/)
* [DE-CIX](https://www.de-cix.net/en/services/microsoft-azure-peering-service)
If you're remote and don't have fiber connectivity or want to explore other conn
* [Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)

| Location | Exchange | Connectivity providers |
-| | | |
-| **Amsterdam** | Equinix, Interxion, Level 3 Communications | BICS, CloudXpress, Eurofiber, Fastweb S.p.A, Gulf Bridge International, Kalaam Telecom Bahrain B.S.C, MainOne, Nianet, POST Telecom Luxembourg, Proximus, RETN, TDC Erhverv, Telecom Italia Sparkle, Telekom Deutschland GmbH, Telia |
-| **Atlanta** | Equinix| Crown Castle
+|--|--|--|
+| **Amsterdam** | Equinix<br/>Interxion<br/>Level 3 Communications | BICS<br/>CloudXpress<br/>Eurofiber<br/>Fastweb S.p.A<br/>Gulf Bridge International<br/>Kalaam Telecom Bahrain B.S.C<br/>MainOne<br/>Nianet<br/>POST Telecom Luxembourg<br/>Proximus<br/>RETN<br/>TDC Erhverv<br/>Telecom Italia Sparkle<br/>Telekom Deutschland GmbH<br/>Telia |
+| **Atlanta** | Equinix | Crown Castle |
| **Cape Town** | Teraco | MTN |
| **Chennai** | Tata Communications | Tata Teleservices |
-| **Chicago** | Equinix| Crown Castle, Spectrum Enterprise, Windstream |
-| **Dallas** | Equinix, Megaport | Axtel, C3ntro Telecom, Cox Business, Crown Castle, Data Foundry, Spectrum Enterprise, Transtelco |
-| **Frankfurt** | Interxion | BICS, Cinia, Equinix, Nianet, QSC AG, Telekom Deutschland GmbH |
+| **Chicago** | Equinix | Crown Castle<br/>Spectrum Enterprise<br/>Windstream |
+| **Dallas** | Equinix<br/>Megaport | Axtel<br/>C3ntro Telecom<br/>Cox Business<br/>Crown Castle<br/>Data Foundry<br/>Spectrum Enterprise<br/>Transtelco |
+| **Frankfurt** | Interxion | BICS<br/>Cinia<br/>Equinix<br/>Nianet<br/>QSC AG<br/>Telekom Deutschland GmbH |
| **Hamburg** | Equinix | Cinia |
-| **Hong Kong SAR** | Equinix | Chief, Macroview Telecom |
+| **Hong Kong SAR** | Equinix | Chief<br/>Macroview Telecom |
| **Johannesburg** | Teraco | MTN |
-| **London** | BICS, Equinix, euNetworks| Bezeq International Ltd., CoreAzure, Epsilon Telecommunications Limited, Exponential E, HSO, NexGen Networks, Proximus, Tamares Telecom, Zain |
-| **Los Angeles** | Equinix |Crown Castle, Spectrum Enterprise, Transtelco |
+| **London** | BICS<br/>Equinix<br/>euNetworks | Bezeq International Ltd.<br/>CoreAzure<br/>Epsilon Telecommunications Limited<br/>Exponential E<br/>HSO<br/>NexGen Networks<br/>Proximus<br/>Tamares Telecom<br/>Zain |
+| **Los Angeles** | Equinix | Crown Castle<br/>Spectrum Enterprise<br/>Transtelco |
| **Madrid** | Level3 | Zertia |
-| **Montreal** | Cologix| Airgate Technologies, Inc. Aptum Technologies, Oncore Cloud Services Inc., Rogers, Zirro |
+| **Montreal** | Cologix | Airgate Technologies, Inc.<br/>Aptum Technologies<br/>Oncore Cloud Services Inc.<br/>Rogers<br/>Zirro |
| **Mumbai** | Tata Communications | Tata Teleservices |
-| **New York** |Equinix, Megaport | Altice Business, Crown Castle, Spectrum Enterprise, Webair |
+| **New York** | Equinix<br/>Megaport | Altice Business<br/>Crown Castle<br/>Spectrum Enterprise<br/>Webair |
| **Paris** | Equinix | Proximus |
| **Quebec City** | Megaport | Fibrenoire |
| **Sao Paulo** | Equinix | Venha Pra Nuvem |
-| **Seattle** |Equinix | Alaska Communications |
-| **Silicon Valley** |Coresite, Equinix | Cox Business, Spectrum Enterprise, Windstream, X2nsat Inc. |
-| **Singapore** |Equinix |1CLOUDSTAR, BICS, CMC Telecom, Epsilon Telecommunications Limited, LGA Telecom, United Information Highway (UIH) |
-| **Slough** | Equinix | HSO|
-| **Sydney** | Megaport | Macquarie Telecom Group|
-| **Tokyo** | Equinix | ARTERIA Networks Corporation, BroadBand Tower, Inc. |
-| **Toronto** | Equinix, Megaport | Airgate Technologies Inc., Beanfield Metroconnect, Aptum Technologies, IVedha Inc, Oncore Cloud Services Inc., Rogers, Thinktel, Zirro|
-| **Washington DC** |Equinix | Altice Business, BICS, Cox Business, Crown Castle, Gtt Communications Inc, Epsilon Telecommunications Limited, Masergy, Windstream |
+| **Seattle** | Equinix | Alaska Communications |
+| **Silicon Valley** | Coresite<br/>Equinix | Cox Business<br/>Spectrum Enterprise<br/>Windstream<br/>X2nsat Inc. |
+| **Singapore** | Equinix | 1CLOUDSTAR<br/>BICS<br/>CMC Telecom<br/>Epsilon Telecommunications Limited<br/>LGA Telecom<br/>United Information Highway (UIH) |
+| **Slough** | Equinix | HSO |
+| **Sydney** | Megaport | Macquarie Telecom Group |
+| **Tokyo** | Equinix | ARTERIA Networks Corporation<br/>BroadBand Tower, Inc. |
+| **Toronto** | Equinix<br/>Megaport | Airgate Technologies Inc.<br/>Beanfield Metroconnect<br/>Aptum Technologies<br/>IVedha Inc<br/>Oncore Cloud Services Inc.<br/>Rogers<br/>Thinktel<br/>Zirro |
+| **Washington DC** | Equinix | Altice Business<br/>BICS<br/>Cox Business<br/>Crown Castle<br/>Gtt Communications Inc<br/>Epsilon Telecommunications Limited<br/>Masergy<br/>Windstream |
## ExpressRoute system integrators

Enabling private connectivity to fit your needs can be challenging, based on the scale of your network. You can work with any of the system integrators listed in the following table to assist you with onboarding to ExpressRoute.

| Continent | System integrators |
-| | |
-| **Asia** |Avanade Inc., OneAs1a |
-| **Australia** | Ensyst, IT Consultancy, MOQdigital, Vigilant.IT |
-| **Europe** |Avanade Inc., Altogee, Bright Skies GmbH, Inframon, MSG Services, New Signature, Nelite, Orange Networks, sol-tec |
-| **North America** |Avanade Inc., Equinix Professional Services, FlexManage, Lightstream, Perficient, Presidio |
-| **South America** |Avanade Inc., Venha Pra Nuvem |
+|--|--|
+| **Asia** | Avanade Inc.<br/>OneAs1a |
+| **Australia** | Ensyst<br/>IT Consultancy<br/>MOQdigital<br/>Vigilant.IT |
+| **Europe** | Avanade Inc.<br/>Altogee<br/>Bright Skies GmbH<br/>Inframon<br/>MSG Services<br/>New Signature<br/>Nelite<br/>Orange Networks<br/>sol-tec |
+| **North America** | Avanade Inc.<br/>Equinix Professional Services<br/>FlexManage<br/>Lightstream<br/>Perficient<br/>Presidio |
+| **South America** | Avanade Inc.<br/>Venha Pra Nuvem |
## Next steps

* For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| Service provider | Microsoft Azure | Microsoft 365 | Locations |
| | | | |
-| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne, Sydney |
-| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2, Mumbai2 |
+| **[AARNet](https://www.aarnet.edu.au/network-and-services/connectivity-services/azure-expressroute)** |Supported |Supported | Melbourne<br/>Sydney |
+| **[Airtel](https://www.airtel.in/business/#/)** | Supported | Supported | Chennai2<br/>Mumbai2 |
| **[AIS](https://business.ais.co.th/solution/en/azure-expressroute.html)** | Supported | Supported | Bangkok |
-| **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Hong Kong SAR, Sao Paulo, Seattle, Silicon Valley, Singapore, Tokyo, Washington DC |
-| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | Supported | Supported | Campinas, Sao Paulo, Sao Paulo2 |
-| **AT&T Dynamic Exchange** | Supported | Supported | Chicago, Dallas, Los Angeles, Miami, Silicon Valley |
-| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, London, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
-| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka, Tokyo2 |
-| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo, Tokyo2 |
-| **[BCX](https://www.bcx.co.za/solutions/connectivity/)** | Supported | Supported | Cape Town, Johannesburg|
-| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | Supported | Supported | Montreal, Toronto, Quebec City, Vancouver |
+| **[Aryaka Networks](https://www.aryaka.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Hong Kong SAR<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC |
+| **[Ascenty Data Centers](https://www.ascenty.com/en/cloud/microsoft-express-route)** | Supported | Supported | Campinas<br/>Sao Paulo<br/>Sao Paulo2 |
+| **AT&T Dynamic Exchange** | Supported | Supported | Chicago<br/>Dallas<br/>Los Angeles<br/>Miami<br/>Silicon Valley |
+| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>London<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
+| **[AT TOKYO](https://www.attokyo.com/connectivity/azure.html)** | Supported | Supported | Osaka<br/>Tokyo2 |
+| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[BCX](https://www.bcx.co.za/solutions/connectivity/)** | Supported | Supported | Cape Town<br/>Johannesburg |
+| **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** | Supported | Supported | Montreal<br/>Toronto<br/>Quebec City<br/>Vancouver |
| **[Bezeq International](https://selfservice.bezeqint.net/english)** | Supported | Supported | London |
-| **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2, London2 |
-| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
-| **BSNL** | Supported | Supported | Chennai, Mumbai |
+| **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2<br/>London2 |
+| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Newport(Wales)<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
+| **BSNL** | Supported | Supported | Chennai<br/>Mumbai |
| **[C3ntro](https://www.c3ntro.com/)** | Supported | Supported | Miami |
-| **CDC** | Supported | Supported | Canberra, Canberra2 |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** | Supported | Supported | Amsterdam2, Chicago, Dallas, Dublin, Frankfurt, Hong Kong, Las Vegas, London, London2, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Singapore2, Tokyo, Toronto, Washington DC, Washington DC2 |
-| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong, Taipei |
-| **China Mobile International** |Supported |Supported | Hong Kong, Hong Kong2, Singapore |
-| **China Telecom Global** |Supported |Supported | Hong Kong, Hong Kong2 |
-| **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** | Supported | Supported | Frankfurt, Hong Kong, Singapore2, Tokyo2 |
+| **CDC** | Supported | Supported | Canberra<br/>Canberra2 |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** | Supported | Supported | Amsterdam2<br/>Chicago<br/>Dallas<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>Las Vegas<br/>London<br/>London2<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Toronto<br/>Washington DC<br/>Washington DC2 |
+| **[Chief Telecom](https://www.chief.com.tw/)** |Supported |Supported | Hong Kong<br/>Taipei |
+| **China Mobile International** |Supported |Supported | Hong Kong<br/>Hong Kong2<br/>Singapore |
+| **China Telecom Global** |Supported |Supported | Hong Kong<br/>Hong Kong2 |
+| **[China Unicom Global](https://cloudbond.chinaunicom.cn/home-en)** | Supported | Supported | Frankfurt<br/>Hong Kong<br/>Singapore2<br/>Tokyo2 |
| **Chunghwa Telecom** |Supported |Supported | Taipei |
| **Claro** |Supported |Supported | Miami |
| **Cloudflare** |Supported |Supported | Los Angeles |
-| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
-| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago, Silicon Valley, Washington DC |
-| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
-| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas, Phoenix, Silicon Valley, Washington DC |
+| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** |Supported |Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Berlin<br/>Chicago<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>London<br/>London2<br/>Marseille<br/>Milan<br/>Munich<br/>Newport<br/>Osaka<br/>Paris<br/>Seoul<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Tokyo2<br/>Washington DC<br/>Zurich |
+| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** |Supported |Supported | Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 |
+| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
| **Crown Castle** |Supported |Supported | New York |
-| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Chicago2, Dallas, Dubai2, Frankfurt, Frankfurt2, Kuala Lumpur, Madrid, Marseille, Mumbai, Munich, New York, Phoenix, Singapore2 |
-| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Bogota, Queretaro, Rio De Janeiro |
+| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Phoenix<br/>Singapore2 |
+| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Bogota<br/>Queretaro<br/>Rio De Janeiro |
| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami |
| **Cloudflare** |Supported |Supported | Los Angeles |
-| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** | Supported | Supported | Chicago, Dallas, Minneapolis, Montreal, Toronto, Vancouver, Washington DC |
-| **[Colt](https://www.colt.net/direct-connect/azure/)** | Supported | Supported | Amsterdam, Amsterdam2, Berlin, Chicago, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong, London, London2, Marseille, Milan, Munich, Newport, Osaka, Paris, Paris2, Seoul, Silicon Valley, Singapore2, Tokyo, Tokyo2, Washington DC, Zurich |
-| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** | Supported | Supported | Chicago, Silicon Valley, Washington DC |
-| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** | Supported | Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
-| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** | Supported | Supported | Dallas, Phoenix, Silicon Valley, Washington DC |
-| **Crown Castle** | Supported | Supported | New York, Washington DC |
-| **[DE-CIX](https://www.de-cix.net/en/services/microsoft-azure-peering-service)** | Supported |Supported | Amsterdam2, Chennai, Chicago2, Dallas, Dubai2, Frankfurt, Frankfurt2, Kuala Lumpur, Madrid, Marseille, Mumbai, Munich, New York, Phoenix, Singapore2 |
-| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland, Melbourne, Sydney |
+| **[Cologix](https://cologix.com/connectivity/cloud/cloud-connect/microsoft-azure/)** | Supported | Supported | Chicago<br/>Dallas<br/>Minneapolis<br/>Montreal<br/>Toronto<br/>Vancouver<br/>Washington DC |
+| **[Colt](https://www.colt.net/direct-connect/azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Berlin<br/>Chicago<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>London2<br/>Marseille<br/>Milan<br/>Munich<br/>Newport<br/>Osaka<br/>Paris<br/>Paris2<br/>Seoul<br/>Silicon Valley<br/>Singapore2<br/>Tokyo<br/>Tokyo2<br/>Washington DC<br/>Zurich |
+| **[Comcast](https://business.comcast.com/landingpage/microsoft-azure)** | Supported | Supported | Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** | Supported | Supported | Chicago<br/>Chicago2<br/>Denver<br/>Los Angeles<br/>New York<br/>Silicon Valley<br/>Silicon Valley2<br/>Washington DC<br/>Washington DC2 |
+| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** | Supported | Supported | Dallas<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
+| **Crown Castle** | Supported | Supported | New York<br/>Washington DC |
+| **[DE-CIX](https://www.de-cix.net/en/services/microsoft-azure-peering-service)** | Supported |Supported | Amsterdam2<br/>Chennai<br/>Chicago2<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Frankfurt2<br/>Kuala Lumpur<br/>Madrid<br/>Marseille<br/>Mumbai<br/>Munich<br/>New York<br/>Phoenix<br/>Singapore2 |
+| **[Devoli](https://devoli.com/expressroute)** | Supported |Supported | Auckland<br/>Melbourne<br/>Sydney |
| **[Deutsche Telekom AG IntraSelect](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported |Supported | Frankfurt |
-| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2, Hong Kong2 |
+| **[Deutsche Telekom AG](https://www.t-systems.com/de/en/cloud-services/managed-platform-services/azure-managed-services/cloudconnect-for-azure)** | Supported |Supported | Frankfurt2<br/>Hong Kong2 |
| **du datamena** |Supported |Supported | Dubai2 |
| **[eir evo](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin |
-| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2, Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Perth, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Washington DC, Warsaw, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
+| **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** | Supported | Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Atlanta<br/>Berlin<br/>Bogota<br/>Canberra2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong SAR<br/>Hong Kong2<br/>London<br/>London2<br/>Los Angeles*<br/>Los Angeles2<br/>Melbourne<br/>Miami<br/>Milan<br/>New York<br/>Osaka<br/>Paris<br/>Paris2<br/>Perth<br/>Quebec City<br/>Rio de Janeiro<br/>Sao Paulo<br/>Seattle<br/>Seoul<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stockholm<br/>Sydney<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Washington DC<br/>Warsaw<br/>Zurich<br/><br/> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai |
-| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London |
+| **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>London |
| **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** | Supported | Supported | Taipei |
| **[Fastweb](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Supported |Supported | Milan |
-| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | Supported | Supported | Montreal, Quebec City, Toronto2 |
-| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | Supported | Supported | Dubai2, Frankfurt |
-| **[GÉANT](https://www.geant.org/Networks)** | Supported | Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, Marseille |
-| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported | Supported | Copenhagen, Oslo, Stavanger, Stockholm |
+| **[Fibrenoire](https://fibrenoire.ca/en/services/cloudextn-2/)** | Supported | Supported | Montreal<br/>Quebec City<br/>Toronto2 |
+| **[GBI](https://www.gbiinc.com/microsoft-azure/)** | Supported | Supported | Dubai2<br/>Frankfurt |
+| **[GÉANT](https://www.geant.org/Networks)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Dublin<br/>Frankfurt<br/>Marseille |
+| **[GlobalConnect](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported | Supported | Copenhagen<br/>Oslo<br/>Stavanger<br/>Stockholm |
| **[GlobalConnect DK](https://www.globalconnect.no/tjenester/nettverk/cloud-access)** | Supported | Supported | Amsterdam |
-| **GTT** |Supported |Supported | Amsterdam, London2, Washington DC |
-| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai, Mumbai |
+| **GTT** |Supported |Supported | Amsterdam<br/>London2<br/>Washington DC |
+| **[Global Cloud Xchange (GCX)](https://globalcloudxchange.com/cloud-platform/cloud-x-fusion/)** | Supported| Supported | Chennai<br/>Mumbai |
| **[iAdvantage](https://www.scx.sunevision.com/)** | Supported | Supported | Hong Kong2 |
-| **Intelsat** | Supported | Supported | London2, Washington DC2 |
-| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam, Chicago, Dallas, Frankfurt, Frankfurt2, Geneva, Hong Kong, London, New York, Paris, Sao Paulo, Silicon Valley, Singapore, Tokyo, Washington DC, Zurich |
-| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** | Supported | Supported | Chicago, Dallas, Silicon Valley, Washington DC |
-| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** | Supported | Supported | Osaka, Tokyo, Tokyo2 |
-| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** | Supported | Supported | Cape Town, Johannesburg, London |
-| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** | Supported | Supported | Amsterdam, Amsterdam2, Copenhagen, Dublin, Dublin2, Frankfurt, London, London2, Madrid, Marseille, Paris, Stockholm, Zurich |
+| **Intelsat** | Supported | Supported | London2<br/>Washington DC2 |
+| **[InterCloud](https://www.intercloud.com/)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Frankfurt2<br/>Geneva<br/>Hong Kong<br/>London<br/>New York<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Tokyo<br/>Washington DC<br/>Zurich |
+| **[Internet2](https://internet2.edu/services/cloud-connect/#service-cloud-connect)** | Supported | Supported | Chicago<br/>Dallas<br/>Silicon Valley<br/>Washington DC |
+| **[Internet Initiative Japan Inc. - IIJ](https://www.iij.ad.jp/en/news/pressrelease/2015/1216-2.html)** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[Internet Solutions - Cloud Connect](https://www.is.co.za/solution/cloud-connect/)** | Supported | Supported | Cape Town<br/>Johannesburg<br/>London |
+| **[Interxion](https://www.interxion.com/why-interxion/colocate-with-the-clouds/Microsoft-Azure/)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Copenhagen<br/>Dublin<br/>Dublin2<br/>Frankfurt<br/>London<br/>London2<br/>Madrid<br/>Marseille<br/>Paris<br/>Stockholm<br/>Zurich |
| **[IRIDEOS](https://irideos.it/)** | Supported | Supported | Milan |
| **Iron Mountain** | Supported |Supported | Washington DC |
-| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**| Supported | Supported | Amsterdam, London2, Silicon Valley, Tokyo2, Toronto, Washington DC |
-| **Jaguar Network** |Supported |Supported | Marseille, Paris |
-| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** | Supported | Supported | London, London2, Newport(Wales) |
-| **KDDI** | Supported | Supported | Osaka, Tokyo, Tokyo2 |
+| **[IX Reach](https://www.ixreach.com/partners/cloud-partners/microsoft-azure/)**| Supported | Supported | Amsterdam<br/>London2<br/>Silicon Valley<br/>Tokyo2<br/>Toronto<br/>Washington DC |
+| **Jaguar Network** |Supported |Supported | Marseille<br/>Paris |
+| **[Jisc](https://www.jisc.ac.uk/microsoft-azure-expressroute)** | Supported | Supported | London<br/>London2<br/>Newport(Wales) |
+| **KDDI** | Supported | Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
| **[KINX](https://www.kinx.net/service/cloudhub/clouds/microsoft_azure_expressroute/?lang=en)** | Supported | Supported | Seoul |
-| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported | Supported | Auckland, Sydney |
-| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam, Dublin2|
-| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul, Seoul2 |
-| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** | Supported | Supported | Amsterdam, Chicago, Dallas, London, Newport (Wales), Sao Paulo, Seattle, Silicon Valley, Singapore, Washington DC |
-| **LG CNS** | Supported | Supported | Busan, Seoul |
-| **Lightpath** | Supported | Supported | New York, Washington DC |
-| **[Lightstorm](https://polarin.lightstorm.net/)** | Supported | Supported | Pune, Chennai |
-| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | Supported | Supported | Cape Town, Johannesburg |
+| **[Kordia](https://www.kordia.co.nz/cloudconnect)** | Supported | Supported | Auckland<br/>Sydney |
+| **[KPN](https://www.kpn.com/zakelijk/cloud/connect.htm)** | Supported | Supported | Amsterdam<br/>Dublin2 |
+| **[KT](https://cloud.kt.com/)** | Supported | Supported | Seoul<br/>Seoul2 |
+| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>London<br/>Newport (Wales)<br/>Sao Paulo<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
+| **LG CNS** | Supported | Supported | Busan<br/>Seoul |
+| **Lightpath** | Supported | Supported | New York<br/>Washington DC |
+| **[Lightstorm](https://polarin.lightstorm.net/)** | Supported | Supported | Pune<br/>Chennai |
+| **[Liquid Intelligent Technologies](https://liquidcloud.africa/connect/)** | Supported | Supported | Cape Town<br/>Johannesburg |
| **[LGUplus](http://www.uplus.co.kr/)** |Supported |Supported | Seoul |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam, Atlanta, Auckland, Chicago, Dallas, Denver, Dubai2, Dublin, Frankfurt, Geneva, Hong Kong, Hong Kong2, Las Vegas, London, London2, Los Angeles, Madrid, Melbourne, Miami, Minneapolis, Montreal, Munich, New York, Osaka, Oslo, Paris, Perth, Phoenix, Quebec City, Queretaro (Mexico), San Antonio, Seattle, Silicon Valley, Singapore, Singapore2, Stavanger, Stockholm, Sydney, Sydney2, Tokyo, Tokyo2 Toronto, Vancouver, Washington DC, Washington DC2, Zurich |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** | Supported | Supported | Amsterdam<br/>Atlanta<br/>Auckland<br/>Chicago<br/>Dallas<br/>Denver<br/>Dubai2<br/>Dublin<br/>Frankfurt<br/>Geneva<br/>Hong Kong<br/>Hong Kong2<br/>Las Vegas<br/>London<br/>London2<br/>Los Angeles<br/>Madrid<br/>Melbourne<br/>Miami<br/>Minneapolis<br/>Montreal<br/>Munich<br/>New York<br/>Osaka<br/>Oslo<br/>Paris<br/>Perth<br/>Phoenix<br/>Quebec City<br/>Queretaro (Mexico)<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Singapore<br/>Singapore2<br/>Stavanger<br/>Stockholm<br/>Sydney<br/>Sydney2<br/>Tokyo<br/>Tokyo2<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich |
| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Supported | Supported | London |
-| **MTN Global Connect** | Supported | Supported | Cape Town, Johannesburg|
+| **MTN Global Connect** | Supported | Supported | Cape Town<br/>Johannesburg|
| **[National Telecom](https://www.nc.ntplc.co.th/cat/category/264/855/CAT+Direct+Cloud+Connect+for+Microsoft+ExpressRoute?lang=en_EN)** | Supported | Supported | Bangkok |
| **NEC** | Supported | Supported | Tokyo3 |
-| **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** | Supported | Supported | Melbourne, Sydney2 |
-| **[Neutrona Networks](https://flo.net/)** | Supported | Supported | Dallas, Los Angeles, Miami, Sao Paulo, Washington DC |
+| **[NETSG](https://www.netsg.co/dc-cloud/cloud-and-dc-interconnect/)** | Supported | Supported | Melbourne<br/>Sydney2 |
+| **[Neutrona Networks](https://flo.net/)** | Supported | Supported | Dallas<br/>Los Angeles<br/>Miami<br/>Sao Paulo<br/>Washington DC |
| **[Next Generation Data](https://vantage-dc-cardiff.co.uk/)** | Supported | Supported | Newport(Wales) |
-| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | Supported | Supported | Melbourne, Perth, Sydney, Sydney2 |
-| **NL-IX** | Supported | Supported | Amsterdam2, Dublin2 |
-| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** | Supported | Supported | Amsterdam2, Madrid |
-| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam, Hong Kong SAR, London, Los Angeles, New York, Osaka, Singapore, Sydney, Tokyo, Washington DC |
-| **NTT Communications India Network Services Pvt Ltd** | Supported | Supported | Chennai, Mumbai |
-| **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta, Osaka, Singapore2, Tokyo, Tokyo2 |
+| **[NEXTDC](https://www.nextdc.com/services/axon-ethernet/microsoft-expressroute)** | Supported | Supported | Melbourne<br/>Perth<br/>Sydney<br/>Sydney2 |
+| **NL-IX** | Supported | Supported | Amsterdam2<br/>Dublin2 |
+| **[NOS](https://www.nos.pt/empresas/corporate/cloud/cloud/Pages/nos-cloud-connect.aspx)** | Supported | Supported | Amsterdam2<br/>Madrid |
+| **[NTT Communications](https://www.ntt.com/en/services/network/virtual-private-network.html)** | Supported | Supported | Amsterdam<br/>Hong Kong SAR<br/>London<br/>Los Angeles<br/>New York<br/>Osaka<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
+| **NTT Communications India Network Services Pvt Ltd** | Supported | Supported | Chennai<br/>Mumbai |
+| **NTT Communications - Flexible InterConnect** |Supported |Supported | Jakarta<br/>Osaka<br/>Singapore2<br/>Tokyo<br/>Tokyo2 |
| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** |Supported |Supported | Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2, Berlin, Frankfurt, London2 |
+| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** |Supported |Supported | Amsterdam2<br/>Berlin<br/>Frankfurt<br/>London2 |
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** |Supported |Supported | Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha, Doha2, London2, Marseille |
-| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne, Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Dallas, Dubai2, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC |
+| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha<br/>Doha2<br/>London2<br/>Marseille |
+| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne<br/>Sydney |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Washington DC |
| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, London, Los Angeles2, Miami, New York, Silicon Valley, Toronto, Washington DC |
-| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore, Singapore2, Tokyo2 |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** |Supported |Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Silicon Valley<br/>Toronto<br/>Washington DC |
+| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** |Supported |Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
| **[NTT EAST](https://business.ntt-east.co.jp/service/crossconnect/)** | Supported | Supported | Tokyo |
-| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** | Supported | Supported | Amsterdam2, Berlin, Frankfurt, London2 |
+| **[NTT Global DataCenters EMEA](https://hello.global.ntt/)** | Supported | Supported | Amsterdam2<br/>Berlin<br/>Frankfurt<br/>London2 |
| **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** | Supported | Supported | Osaka |
-| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha, Doha2, London2, Marseille |
-| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne, Sydney |
-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam, Amsterdam2, Chicago, Dallas, Dubai2, Dublin2 Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Mumbai2, Melbourne, Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
+| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha<br/>Doha2<br/>London2<br/>Marseille |
+| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne<br/>Sydney |
+| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong SAR<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 |
-| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam, Chicago, Dallas, Denver, Las Vegas, London, Los Angeles2, Miami, New York, Seattle, Silicon Valley, Toronto, Washington DC |
-| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago, Hong Kong, Hong Kong2, London, Singapore, Singapore2, Tokyo2 |
+| **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC |
+| **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
| **PitChile** | Supported | Supported | Santiago |
| **[REANNZ](https://www.reannz.co.nz/products-and-services/cloud-connect/)** | Supported | Supported | Auckland |
| **RedCLARA** | Supported | Supported | Sao Paulo |
| **[Retelit](https://www.retelit.it/EN/Home.aspx)** | Supported | Supported | Milan |
| **SCSK** |Supported | Supported | Tokyo3 |
| **[Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms)** | Supported | Supported | Seoul |
-| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2, Washington DC |
-| **[SIFY](https://sifytechnologies.com/)** | Supported | Supported | Chennai, Mumbai2 |
-| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |
+| **[SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)** | Supported | Supported | London2<br/>Washington DC |
+| **[SIFY](https://sifytechnologies.com/)** | Supported | Supported | Chennai<br/>Mumbai2 |
+| **[SingTel](https://www.singtel.com/about-us/news-releases/singtel-provide-secure-private-access-microsoft-azure-public-cloud)** |Supported |Supported | Hong Kong2<br/>Singapore<br/>Singapore2 |
| **[SK Telecom](http://b2b.tworld.co.kr/bizts/solution/solutionTemplate.bs?solutionId=0085)** | Supported | Supported | Seoul |
-| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka, Tokyo, Tokyo2 |
-| **[Sohonet](https://www.sohonet.com/fastlane/)** | Supported | Supported | Los Angeles, London2 |
-| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland, Sydney |
-| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva, Zurich |
-| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam, Chennai, Chicago, Hong Kong SAR, London, Mumbai, Pune, Sao Paulo, Silicon Valley, Singapore, Washington DC |
-| **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam, Sao Paulo, Madrid |
-| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London, London2, Singapore2 |
-| **Telenor** |Supported |Supported | Amsterdam, London, Oslo, Stavanger |
-| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong, London, Oslo, Paris, Seattle, Silicon Valley, Stockholm, Washington DC |
+| **[Softbank](https://www.softbank.jp/biz/cloud/cloud_access/direct_access_for_az/)** |Supported |Supported | Osaka<br/>Tokyo<br/>Tokyo2 |
+| **[Sohonet](https://www.sohonet.com/fastlane/)** | Supported | Supported | Los Angeles<br/>London2 |
+| **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland<br/>Sydney |
+| **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva<br/>Zurich |
+| **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong SAR<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC |
+| **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam<br/>Sao Paulo<br/>Madrid |
+| **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London<br/>London2<br/>Singapore2 |
+| **Telenor** |Supported |Supported | Amsterdam<br/>London<br/>Oslo<br/>Stavanger |
+| **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Oslo<br/>Paris<br/>Seattle<br/>Silicon Valley<br/>Stockholm<br/>Washington DC |
| **[Telin](https://www.telin.net/product/data-connectivity/telin-cloud-exchange)** | Supported | Supported | Jakarta |
| **Telmex Uninet**| Supported | Supported | Dallas |
-| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** | Supported | Supported | Melbourne, Singapore, Sydney |
-| **[Telus](https://www.telus.com)** | Supported | Supported | Montreal, Quebec City, Seattle, Toronto, Vancouver |
-| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** | Supported | Supported | Cape Town, Johannesburg |
+| **[Telstra Corporation](https://www.telstra.com.au/business-enterprise/network-services/networks/cloud-direct-connect/)** | Supported | Supported | Melbourne<br/>Singapore<br/>Sydney |
+| **[Telus](https://www.telus.com)** | Supported | Supported | Montreal<br/>Quebec City<br/>Seattle<br/>Toronto<br/>Vancouver |
+| **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** | Supported | Supported | Cape Town<br/>Johannesburg |
| **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |
| **[Tivit](https://tivit.com/solucoes/public-cloud/)** |Supported |Supported | Sao Paulo2 |
-| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka, Tokyo2 |
-| **TPG Telecom**| Supported | Supported | Melbourne, Sydney |
-| **[Transtelco](https://transtelco.net/enterprise-services/)** | Supported | Supported | Dallas, Queretaro(Mexico City)|
-| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago, Silicon Valley, Washington DC |
+| **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka<br/>Tokyo2 |
+| **TPG Telecom**| Supported | Supported | Melbourne<br/>Sydney |
+| **[Transtelco](https://transtelco.net/enterprise-services/)** | Supported | Supported | Dallas<br/>Queretaro (Mexico City)|
+| **[T-Mobile/Sprint](https://www.t-mobile.com/business/solutions/networking/cloud-networking )** |Supported |Supported | Chicago<br/>Silicon Valley<br/>Washington DC |
| **[T-Systems](https://geschaeftskunden.telekom.de/vernetzung-digitalisierung/produkt/intraselect)** | Supported | Supported | Frankfurt |
| **UOLDIVEO** | Supported | Supported | Sao Paulo |
| **[UIH](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)** | Supported | Supported | Bangkok |
-| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Frankfurt, Hong Kong SAR, London, Mumbai, Paris, Silicon Valley, Singapore, Sydney, Tokyo, Toronto, Washington DC |
+| **[Verizon](https://enterprise.verizon.com/products/network/application-enablement/secure-cloud-interconnect/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong SAR<br/>London<br/>Mumbai<br/>Paris<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
| **[Viasat](https://news.viasat.com/newsroom/press-releases/viasat-introduces-direct-cloud-connect-a-new-service-providing-fast-secure-private-connections-to-business-critical-cloud-services)** | Supported | Supported | Washington DC2 |
-| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland, Sydney |
-| **Vodacom** | Supported | Supported | Cape Town, Johannesburg|
-| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** | Supported | Supported | Amsterdam2, Chicago, Dallas, Hong Kong2, London, London2, Milan, Silicon Valley, Singapore |
-| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai, Mumbai2 |
+| **[Vocus Group NZ](https://www.vocus.co.nz/business/cloud-data-centres)** | Supported | Supported | Auckland<br/>Sydney |
+| **Vodacom** | Supported | Supported | Cape Town<br/>Johannesburg|
+| **[Vodafone](https://www.vodafone.com/business/global-enterprise/global-connectivity/vodafone-ip-vpn-cloud-connect)** | Supported | Supported | Amsterdam2<br/>Chicago<br/>Dallas<br/>Hong Kong2<br/>London<br/>London2<br/>Milan<br/>Silicon Valley<br/>Singapore |
+| **[Vi (Vodafone Idea)](https://www.myvi.in/business/enterprise-solutions/connectivity/vpn-extended-connect)** | Supported | Supported | Chennai<br/>Mumbai2 |
| **Vodafone Qatar** | Supported | Supported | Doha |
| **XL Axiata** | Supported | Supported | Jakarta |
-| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | Supported | Supported | Amsterdam, Chicago, Dallas, Denver, Dublin, Frankfurt, Hong Kong, London, London2, Los Angeles, Montreal, New York, Paris, Phoenix, San Antonio, Seattle, Silicon Valley, Toronto, Vancouver, Washington DC, Washington DC2, Zurich|
+| **[Zayo](https://www.zayo.com/services/packet/cloudlink/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Dublin<br/>Frankfurt<br/>Hong Kong<br/>London<br/>London2<br/>Los Angeles<br/>Montreal<br/>New York<br/>Paris<br/>Phoenix<br/>San Antonio<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Vancouver<br/>Washington DC<br/>Washington DC2<br/>Zurich|
### National cloud environment
Azure national clouds are isolated from each other and from global commercial Az
| Service provider | Microsoft Azure | Office 365 | Locations |
| | | | |
-| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Chicago, Phoenix, Silicon Valley, Washington DC |
-| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |New York, Phoenix, San Antonio, Washington DC |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Atlanta, Chicago, Dallas, New York, Seattle, Silicon Valley, Washington DC |
+| **[AT&T NetBond](https://www.synaptic.att.com/clouduser/html/productdetail/ATT_NetBond.htm)** |Supported |Supported |Chicago<br/>Phoenix<br/>Silicon Valley<br/>Washington DC |
+| **[CenturyLink Cloud Connect](https://www.centurylink.com/cloudconnect)** |Supported |Supported |New York<br/>Phoenix<br/>San Antonio<br/>Washington DC |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported |Atlanta<br/>Chicago<br/>Dallas<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Washington DC |
| **[Internet2](https://internet2.edu/services/microsoft-azure-expressroute/)** |Supported |Supported |Dallas |
-| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported |Chicago, Silicon Valley, Washington DC |
-| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Supported | Chicago, Dallas, San Antonio, Seattle, Washington DC |
-| **[Verizon](http://news.verizonenterprise.com/2014/04/secure-cloud-interconnect-solutions-enterprise/)** |Supported |Supported |Chicago, Dallas, New York, Silicon Valley, Washington DC |
+| **[Level 3 Communications](https://www.lumen.com/en-us/hybrid-it-cloud/cloud-connect.html)** |Supported |Supported |Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[Megaport](https://www.megaport.com/services/microsoft-expressroute/)** |Supported | Supported | Chicago<br/>Dallas<br/>San Antonio<br/>Seattle<br/>Washington DC |
+| **[Verizon](http://news.verizonenterprise.com/2014/04/secure-cloud-interconnect-solutions-enterprise/)** |Supported |Supported |Chicago<br/>Dallas<br/>New York<br/>Silicon Valley<br/>Washington DC |
### China

| Service provider | Microsoft Azure | Office 365 | Locations |
| | | | |
-| **China Telecom** |Supported |Not Supported |Beijing, Beijing2, Shanghai, Shanghai2 |
-| **China Unicom** | Supported | Not Supported | Beijing2, Shanghai2 |
-| **[GDS](http://www.gds-services.com/en/about_2.html)** |Supported |Not Supported |Beijing2, Shanghai2 |
+| **China Telecom** |Supported |Not Supported |Beijing<br/>Beijing2<br/>Shanghai<br/>Shanghai2 |
+| **China Unicom** | Supported | Not Supported | Beijing2<br/>Shanghai2 |
+| **[GDS](http://www.gds-services.com/en/about_2.html)** |Supported |Not Supported |Beijing2<br/>Shanghai2 |
-To learn more, see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
+To learn more, see [ExpressRoute in China](https://www.azure.cn/home/features/expressroute/).
### Germany
If you're remote and don't have fiber connectivity, or you want to explore other
| Connectivity provider | Exchange | Locations |
| | | |
| **[1CLOUDSTAR](https://www.1cloudstar.com/services/cloudconnect-azure-expressroute.html)** | Equinix |Singapore |
-| **[Airgate Technologies, Inc.](https://www.airgate.ca/)** | Equinix, Cologix | Toronto, Montreal |
+| **[Airgate Technologies, Inc.](https://www.airgate.ca/)** | Equinix<br/>Cologix | Toronto<br/>Montreal |
| **[Alaska Communications](https://www.alaskacommunications.com/Business)** |Equinix |Seattle |
-| **[Altice Business](https://lightpathfiber.com/applications/cloud-connect)** |Equinix |New York, Washington DC |
-| **[Aptum Technologies](https://aptum.com/services/cloud/managed-azure/)**| Equinix | Montreal, Toronto |
+| **[Altice Business](https://lightpathfiber.com/applications/cloud-connect)** |Equinix |New York<br/>Washington DC |
+| **[Aptum Technologies](https://aptum.com/services/cloud/managed-azure/)**| Equinix | Montreal<br/>Toronto |
| **[Arteria Networks Corporation](https://www.arteria-net.com/business/service/cloud/sca/)** |Equinix |Tokyo |
| **[Axtel](https://alestra.mx/landing/expressrouteazure/)** |Equinix |Dallas|
| **[Beanfield Metroconnect](https://www.beanfield.com/business/cloud-exchange)** |Megaport |Toronto|
| **[Bezeq International Ltd.](https://www.bezeqint.net/english)** | euNetworks | London |
-| **[BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/)** | Equinix | Amsterdam, Frankfurt, London, Singapore, Washington DC |
+| **[BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/)** | Equinix | Amsterdam<br/>Frankfurt<br/>London<br/>Singapore<br/>Washington DC |
| **[BroadBand Tower, Inc.](https://www.bbtower.co.jp/product-service/network/)** | Equinix | Tokyo |
-| **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix, Megaport | Dallas |
+| **[C3ntro Telecom](https://www.c3ntro.com/)** | Equinix<br/>Megaport | Dallas |
| **[Chief](https://www.chief.com.tw/)** | Equinix | Hong Kong SAR |
-| **[Cinia](https://www.cinia.fi/palvelutiedotteet)** | Equinix, Megaport | Frankfurt, Hamburg |
+| **[Cinia](https://www.cinia.fi/palvelutiedotteet)** | Equinix<br/>Megaport | Frankfurt<br/>Hamburg |
| **CloudXpress** | Equinix | Amsterdam |
| **[CMC Telecom](https://cmctelecom.vn/san-pham/value-added-service-and-it/cmc-telecom-cloud-express-en/)** | Equinix | Singapore |
| **[CoreAzure](https://www.coreazure.com/)**| Equinix | London |
-| **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas, Silicon Valley, Washington DC |
-| **[Crown Castle](https://fiber.crowncastle.com/solutions/added/cloud-connect)**| Equinix | Atlanta, Chicago, Dallas, Los Angeles, New York, Washington DC |
+| **[Cox Business](https://www.cox.com/business/networking/cloud-connectivity.html)**| Equinix | Dallas<br/>Silicon Valley<br/>Washington DC |
+| **[Crown Castle](https://fiber.crowncastle.com/solutions/added/cloud-connect)**| Equinix | Atlanta<br/>Chicago<br/>Dallas<br/>Los Angeles<br/>New York<br/>Washington DC |
| **[Data Foundry](https://www.datafoundry.com/services/cloud-connect)** | Megaport | Dallas |
-| **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London, Singapore, Washington DC |
+| **[Epsilon Telecommunications Limited](https://www.epsilontel.com/solutions/cloud-connect/)** | Equinix | London<br/>Singapore<br/>Washington DC |
| **[Eurofiber](https://eurofiber.nl/microsoft-azure/)** | Equinix | Amsterdam |
| **[Exponential E](https://www.exponential-e.com/services/connectivity-services/)** | Equinix | London |
| **[Fastweb S.p.A](https://www.fastweb.it/grandi-aziende/dati-voce/scheda-prodotto/fast-company/)** | Equinix | Amsterdam |
If you're remote and don't have fiber connectivity, or you want to explore other
| **[FPT Telecom International](https://cloudconnect.vn/en)** |Equinix |Singapore|
| **[Gtt Communications Inc](https://www.gtt.net)** |Equinix | Washington DC |
| **[Gulf Bridge International](https://gbiinc.com/)** | Equinix | Amsterdam |
-| **[HSO](https://www.hso.co.uk/products/cloud-direct)** |Equinix | London, Slough |
+| **[HSO](https://www.hso.co.uk/products/cloud-direct)** |Equinix | London<br/>Slough |
| **[IVedha Inc](https://ivedha.com/cloud-services)**| Equinix | Toronto |
| **[Kaalam Telecom Bahrain B.S.C](https://kalaam-telecom.com/)**| Level 3 Communications |Amsterdam |
| **LGA Telecom** |Equinix |Singapore|
If you're remote and don't have fiber connectivity, or you want to explore other
| **[Macquarie Telecom Group](https://macquariegovernment.com/secure-cloud/secure-cloud-exchange/)** | Megaport | Sydney |
| **[MainOne](https://www.mainone.net/services/connectivity/cloud-connect/)** |Equinix | Amsterdam |
| **[Masergy](https://www.masergy.com/sd-wan/multi-cloud-connectivity)** | Equinix | Washington DC |
-| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town, Johannesburg |
+| **[MTN](https://www.mtnbusiness.co.za/en/Cloud-Solutions/Pages/microsoft-express-route.aspx)** | Teraco | Cape Town<br/>Johannesburg |
| **[NexGen Networks](https://www.nexgen-net.com/nexgen-networks-direct-connect-microsoft-azure-expressroute.html)** | Interxion | London |
-| **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam, Frankfurt |
-| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Montreal, Toronto |
+| **[Nianet](https://www.globalconnect.dk/)** |Equinix | Amsterdam<br/>Frankfurt |
+| **[Oncore Cloud Service Inc](https://www.oncore.cloud/services/ue-for-expressroute)**| Equinix | Montreal<br/>Toronto |
| **[POST Telecom Luxembourg](https://business.post.lu/grandes-entreprises/telecom-ict/telecom)**| Equinix | Amsterdam |
-| **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**| Equinix | Amsterdam, Dublin, London, Paris |
+| **[Proximus](https://www.proximus.be/en/id_b_cl_proximus_external_cloud_connect/companies-and-public-sector/discover/magazines/expert-blog/proximus-external-cloud-connect.html)**| Equinix | Amsterdam<br/>Dublin<br/>London<br/>Paris |
| **[QSC AG](https://www2.qbeyond.de/en/)** |Interxion | Frankfurt |
| **[RETN](https://retn.net/products/cloud-connect)** | Equinix | Amsterdam |
-| **Rogers** | Cologix, Equinix | Montreal, Toronto |
-| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/internet-networking/wan/cloud-connect.html)** | Equinix | Chicago, Dallas, Los Angeles, New York, Silicon Valley |
+| **Rogers** | Cologix<br/>Equinix | Montreal<br/>Toronto |
+| **[Spectrum Enterprise](https://enterprise.spectrum.com/services/internet-networking/wan/cloud-connect.html)** | Equinix | Chicago<br/>Dallas<br/>Los Angeles<br/>New York<br/>Silicon Valley |
| **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London |
-| **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai, Mumbai |
+| **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai<br/>Mumbai |
| **[TDC Erhverv](https://tdc.dk/Produkter/cloudaccessplus)** | Equinix | Amsterdam |
| **[Telecom Italia Sparkle](https://www.tisparkle.com/our-platform/corporate-platform/sparkle-cloud-connect#catalogue)**| Equinix | Amsterdam |
-| **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam, Frankfurt |
+| **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam<br/>Frankfurt |
| **[Telia](https://www.telia.se/foretag/losningar/produkter-tjanster/datanet)** | Equinix | Amsterdam |
| **[ThinkTel](https://www.thinktel.ca/services/agile-ix-data/expressroute/)** | Equinix | Toronto |
| **[United Information Highway (UIH)](https://www.uih.co.th/en/network-solutions/global-network/cloud-direct-for-microsoft-azure-expressroute)**| Equinix | Singapore |
| **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | Equinix | Sao Paulo |
| **[Webair](https://opti9tech.com/partners/)**| Megaport | New York |
-| **[Windstream](https://www.windstreamenterprise.com/solutions/)**| Equinix | Chicago, Silicon Valley, Washington DC |
-| **[X2nsat Inc.](https://x2n.com/expressroute/)** |Coresite |Silicon Valley, Silicon Valley 2|
+| **[Windstream](https://www.windstreamenterprise.com/solutions/)**| Equinix | Chicago<br/>Silicon Valley<br/>Washington DC |
+| **[X2nsat Inc.](https://x2n.com/expressroute/)** |Coresite |Silicon Valley<br/>Silicon Valley 2|
| **Zain** |Equinix |London|
| **[Zertia](https://www.zertia.es)**| Level 3 | Madrid |
-| **Zirro**| Cologix, Equinix | Montreal, Toronto |
+| **Zirro**| Cologix<br/>Equinix | Montreal<br/>Toronto |
## Connectivity through datacenter providers

| Provider | Exchange |
| | |
-| **[CyrusOne](https://www.cyrusone.com/cloud-solutions/microsoft-azure)** | Megaport, PacketFabric |
-| **[Cyxtera](https://www.cyxtera.com/data-center-services/interconnection)** | Megaport, PacketFabric |
+| **[CyrusOne](https://www.cyrusone.com/cloud-solutions/microsoft-azure)** | Megaport<br/>PacketFabric |
+| **[Cyxtera](https://www.cyxtera.com/data-center-services/interconnection)** | Megaport<br/>PacketFabric |
| **[Databank](https://www.databank.com/platforms/connectivity/cloud-direct-connect/)** | Megaport |
| **[DataFoundry](https://www.datafoundry.com/services/cloud-connect/)** | Megaport |
-| **[Digital Realty](https://www.digitalrealty.com/services/interconnection/service-exchange/)** | IX Reach, Megaport PacketFabric |
-| **[EdgeConnex](https://www.edgeconnex.com/services/edge-data-centers-proximity-matters/)** | Megaport, PacketFabric |
-| **[Flexential](https://www.flexential.com/connectivity/cloud-connect-microsoft-azure-expressroute)** | IX Reach, Megaport, PacketFabric |
-| **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud )** | Megaport, PacketFabric |
+| **[Digital Realty](https://www.digitalrealty.com/services/interconnection/service-exchange/)** | IX Reach<br/>Megaport<br/>PacketFabric |
+| **[EdgeConnex](https://www.edgeconnex.com/services/edge-data-centers-proximity-matters/)** | Megaport<br/>PacketFabric |
+| **[Flexential](https://www.flexential.com/connectivity/cloud-connect-microsoft-azure-expressroute)** | IX Reach<br/>Megaport<br/>PacketFabric |
+| **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud )** | Megaport<br/>PacketFabric |
| **[Stream Data Centers](https://www.streamdatacenters.com/products-services/network-cloud/)** | Megaport |
-| **RagingWire Data Centers** | IX Reach, Megaport, PacketFabric |
+| **RagingWire Data Centers** | IX Reach<br/>Megaport<br/>PacketFabric |
| **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach |
-| **vXchnge** | IX Reach, Megaport |
+| **vXchnge** | IX Reach<br/>Megaport |
## Connectivity through National Research and Education Networks (NREN)
Enabling private connectivity to fit your needs can be challenging, based on the
| System integrator | Continent |
| | |
| **[Altogee](https://altogee.be/diensten/express-route/)** | Europe |
-| **[Avanade Inc.](https://www.avanade.com/)** | Asia, Europe, North America, South America |
+| **[Avanade Inc.](https://www.avanade.com/)** | Asia<br/>Europe<br/>North America<br/>South America |
| **[Bright Skies GmbH](https://www.rackspace.com/bright-skies)** | Europe |
| **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Australia |
| **[Equinix Professional Services](https://www.equinix.com/services/consulting/)** | North America |
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
This quickstart describes how to use Bicep to create an Azure Front Door Standard/Premium with a Web App as origin.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]

## Prerequisites
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure CLI. You'll create this profile using two Web Apps as your origin, and add a WAF security policy. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]

[!INCLUDE [azure-cli-prepare-your-environment](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
frontdoor Create Front Door Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-portal.md
In this quickstart, you'll learn how to create an Azure Front Door profile using
With *Custom create*, you deploy two App services. Then, you create the Azure Front Door profile using the two App services as your origin. Lastly, you'll verify connectivity to your App services using the Azure Front Door frontend hostname.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+
## Prerequisites

An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
frontdoor Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-powershell.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure PowerShell. You'll create this profile using two Web Apps as your origin. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
This quickstart describes how to use Terraform to create a Front Door profile to set up high availability for a web endpoint.
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
+
The steps in this article were tested with the following Terraform and Terraform provider versions:

- [Terraform v1.3.2](https://releases.hashicorp.com/terraform/)
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
Previously updated : 02/16/2023 Last updated : 06/14/2023 zone_pivot_groups: front-door-tiers
zone_pivot_groups: front-door-tiers
Azure Front Door is a modern content delivery network (CDN), with dynamic site acceleration and load balancing capabilities. When caching is configured on your route, the edge site that receives each request checks its cache for a valid response. Caching helps to reduce the amount of traffic sent to your origin server. If no cached response is available, the request is forwarded to the origin.
-Each Front Door edge site manages its own cache, and requests might be served by different edge sites. As a result, you might still see some traffic reach your origin, even if you served cached responses.
+Each Front Door edge site manages its own cache, and requests might get served by different edge sites. As a result, you might still see some traffic reach your origin, even if you served cached responses.
+
+Caching can significantly decrease latency and reduce the load on origin servers. However, not all types of traffic benefit from caching. Static assets such as images, CSS, and JavaScript files are ideal for caching. Dynamic assets, such as authenticated API endpoints, shouldn't be cached, to prevent the leakage of personal information. It's recommended to have separate routes for static and dynamic assets, with caching disabled for the latter.
+
+> [!WARNING]
+> Before you enable caching, thoroughly review the public documentation and test all possible scenarios. As noted previously, a misconfiguration can inadvertently cache user-specific data and serve it to multiple users, resulting in privacy incidents.
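One practical way to keep static and dynamic assets separable, as recommended above, is to have the origin emit distinct `Cache-Control` response headers per route. The sketch below is illustrative only (the path prefixes and the `/api/` convention are assumptions, not part of any Front Door API); it shows origin-side logic that marks static assets cacheable and user-specific responses non-cacheable.

```python
# Hypothetical origin-side helper: choose a Cache-Control header per request.
# The path prefixes below are assumptions for illustration.

STATIC_PREFIXES = ("/assets/", "/img/", "/css/", "/js/")

def cache_control_for(path: str, authenticated: bool) -> str:
    """Return a Cache-Control header value for an origin response."""
    if authenticated or path.startswith("/api/"):
        # Never let an edge cache store user-specific responses.
        return "private, no-store"
    if path.startswith(STATIC_PREFIXES):
        # Static assets are safe to cache at the edge for an hour.
        return "public, max-age=3600"
    # Conservative default: cacheable, but revalidate on every request.
    return "public, no-cache"

print(cache_control_for("/assets/logo.png", authenticated=False))
print(cache_control_for("/api/profile", authenticated=True))
```

Pairing this header policy with separate Front Door routes (caching enabled only on the static route) gives defense in depth against the misconfiguration the warning describes.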
## Request methods
Only requests that use the `GET` request method are cacheable. All other request
## Delivery of large files
-Azure Front Door delivers large files without a cap on file size. If caching is enabled, Front Door uses a technique called *object chunking*. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After receiving a full file request or byte-range file request, the Azure Front Door environment requests the file from the origin in chunks of 8 MB.
+Azure Front Door delivers large files without a cap on file size. If caching is enabled, Front Door uses a technique called *object chunking*. When a large file is requested, Front Door retrieves smaller pieces of the file from the origin. After Front Door receives a full file request or byte-range file request, the Front Door environment requests the file from the origin in chunks of 8 MB.
-After the chunk arrives at the Azure Front Door environment, it's cached and immediately served to the user. Front Door then pre-fetches the next chunk in parallel. This pre-fetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection. For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
+After the chunk arrives at the Azure Front Door environment, it's cached and immediately served to the user. Front Door then prefetches the next chunk in parallel. This prefetch ensures that the content stays one chunk ahead of the user, which reduces latency. This process continues until the entire file gets downloaded (if requested) or the client closes the connection. For more information on the byte-range request, read [RFC 7233](https://www.rfc-editor.org/info/rfc7233).
-Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Subsequent requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, pre-fetching is used to request chunks from the origin.
+Front Door caches any chunks as they're received so the entire file doesn't need to be cached on the Front Door cache. Subsequent requests for the file or byte ranges are served from the cache. If the chunks aren't all cached, prefetching is used to request chunks from the origin.
This optimization relies on the origin's ability to support byte-range requests. If the origin doesn't support byte-range requests, or if it doesn't handle range requests correctly, then this optimization isn't effective.
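The chunking scheme described above can be illustrated by computing the byte ranges of the successive origin requests. This is a sketch of the arithmetic only, not Front Door's internal code.

```python
# Sketch of object chunking: splitting a requested byte range into the
# 8 MB origin fetches the article describes.

CHUNK = 8 * 1024 * 1024  # 8 MB

def chunk_ranges(start: int, end: int):
    """Yield inclusive (first, last) byte positions for each 8 MB
    origin request covering the range [start, end]."""
    pos = start
    while pos <= end:
        yield (pos, min(pos + CHUNK - 1, end))
        pos += CHUNK
```

For a 20 MB file, this produces three origin requests: two full 8 MB chunks and a final 4 MB chunk.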
When your origin responds to a request with a `Range` header, it must respond in
> [!TIP]
> If your origin compresses the response, ensure that the `Content-Range` header value matches the actual length of the compressed response.

-- **Return a non-ranged response.** If your origin can't handle range requests, it can ignore the `Range` header and return a non-ranged response. Ensure that the origin returns a response status code other than 206. For example, the origin might return a 200 OK response.
+- **Return a non-ranged response.** If your origin can't handle range requests, it can ignore the `Range` header and return a nonranged response. Ensure that the origin returns a response status code other than 206. For example, the origin might return a 200 OK response.
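A consistency check like the one the guidance implies can be sketched as follows: a 206 response is only valid when its `Content-Range` header parses per RFC 7233 and matches the body length. This is an illustrative validator, not Front Door's actual logic.

```python
# Sketch: checking that a 206 (Partial Content) response's
# Content-Range header ("bytes first-last/total") is consistent
# with the body that was actually returned.

def valid_partial_response(status: int, content_range: str,
                           body_len: int) -> bool:
    """Accept a 206 only if Content-Range parses and matches body_len."""
    if status != 206:
        return False
    if not content_range.startswith("bytes "):
        return False
    span, _, _total = content_range[6:].partition("/")
    first, _, last = span.partition("-")
    try:
        first, last = int(first), int(last)
    except ValueError:
        return False
    return body_len == last - first + 1
```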
## File compression
Cache purges on the Front Door are case-insensitive. Additionally, they're query
## Cache expiration
-The following order of headers is used to determine how long an item will be stored in our cache:
+The following order of headers is used to determine how long an item gets stored in our cache:
1. `Cache-Control: s-maxage=<seconds>`
1. `Cache-Control: max-age=<seconds>`
1. `Expires: <http-date>`
-Some `Cache-Control` response header values indicate that the response isn't cacheable. These values include `private`, `no-cache`, and `no-store`. Front Door honors these header values and won't cache the responses, even if you override the caching behavior by using the Rules Engine.
+Some `Cache-Control` response header values indicate that the response isn't cacheable. These values include `private`, `no-cache`, and `no-store`. Front Door honors these header values and doesn't cache the responses, even if you override the caching behavior by using the Rules Engine.
-If the `Cache-Control` header isn't present on the response from the origin, by default Front Door will randomly determine a cache duration between one and three days.
+If the `Cache-Control` header isn't present on the response from the origin, by default Front Door randomly determines a cache duration between one and three days.
> [!NOTE]
> Cache expiration can't be greater than **366 days**.
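The header precedence, the non-cacheable directives, and the one-to-three-day default can be sketched together as a single TTL-selection function. This is a minimal illustration of the rules as stated, not Front Door's implementation.

```python
# Sketch: choosing a cache TTL (seconds) using the precedence the
# article lists: s-maxage, then max-age, then Expires, then a random
# one-to-three-day default. private/no-cache/no-store are uncacheable,
# and expiration is capped at 366 days.

import random
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

MAX_TTL = 366 * 24 * 3600  # cache expiration can't exceed 366 days

def cache_ttl(cache_control: str = "", expires: str = ""):
    """Return a TTL in seconds, or None if the response isn't cacheable."""
    directives = [d.strip() for d in cache_control.lower().split(",") if d]
    if any(d in ("private", "no-cache", "no-store") for d in directives):
        return None
    for key in ("s-maxage", "max-age"):
        for d in directives:
            if d.startswith(key + "="):
                return min(int(d.split("=", 1)[1]), MAX_TTL)
    if expires:
        delta = parsedate_to_datetime(expires) - datetime.now(timezone.utc)
        return min(max(int(delta.total_seconds()), 0), MAX_TTL)
    return random.randint(1, 3) * 24 * 3600
```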
If the `Cache-Control` header isn't present on the response from the origin, by
## Request headers
-The following request headers won't be forwarded to the origin when caching is enabled:
+The following request headers don't get forwarded to the origin when caching is enabled:
- `Content-Length`
- `Transfer-Encoding`
The following request headers won't be forwarded to the origin when caching is e
## Response headers
-If the origin response is cacheable, then the `Set-Cookie` header is removed before the response is sent to the client. If an origin response isn't cacheable, Front Door doesn't strip the header. For example, if the origin response includes a `Cache-Control` header with a `max-age` value, this indicates to Front Door that the response is cacheable, and the `Set-Cookie` header is stripped.
+If the origin response is cacheable, then the `Set-Cookie` header is removed before the response is sent to the client. If an origin response isn't cacheable, Front Door doesn't strip the header. For example, if the origin response includes a `Cache-Control` header with a `max-age` value, that value indicates to Front Door that the response is cacheable, and the `Set-Cookie` header is stripped.
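The `Set-Cookie` stripping rule can be sketched as a small header filter; this is illustrative only and assumes cacheability has already been determined.

```python
# Sketch: strip Set-Cookie from responses that are going to be cached,
# and leave uncacheable responses untouched, as the article describes.

def prepare_response_headers(headers: dict, cacheable: bool) -> dict:
    """Return response headers, without Set-Cookie when cacheable."""
    if not cacheable:
        return dict(headers)
    return {k: v for k, v in headers.items()
            if k.lower() != "set-cookie"}
```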
In addition, Front Door attaches the `X-Cache` header to all responses. The `X-Cache` response header includes one of the following values:

-- `TCP_HIT` or `TCP_REMOTE_HIT`: The first 8 MB chunk of the response is a cache hit, and the content is served from the Front Door cache.
-- `TCP_MISS`: The first 8 MB chunk of the response is a cache miss, and the content is fetched from the origin.
+- `TCP_HIT` or `TCP_REMOTE_HIT`: The first 8-MB chunk of the response is a cache hit, and the content is served from the Front Door cache.
+- `TCP_MISS`: The first 8-MB chunk of the response is a cache miss, and the content is fetched from the origin.
- `PRIVATE_NOSTORE`: Request can't be cached because the *Cache-Control* response header is set to either *private* or *no-store*.
- `CONFIG_NOCACHE`: Request is configured to not cache in the Front Door profile.
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
* **When caching is enabled**, the cache behavior is different depending on the cache behavior value applied by the Rules Engine:
- * **Honor origin**: Azure Front Door will always honor origin response header directive. If the origin directive is missing, Azure Front Door will cache contents anywhere from one to three days.
- * **Override always**: Azure Front Door will always override with the cache duration, meaning that it will cache the contents for the cache duration ignoring the values from origin response directives. This behavior will only be applied if the response is cacheable.
- * **Override if origin missing**: If the origin doesn't return caching TTL values, Azure Front Door will use the specified cache duration. This behavior will only be applied if the response is cacheable.
+ * **Honor origin**: Azure Front Door always honors origin response header directive. If the origin directive is missing, Azure Front Door caches contents anywhere from one to three days.
+ * **Override always**: Azure Front Door always overrides with the cache duration, meaning that it caches the contents for the cache duration ignoring the values from origin response directives. This behavior only applies if the response is cacheable.
+ * **Override if origin missing**: If the origin doesn't return caching TTL values, Azure Front Door uses the specified cache duration. This behavior only applies if the response is cacheable.
> [!NOTE]
> * Azure Front Door makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Front Door might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your origins are offline.
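The three Rules Engine cache behaviors can be sketched as a function that combines an origin-supplied TTL with a rule-configured duration. This is a simplified illustration of the rules as described, assuming the response is otherwise cacheable; the behavior names are shorthand, not Front Door API values.

```python
# Sketch of the three cache-behavior options: honor origin,
# override always, and override if origin missing. origin_ttl is None
# when the origin sent no caching directive (Front Door then falls
# back to its random one-to-three-day default).

def effective_ttl(behavior: str, origin_ttl, rule_ttl):
    """Return the TTL (seconds) a given cache behavior would apply."""
    if behavior == "honor_origin":
        return origin_ttl
    if behavior == "override_always":
        return rule_ttl
    if behavior == "override_if_origin_missing":
        return rule_ttl if origin_ttl is None else origin_ttl
    raise ValueError(f"unknown behavior: {behavior}")
```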
Cache behavior and duration can be configured in Rules Engine. Rules Engine cach
::: zone pivot="front-door-classic"
-Cache behavior and duration can be configured in both the Front Door designer routing rule and in Rules Engine. Rules Engine caching configuration will always override the Front Door designer routing rule configuration.
+Cache behavior and duration can be configured in both the Front Door designer routing rule and in Rules Engine. Rules Engine caching configuration always overrides the Front Door designer routing rule configuration.
* **When caching is disabled**, Azure Front Door (classic) doesn't cache the response contents, irrespective of origin response directives.
* **When caching is enabled**, the cache behavior is different for different values of *Use cache default duration*.
- * When *Use cache default duration* is set to **Yes**, Azure Front Door (classic) will always honor origin response header directive. If the origin directive is missing, Front Door will cache contents anywhere from one to three days.
- * When *Use cache default duration* is set to **No**, Azure Front Door (classic) will always override with the *cache duration* (required fields), meaning that it will cache the contents for the cache duration ignoring the values from origin response directives.
+ * When *Use cache default duration* is set to **Yes**, Azure Front Door (classic) always honors the origin response header directive. If the origin directive is missing, Front Door caches contents anywhere from one to three days.
+ * When *Use cache default duration* is set to **No**, Azure Front Door (classic) always overrides with the *cache duration* (required fields), meaning that it caches the contents for the cache duration, ignoring the values from origin response directives.
> [!NOTE]
> * Azure Front Door (classic) makes no guarantees about the amount of time that the content is stored in the cache. Cached content may be removed from the edge cache before the content expiration if the content is not frequently used. Azure Front Door (classic) might be able to serve data from the cache even if the cached data has expired. This behavior can help your site to remain partially available when your origins are offline.
frontdoor Front Door Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-overview.md
Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) th
:::image type="content" source="./media/overview/front-door-overview.png" alt-text="Diagram of Azure Front Door routing user traffic to endpoints." lightbox="./media/overview/front-door-overview-expanded.png":::
+> [!NOTE]
+> For web workloads, we highly recommend utilizing [**Azure DDoS protection**](../ddos-protection/ddos-protection-overview.md) and a [**web application firewall**](../web-application-firewall/overview.md) to safeguard against emerging DDoS attacks. Another option is to employ [**Azure Front Door**](../frontdoor/web-application-firewall.md) along with a web application firewall. Azure Front Door offers platform-level [**protection against network-level DDoS attacks**](../frontdoor/front-door-ddos.md).
## Why use Azure Front Door?
frontdoor How To Configure Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/how-to-configure-caching.md
Before you can create an Azure Front Door endpoint with Front Door manager, you
To create an Azure Front Door profile and endpoint, see [Create an Azure Front Door profile](create-front-door-portal.md).
+Caching can significantly decrease latency and reduce the load on origin servers. However, not all types of traffic benefit from caching. Static assets such as images, CSS, and JavaScript files are ideal for caching. Dynamic assets, such as authenticated API endpoints, shouldn't be cached to prevent the leakage of personal information. We recommend using separate routes for static and dynamic assets, with caching disabled for the latter.
+
+> [!WARNING]
+> Before you enable caching, thoroughly review the caching documentation and test all possible scenarios. As noted previously, a misconfiguration can inadvertently cache user-specific data that is then shared among multiple users, resulting in privacy incidents.
+
## Configure caching by using the Azure portal

1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true) and navigate to your Azure Front Door profile.
governance NZ_ISM_Restricted_V3_5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/NZ_ISM_Restricted_v3_5.md
Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 02/14/2023 Last updated : 06/12/2023
For more information about this compliance standard, see
_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
-The following mappings are to the **NZ ISM Restricted v3.5** controls. Use the
-navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+The following mappings are to the **NZ ISM Restricted v3.5** controls. Many of the controls
are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. Then, find and select the **New Zealand ISM Restricted v3.5** Regulatory Compliance built-in
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
+|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
+|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
-|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
-|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
-|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
-|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
-|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) |
+|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) |
+|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |
|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) |

### 16.5.10 Authentication
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
-|[Audit usage of custom RBAC rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
+|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
|[Log Analytics agent should be installed on your virtual machine for Azure Security Center monitoring](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4fe33eb-e377-4efb-ab31-0784311bc499) |This policy audits any Windows/Linux virtual machines (VMs) if the Log Analytics agent is not installed which Security Center uses to monitor for security vulnerabilities and threats |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_InstallLaAgentOnVm.json) |
initiative definition.
|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes; when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
-|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable Logs so that activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server' auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |

### 16.6.12 Event log protection
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
### 16.1.46 Suspension of access
initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Deprecated accounts should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b1cbf55-e8b6-442f-ba4c-7