Updates from: 01/10/2022 02:04:54
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
In a PowerShell window, run the following command, providing the script and user
To disable the updated experience for your users, complete these steps:
1. Sign in to the Azure portal as a user administrator.
-2. Go to **Azure Active Directory** > **User settings** > **Manage settings for access panel preview features**.
-3. Under **Users can use preview features for registering and managing security info**, set the selector to **None**, and then select **Save**.
+2. Go to **Azure Active Directory** > **User settings** > **Manage user feature settings**.
+3. Under **Users can use the combined security information registration experience**, set the selector to **None**, and then select **Save**.
Users will no longer be prompted to register by using the updated experience.
active-directory F5 Aad Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-aad-integration.md
No previous experience or F5 BIG-IP knowledge is necessary to implement SHA, but
## Deployment scenarios
Configuring a BIG-IP for SHA can be achieved using any of the many available methods, including several template-based options or a manual configuration. The following tutorials provide detailed guidance on implementing some of the more common patterns for BIG-IP and Azure AD SHA, using these methods.
The following tutorials provide detailed guidance on implementing some of the mo
The advanced approach provides a more elaborate, yet flexible way of implementing SHA by manually creating all BIG-IP configuration objects. You would use this approach for scenarios not covered by the guided configuration templates.
+Refer to the following advanced configuration guides for your integration requirements:
+- [F5 BIG-IP in Azure deployment walk-through](f5-bigip-deployment-guide.md)
+- [Securing F5 BIG-IP SSL-VPN with Azure AD SHA](f5-aad-password-less-vpn.md)
The advanced approach provides a more elaborate, yet flexible way of implementin
- [F5 BIG-IP APM and Azure AD SSO to Kerberos applications](f5-big-ip-kerberos-advanced.md)
-- [F5 BIG-IP APM and Azure AD SSO to Header-based applications](f5-big-ip-header-advanced.md)
+- [F5 BIG-IP APM and Azure AD SSO to header-based applications](f5-big-ip-header-advanced.md)
- [F5 BIG-IP APM and Azure AD SSO to forms-based applications](f5-big-ip-forms-advanced.md)
The advanced approach provides a more elaborate, yet flexible way of implementin
The Guided Configuration wizard, available from BIG-IP version 13.1, aims to minimize the time and effort of implementing common BIG-IP publishing scenarios. Its workflow-based framework provides an intuitive deployment experience tailored to specific access topologies.
+The latest version of the Guided Configuration, 16.1, now offers an Easy Button feature. With **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, without the management overhead of doing so on a per-app basis.
+
+Refer to the following guided configuration guides using Easy Button templates for your integration requirements:
- [F5 BIG-IP Easy Button for SSO to Kerberos applications](f5-big-ip-kerberos-easy-button.md)
+- [F5 BIG-IP Easy Button for SSO to header-based applications](f5-big-ip-headers-easy-button.md)
+- [F5 BIG-IP Easy Button for SSO to header-based and LDAP applications](f5-big-ip-ldap-header-easybutton.md)

## Additional resources
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
+
+ Title: Configure F5 BIG-IP's Easy Button for Header-based SSO
+description: Learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP Easy Button Guided Configuration.
+Last updated: 01/07/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for header-based SSO
+
+In this article, you'll learn to implement Secure Hybrid Access (SHA) with single sign-on (SSO) to header-based applications using F5's BIG-IP Easy Button Guided Configuration.
+
+Configuring a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+
+ * Improved Zero Trust governance through Azure AD pre-authentication and authorization
+
+ * Full SSO between Azure AD and BIG-IP published services
+
+ * Manage identities and access from a single control plane, [the Azure portal](https://portal.azure.com/)
+
+To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+For this scenario, we have a legacy application using HTTP authorization headers to control access to protected content.
+
+Ideally, application access would be managed directly by Azure AD, but being legacy it lacks any form of modern authentication protocol. Modernization would take considerable effort and time, introducing inevitable costs and the risk of potential downtime. Instead, a BIG-IP deployed between the public internet and the internal application will be used to gate inbound access to the application.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and header-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The SHA solution for this scenario is made up of:
+
+**Application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SSO to the BIG-IP. Through SSO, Azure AD provides the BIG-IP with any required attributes including a user identifier.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
+
+| Steps| Description |
+| - |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced CA policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP injects Azure AD attributes as headers in request to the application |
+| 6| Application authorizes request and returns payload |
+
+## Prerequisites
+
+Prior BIG-IP experience isn't necessary, but you'll need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license SKUs
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP Access Policy Manager™ (APM) standalone license
+
+ * F5 BIG-IP Access Policy Manager™ (APM) add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD
+
+* An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates while testing
+
+* An existing header-based application or [set up a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
+
+## BIG-IP deployment methods
+
+There are many methods to deploy BIG-IP for this scenario, including a template-driven Guided Configuration or an advanced configuration. This tutorial covers the Easy Button templates offered by Guided Configuration 16.1 and later.
+
+With the **Easy Button**, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. The end-to-end deployment and policy management of applications is handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+> [!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform. Registering with Azure AD establishes a trust relationship between your application and the IdP. BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights
+2. From the left navigation pane, select the **Azure Active Directory** service
+3. Under Manage, select **App registrations > New registration**
+4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+5. Specify who can use the application > **Accounts in this organizational directory only**
+6. Select **Register** to complete the initial app registration
+7. Navigate to **API permissions** and authorize the following Microsoft Graph permissions:
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+8. Grant admin consent for your organization
+9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
+10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
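+
+If you'd rather script the registration, the following Azure CLI sketch performs the equivalent steps. It's a hypothetical illustration rather than the official procedure: `<permission-guid>` is a placeholder for the GUID of each Graph permission listed above, and admin consent must still be granted afterwards.
+
+```bash
+# Register the client app and capture its application (client) ID
+APP_ID=$(az ad app create --display-name "F5 BIG-IP Easy Button" --query appId -o tsv)
+
+# Generate a client secret; note the password value from the output
+az ad app credential reset --id $APP_ID --query password -o tsv
+
+# Add one Microsoft Graph application permission as an example;
+# 00000003-0000-0000-c000-000000000000 is the Microsoft Graph resource ID
+az ad app permission add --id $APP_ID \
+  --api 00000003-0000-0000-c000-000000000000 \
+  --api-permissions <permission-guid>=Role
+
+# The tenant ID needed later can be read with:
+az account show --query tenantId -o tsv
+```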
+
+## Configure Easy Button
+
+Next, step through the Easy Button configurations, and complete the trust to start publishing the internal application. Start by provisioning your BIG-IP with an X509 certificate that Azure AD can use to sign SAML tokens and claims issued for SHA enabled services.
+
+1. From a browser, sign in to the F5 BIG-IP management console
+2. Navigate to **System > Certificate Management > Traffic Certificate Management > SSL Certificate List > Import**
+3. Select **PKCS 12 (IIS)** and import your certificate along with its private key (see the OpenSSL sketch after these steps if you need to create a PKCS#12 bundle)
+ Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+ ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
+
+4. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
+ You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+5. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
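+
+If your certificate and private key are in separate PEM files, a PKCS#12 bundle for step 3 can be created with OpenSSL. This is a sketch; the file and friendly names are examples:
+
+```bash
+# Bundle a PEM certificate and key into a PKCS#12 file for BIG-IP import
+openssl pkcs12 -export -in app-cert.pem -inkey app-key.pem \
+  -out app-cert.p12 -name "app-cert"
+```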
+
+## Configuration steps
+
+The **Easy Button** template will display the sequence of steps required to publish your application.
+
+![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
+
+
+### Configuration Properties
+
+These are general and service account properties. The **Configuration Properties tab** creates a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. The configuration can then be reused for publishing more applications through the Easy Button template.
+
+Consider the **Azure Service Account Details** section to represent the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
+
+Some of these are global settings that can be reused for publishing more applications, further reducing deployment time and effort.
+
+1. Enter **Configuration Name**. A unique name that enables an admin to easily distinguish between Easy Button configurations for published applications
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id**, **Client ID**, and **Client Secret** from your registered application
+
+4. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-ldap/config-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the SAML SP properties for the APM instance representing the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured. You'll need a corresponding DNS record for clients to resolve this address, but using a localhost record is fine during testing
+
+2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
+
+ Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the token content can't be intercepted and personal or corporate data compromised.
+
+3. Check **Enable Encrypted Assertion (Optional)**. Enable to request Azure AD to encrypt SAML assertions
+
+4. Select **Assertion Decryption Private Key**. The private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+
+5. Select **Assertion Decryption Certificate**. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions. This can be the certificate you provisioned earlier
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
+
+The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, but we'll use the generic SHA template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
+
+![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users will see on the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-ldap/azure-configuration-properties.png)
+
+3. Select **Signing key**. The IdP SAML signing certificate you provisioned earlier
+
+4. Select the same certificate for **Signing Certificate**
+
+5. Enter the certificate's password in **Passphrase**
+
+6. Optionally enable **Signing Options** to ensure the BIG-IP only accepts tokens and claims that have been signed by your Azure AD tenant
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates, Azure AD issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+
+For this example, you can include one more attribute:
+
+1. Enter **Header Name** as *employeeid*
+
+2. Enter **Source Attribute** as *user.employeeid*
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+
+#### Additional User Attributes
+
+In the **Additional User Attributes tab**, you can enable session augmentation required by a variety of distributed systems such as Oracle, SAP, and other Java-based implementations requiring attributes stored in other directories. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, and so on.
+
+![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and use signals from conditions like device platform, location, user or group membership, or application to determine access.
+
+The **Available Policies** list, by default, displays a list of policies that target selected apps.
+
+The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list. They are included by default but can be excluded if necessary.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+
+2. Select the right arrow and move it to the **Selected Policies** list
+
+Selected policies should have either an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. Exclude all policies while testing; you can go back and enable them later.
+
+![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+
+> [!NOTE]
+> The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
+
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP that are represented as a pool, containing one or more application servers.
+
+1. Choose from **Select a Pool**. Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. Update **Pool Servers**. Select an existing node or specify an IP and port for the server hosting the header-based application
+
+ ![Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+
+Our backend application listens on HTTP port 80; switch to port 443 if yours uses HTTPS.
+
+#### Single Sign-On & HTTP Headers
+
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. We'll enable the latter, configured as follows.
+
+* **Header Operation:** Insert
+
+* **Header Name:** upn
+
+* **Header Value:** %{session.saml.last.identity}
+
+* **Header Operation:** Insert
+
+* **Header Name:** employeeid
+
+* **Header Value:** %{session.saml.last.attr.name.employeeid}
+
+![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)
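+
+For illustration, once SSO is working the backend application receives requests carrying the injected headers. A hypothetical request might look like the following; actual values come from the claims Azure AD issues:
+
+```
+GET /protected/page HTTP/1.1
+Host: myapp.contoso.com
+upn: jane.doe@contoso.com
+employeeid: 1024
+```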
+
+>[!NOTE]
+> The APM session variables defined within curly brackets are case-sensitive. If you enter EmployeeID when the Azure AD attribute name is being sent as employeeid, it will cause an attribute mapping failure. In case of any issues, troubleshoot using the session analysis steps to check how the APM has the variables defined.
+
+### Session Management
+
+The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered there, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the client are terminated after a user has logged out.
+
+When the Easy Button wizard deploys a SAML application to Azure AD, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP initiated sign-outs from the MyApps portal also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML application's federation metadata is also imported, providing the APM with the SAML logout endpoint for Azure AD. This helps SP initiated sign-outs also terminate the session between a client and Azure AD. But for this to be truly effective, the APM needs to know exactly when a user signs out.
+
+Consider a scenario where the BIG-IP web portal isn't used; the user then has no way of instructing the APM to sign out. Even if the user signs out of the application itself, the BIG-IP is technically oblivious to this, so the application session could easily be reinstated through SSO. For this reason, SP initiated sign-out needs careful consideration to ensure sessions are securely terminated when no longer required. One way of achieving this is to add an SLO function to your application's sign-out button, so that it can redirect your client to the Azure AD SAML sign-out endpoint. The SAML sign-out endpoint for your tenant can be found in **App Registrations > Endpoints**.
+
+If making a change to the app isn't an option, consider having the BIG-IP listen for the app's sign-out call, and upon detecting the request have it trigger SLO. More details on using BIG-IP iRules to achieve this are available in the F5 knowledge articles [Configuring automatic session termination (logout) based on a URI-referenced file name](https://support.f5.com/csp/article/K42052145) and [Overview of the Logout URI Include option](https://support.f5.com/csp/article/K12056).
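+
+For reference, the tenant SAML sign-out endpoint generally takes the following form; treat this as a sketch and confirm the exact value under **App Registrations > Endpoints**:
+
+```bash
+# <tenant-id> is a placeholder for your Azure AD tenant ID
+LOGOUT_ENDPOINT="https://login.microsoftonline.com/<tenant-id>/saml2"
+```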
+
+## Summary
+
+Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides a breakdown of all applied settings before they're committed.
+
+Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals.
+
+## Next steps
+
+From a browser, **connect** to the application's external URL or select the **application's icon** in the MyApps portal. After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+The following screenshot shows the output of the injected headers displayed by our header-based application.
+
+![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve a particular set of requirements, or where you need to fast-track a proof of concept. For those scenarios, the BIG-IP offers the ability to disable the Guided Configuration's strict management mode. That way, the bulk of your configuration can be deployed through the wizard-based templates, and any tweaks or additional settings applied manually.
+
+For those scenarios, go ahead and deploy using the Guided Configuration. Then navigate to **Access > Guided Configuration** and select the small padlock icon on the far right of the row for your application's configuration. At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+For more information, see [Advanced Configuration for header-based SSO](./f5-big-ip-header-advanced.md).
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the manual approach for production services.
+
+## Troubleshooting
+
+Failure to access the SHA-protected application can be due to any number of factors, including a misconfiguration.
+
+BIG-IP logs are a great source of information for isolating all sorts of authentication and SSO issues. When troubleshooting, you should increase the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list, then select **OK**
+
+Reproduce your issue before looking at the logs, but remember to switch this back when finished. If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+2. Run the report for the last hour to see whether the logs provide any clues. The **View session variables** link for your session will also help you understand whether the APM is receiving the expected claims from Azure AD
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. In that case, head to **Access Policy > Overview > Active Sessions** and select the link for your active session
+
+2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes
+
+For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
+
+## Additional resources
+
+* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
+
+* [What is Conditional Access?](../conditional-access/overview.md)
+
+* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
Configuring BIG-IP published applications with Azure AD provides many benefits,
* Improved zero-trust governance through Azure AD pre-authentication and authorization
-* Full Single Sign-on (SSO) between Azure AD and BIG-IP published services
+* Full SSO between Azure AD and BIG-IP published services
-* Manage Identities and access from a single control plane - [The Azure portal](https://portal.azure.com/)
+* Manage identities and access from a single control plane, [The Azure portal](https://portal.azure.com/)
To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
The secure hybrid access solution for this scenario is made up of:
**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the backend application.
-Secure hybrid access for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
Next, step through the Easy Button configurations, and complete the trust to sta
![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
-1. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
+4. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**
You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.

![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
-2. Review the list of configuration steps and select **Next**
+5. Review the list of configuration steps and select **Next**
![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
If you donΓÇÖt see a BIG-IP error page, then the issue is probably more related
For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).

## Additional resources
-* [The end of passwords, go password-less](https://www.microsoft.com/en-gb/security/business/identity/passwordless)
+* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
* [What is Conditional Access?](../conditional-access/overview.md)
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-cost-recommendations.md
Azure Advisor helps you optimize and reduce your overall Azure spend by identify
## How to access cost recommendations in Azure Advisor
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
Azure Advisor helps you optimize and reduce your overall Azure spend by identify
Although certain application scenarios can result in low utilization by design, you can often save money by managing the size and number of your virtual machines.
-The recommended actions are shut down or resize, specific to the resource being evaluated.
-
-The advanced evaluation model in Advisor considers shutting down virtual machines when all of these statements are true:
-- P95th of the maximum value of CPU utilization is less than 3%.
-- Network utilization is less than 2% over a seven-day period.
-- Memory pressure is lower than the threshold values
-
-Advisor considers resizing virtual machines when it's possible to fit the current load in a smaller SKU (within the same SKU family) or a smaller number of instances such that:
-- The current load doesn't go above 80% utilization for workloads that aren't user facing.
-- The load doesn't go above 40% for user-facing workloads.
+Advisor uses machine-learning algorithms to identify low utilization and determine the ideal recommendation to ensure optimal usage of virtual machines. The recommended actions are shut down or resize, specific to the resource being evaluated.
+
+### Shutdown recommendations
+
+Advisor identifies resources that have not been used at all over the last 7 days and makes a recommendation to shut them down.
+
+- Metrics considered are CPU and outbound network utilization (memory is not considered for shutdown recommendations, since we've found that relying on CPU and network provides enough signal for this recommendation)
+- The last 7 days of utilization data are considered
+- Metrics are sampled every 30 seconds, aggregated to 1 minute, and then further aggregated to 30 minutes (we take the average of the max values while aggregating to 30 minutes)
+- A shutdown recommendation is created if:
+  - P95 of the maximum value of CPU utilization summed across all cores is less than 3%.
+  - P100 of the average CPU over the last 3 days (summed across all cores) is less than or equal to 2%.
+  - Outbound network utilization is less than 2% over a seven-day period.
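+
+To review the resulting recommendations outside the portal, you can list them with the Azure CLI. This is a minimal sketch and assumes an authenticated `az` session:
+
+```bash
+# List Advisor cost recommendations for the current subscription
+az advisor recommendation list --category Cost --output table
+```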
+
+### Resize SKU recommendations
+
+Advisor considers resizing virtual machines when it's possible to fit the current load on a more appropriate SKU, which costs less than the current one (we currently consider retail rates only during recommendation generation).
+
+- Metrics considered are CPU, memory, and outbound network utilization
+- The last 7 days of utilization data are considered
+- Metrics are sampled every 30 seconds, aggregated to 1 minute, and then further aggregated to 30 minutes (we take the average of the max values while aggregating to 30 minutes)
+- An appropriate SKU is determined based on the following criteria:
+  - Performance of the workloads on the new SKU should not be impacted. This is achieved by:
+    - For user-facing workloads: P95 of the CPU and outbound network utilization, and P100 of memory utilization don't go above 80% on the new SKU
+    - For non user-facing workloads:
+      - P95 of CPU and outbound network utilization don't go above 40% on the recommended SKU
+      - P100 of memory utilization doesn't go above 60% on the recommended SKU
+ - The new SKU has the same Accelerated Networking and Premium Storage capabilities
+ - The new SKU is supported in the current region of the Virtual Machine with the recommendation
+ - The new SKU is less expensive
+- Advisor determines the type of workload (user-facing/non user-facing) by analyzing the CPU utilization characteristics of the workload. This is based on some fascinating findings by Microsoft Research. You can find more details here: [Prediction-Based Power Oversubscription in Cloud Platforms - Microsoft Research](https://www.microsoft.com/research/publication/prediction-based-power-oversubscription-in-cloud-platforms/).
+- Advisor recommends not just smaller SKUs in the same family (for example, D3v2 to D2v2) but also SKUs in a newer version (for example, D3v2 to D2v3) or even a completely different family (for example, D3v2 to E3v2), based on the best fit and the lowest cost with no performance impact.
+
+### Burstable recommendations
+
+This is a special type of resize recommendation, where Advisor analyzes workloads to determine eligibility to run on specialized SKUs called Burstable SKUs that allow for variable workload performance requirements and are generally cheaper than general purpose SKUs. Learn more about burstable SKUs here: [B-series burstable - Azure Virtual Machines](../virtual-machines/sizes-b-series-burstable.md).
+
+- A burstable SKU recommendation is made if:
+  - The average CPU utilization is less than the burstable SKU's baseline performance
+  - The P95 of CPU is less than two times the burstable SKU's baseline performance
+  - The current SKU does not have accelerated networking enabled (burstable SKUs don't support accelerated networking yet)
+  - The burstable SKU credits are sufficient to support the average CPU utilization over 7 days
+- The result is a recommendation to resize the current VM to a burstable SKU with the same number of cores, taking advantage of the lower cost. This suits workloads with low average utilization but occasional high spikes, which is exactly the pattern B-series SKUs are designed for.
+
+Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information.
+To be more selective about acting on underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis, as sketched below.
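+
+A minimal sketch of adjusting that rule with the Azure CLI (the threshold is the average CPU percentage below which a VM is flagged as low-utilization):
+
+```bash
+# Lower the CPU threshold Advisor uses when flagging underutilized VMs
+az advisor configuration update --low-cpu-threshold 5
+```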
-Here, Advisor determines the type of workload by analyzing the CPU utilization characteristics of the workload.
+There are cases where the recommendations cannot be adopted or might not be applicable, such as some of these common scenarios (there may be other cases):
+- Virtual machine has been provisioned to accommodate upcoming traffic
+- Virtual machine uses other resources not considered by the resize algorithm, that is, metrics other than CPU, memory, and network
+- Specific testing being done on the current SKU, even if not utilized efficiently
+- Need to keep VM SKUs homogeneous
+- VM being utilized for disaster recovery purposes
-Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information.
+In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
-If you want to be more aggressive about identifying underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis.
+We are constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
## Optimize spend for MariaDB, MySQL, and PostgreSQL servers by right-sizing

Advisor analyzes your usage and evaluates whether your MariaDB, MySQL, or PostgreSQL database server resources have been underutilized for an extended time over the past seven days. Low resource utilization results in unwanted expenditure that you can fix without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend that you reduce the compute size (vCores) by half.
It's preferable to use Ephemeral OS Disk for short-lived IaaS VMs or VMs with st
Advisor identifies resources where reducing the table cache policy will free up Azure Data Explorer cluster nodes having low CPU utilization, memory, and a high cache size configuration.

## Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container
-Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%. Cost savings amount represents potential savings from using the recommended manual throughput, based on usage in the past 7 days. Your actual savings may vary depending on the manual throughput you set and whether your average utilization of throughput continues to be similar to the time period analyzed. The estimated savings does not account for any discount that may apply to your account.
+Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%. Cost savings amount represents potential savings from using the recommended manual throughput, based on usage in the past 7 days. Your actual savings may vary depending on the manual throughput you set and whether your average utilization of throughput continues to be similar to the time period analyzed. The estimated savings do not account for any discount that may apply to your account.
## Enable autoscale on your Azure Cosmos DB database or container

Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
To learn more about Advisor recommendations, see:
* [Advisor score](azure-advisor-score.md)
* [Get started with Advisor](advisor-get-started.md)
* [Advisor performance recommendations](advisor-performance-recommendations.md)
-* [Advisor high availability recommendations](advisor-high-availability-recommendations.md)
+* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
* [Advisor security recommendations](advisor-security-recommendations.md)
* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/advisor/advisor-release-notes.md
+
+ Title: Release notes for Azure Advisor
+description: A description of what's new and changed in Azure Advisor
+Last updated: 01/03/2022
+# What's new in Azure Advisor?
+
+Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+
+## January 2022
+
+The [**Shutdown/Resize your virtual machines**](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) recommendation was enhanced to improve its quality, robustness, and applicability.
+
+Improvements include:
+
+1. Cross SKU family series resize recommendations are now available.
+
+1. Cross version resize recommendations are now available. In general, newer versions of SKU families are more optimized, provide more features, and have better performance/cost ratios than older versions.
+
+1. For better actionability, we updated recommendation criteria to include other SKU characteristics such as accelerated networking support, premium storage support, availability in a region, inclusion in an availability set, and so on.
+
+![vm-right-sizing-recommendation](media/advisor-overview/advisor-vm-right-sizing.png)
+
+Read the [How-to guide](advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances) to learn more.
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/faq.md
Label: ```"admissions.enforcer/disabled": "true"``` or Annotation: ```"admission
## Is Azure Key Vault integrated with AKS?
-AKS isn't currently natively integrated with Azure Key Vault. However, the [Azure Key Vault provider for CSI Secrets Store][csi-driver] enables direct integration from Kubernetes pods to Key Vault secrets.
+[Azure Key Vault Provider for Secrets Store CSI Driver][aks-keyvault-provider] provides native integration of Azure Key Vault into AKS.
## Can I run Windows Server containers on AKS?
AKS doesn't apply Network Security Groups (NSGs) to its subnet and will not modi
[availability-zones]: ./availability-zones.md
[az-regions]: ../availability-zones/az-region.md
[uptime-sla]: ./uptime-sla.md
+[aks-keyvault-provider]: ./csi-secrets-store-driver.md
<!-- LINKS - external -->
[aks-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Howto Deploy Java Liberty App With Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app-with-postgresql.md
+
+ Title: Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+recommendations: false
+description: Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty/WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+Last updated: 11/19/2021
+keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aks, kubernetes
+
+# Deploy a Java application with Azure Database for PostgreSQL server to Open Liberty or WebSphere Liberty on an Azure Kubernetes Service (AKS) cluster
+
+This article demonstrates how to:
+
+* Run your Java, Java EE, Jakarta EE, or MicroProfile application on the Open Liberty or WebSphere Liberty runtime with a PostgreSQL DB connection.
+* Build the application Docker image using Open Liberty or WebSphere Liberty container images.
+* Deploy the containerized application to an AKS cluster using the Open Liberty Operator.
+
+The Open Liberty Operator simplifies the deployment and management of applications running on Kubernetes clusters. With the Open Liberty Operator, you can also perform more advanced operations, such as gathering traces and dumps.
+
+For more information on Open Liberty, see [the Open Liberty project page](https://openliberty.io/). For more information on IBM WebSphere Liberty, see [the WebSphere Liberty product page](https://www.ibm.com/cloud/websphere-liberty).
+
+* This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* If running the commands in this guide locally (instead of Azure Cloud Shell):
+ * Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, macOS, Windows Subsystem for Linux).
+ * Install a Java SE implementation (for example, [AdoptOpenJDK OpenJDK 8 LTS/OpenJ9](https://adoptopenjdk.net/?variant=openjdk8&jvmVariant=openj9)).
+ * Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+ * Install [Docker](https://docs.docker.com/get-docker/) for your OS.
+ * Create a user-assigned managed identity and assign `Contributor` role to that identity by following the steps in [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities). Return to this document after creating the identity and assigning it the necessary role.
+
+## Create a Jakarta EE runtime using the portal
+
+The steps in this section guide you to create a Jakarta EE runtime on AKS. After completing these steps, you will have an Azure Container Registry and an Azure Kubernetes Service cluster for the sample application.
+
+1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type **IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service**. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section.
+1. Select **Create** to start.
+1. In the **Basics** tab, create a new resource group called *java-liberty-project-rg*.
+1. Select *East US* as **Region**.
+1. Select the user-assigned managed identity you created above.
+1. Leave all other values at the defaults and start creating the cluster by selecting **Review + create**.
+1. When the validation completes, select **Create**. This may take up to ten minutes.
+1. After the deployment is complete, select the resource group into which you deployed the resources.
+ 1. In the list of resources in the resource group, select the resource with **Type** of **Container registry**.
+ 1. Save aside the values for **Registry name**, **Login server**, **Username**, and **password**. You may use the copy icon at the right of each field to copy the value of that field to the system clipboard.
+1. Navigate again to the resource group into which you deployed the resources.
+1. In the **Settings** section, select **Deployments**.
+1. Select the bottom most deployment. The **Deployment name** will match the publisher ID of the offer. It will contain the string **ibm**.
+1. In the left pane, select **Outputs**.
+1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
+
+ - **clusterName**
+ - **appDeploymentTemplateYamlEncoded**
+ - **cmdToConnectToCluster**
+
+ These values will be used later in this article. Note that several other useful commands are listed in the outputs.
+
+## Create an Azure Database for PostgreSQL server
+
+The steps in this section guide you through creating an Azure Database for PostgreSQL server using the Azure CLI for use with your app.
+
+1. Create a resource group
+
+ An Azure resource group is a logical group in which Azure resources are deployed and managed.
+
+ Create a resource group called *java-liberty-project-postgresql* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location.
+
+ ```bash
+ RESOURCE_GROUP_NAME=java-liberty-project-postgresql
+ az group create --name $RESOURCE_GROUP_NAME --location eastus
+ ```
+
+1. Create the PostgreSQL server
+
+ Use the [az postgres server create](/cli/azure/postgres/server#az_postgres_server_create) command to create the DB server. The following example creates a DB server named *youruniquedbname*. Make sure *youruniquedbname* is unique within Azure.
+
+ > [!TIP]
+ > To help ensure a globally unique name, prepend a disambiguation string such as your initials and the MMDD of today's date.
+
+ ```bash
+ export DB_NAME=youruniquedbname
+ export DB_ADMIN_USERNAME=myadmin
+ export DB_ADMIN_PASSWORD=<server_admin_password>
+ az postgres server create --resource-group $RESOURCE_GROUP_NAME --name $DB_NAME --location eastus --admin-user $DB_ADMIN_USERNAME --admin-password $DB_ADMIN_PASSWORD --sku-name GP_Gen5_2
+ ```
+
+1. Allow Azure Services, such as our Open Liberty and WebSphere Liberty application, to access the Azure PostgreSQL server.
+
+ ```bash
+ az postgres server firewall-rule create --resource-group $RESOURCE_GROUP_NAME \
+ --server-name $DB_NAME \
+ --name "AllowAllWindowsAzureIps" \
+ --start-ip-address "0.0.0.0" \
+ --end-ip-address "0.0.0.0"
+ ```
+
+1. Allow your local IP address to access the Azure PostgreSQL server. This is necessary to allow the `liberty:devc` run to access the database.
+
+ ```bash
+ az postgres server firewall-rule create --resource-group $RESOURCE_GROUP_NAME \
+ --server-name $DB_NAME \
+ --name "AllowMyIp" \
+ --start-ip-address YOUR_IP_ADDRESS \
+ --end-ip-address YOUR_IP_ADDRESS
+ ```
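+
+Optionally, verify connectivity from your machine before moving on. This sketch assumes the `psql` client is installed locally; note the `user@server` login format and that SSL is required:
+
+```bash
+psql "host=${DB_NAME}.postgres.database.azure.com port=5432 dbname=postgres user=${DB_ADMIN_USERNAME}@${DB_NAME} password=${DB_ADMIN_PASSWORD} sslmode=require"
+```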
+
+If you don't want to use the CLI, you may use the Azure portal by following the steps in [Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal](/azure/postgresql/quickstart-create-server-database-portal). You must also grant access to Azure services by following the steps in [Firewall rules in Azure Database for PostgreSQL - Single Server](/azure/postgresql/concepts-firewall-rules#connecting-from-azure). Return to this document after creating and configuring the database server.
+
+## Configure and deploy the sample application
+
+Follow the steps in this section to deploy the sample application on the Jakarta EE runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin` see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
+
+### Check out the application
+
+Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
+There are three samples in the repository. We will use *javaee-app-db-using-actions/postgres*. Here is the file structure of the application.
+
+```
+javaee-app-db-using-actions/postgres
+├─ src/main/
+│  ├─ aks/
+│  │  ├─ db-secret.yaml
+│  │  ├─ openlibertyapplication.yaml
+│  ├─ docker/
+│  │  ├─ Dockerfile
+│  │  ├─ Dockerfile-local
+│  │  ├─ Dockerfile-wlp
+│  │  ├─ Dockerfile-wlp-local
+│  ├─ liberty/config/
+│  │  ├─ server.xml
+│  ├─ java/
+│  ├─ resources/
+│  ├─ webapp/
+├─ pom.xml
+```
+
+The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
+
+In the *aks* directory, we placed two deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
+
+In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
+
+In the *liberty/config* directory, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+
+### Acquire necessary variables from AKS deployment
+
+After the offer is successfully deployed, an AKS cluster with a namespace will be generated automatically. The AKS cluster is configured to connect to the ACR using a pre-created secret under the generated namespace. Before we get started with the application, we need to extract the namespace and the pull-secret name of the ACR configured for the AKS.
+
+1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` value you saved above. The output contains all the variables we need.
+
+ ```bash
+ echo <appDeploymentTemplateYamlEncoded> | base64 -d
+ ```
+
+1. Save the `metadata.namespace` and `spec.pullSecret` from this yaml output aside for later use in this article.
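+
+A hypothetical one-liner to pull both values out in one pass (assumes `grep -E` is available on your system):
+
+```bash
+echo <appDeploymentTemplateYamlEncoded> | base64 -d | grep -E 'namespace:|pullSecret:'
+```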
+
+### Build the project
+
+Now that you have gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
+
+```bash
+cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
+
+# The following variables will be used for deployment file generation
+export LOGIN_SERVER=<Azure_Container_Registry_Login_Server_URL>
+export REGISTRY_NAME=<Azure_Container_Registry_Name>
+export USER_NAME=<Azure_Container_Registry_Username>
+export PASSWORD=<Azure_Container_Registry_Password>
+export DB_SERVER_NAME=${DB_NAME}.postgres.database.azure.com
+export DB_PORT_NUMBER=5432
+export DB_TYPE=postgres
+export DB_USER=${DB_ADMIN_USERNAME}@${DB_NAME}
+export DB_PASSWORD=${DB_ADMIN_PASSWORD}
+export NAMESPACE=<metadata.namespace>
+export PULL_SECRET=<pullSecret>
+
+mvn clean install
+```
+
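+Since the POM reads these values from the environment, a quick sanity check before running the build can save a failed run. A sketch using bash indirect expansion:
+
+```bash
+# Warn about any variable the build expects but that is not set
+for v in LOGIN_SERVER REGISTRY_NAME USER_NAME PASSWORD DB_SERVER_NAME DB_PORT_NUMBER DB_TYPE DB_USER DB_PASSWORD NAMESPACE PULL_SECRET; do
+  [ -z "${!v}" ] && echo "WARNING: $v is not set"
+done
+```
+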
+### Test your project locally
+
+Use the `liberty:devc` command to run and test the project locally before dealing with any Azure complexity. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
+In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`.
+
+1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
+
+1. Start the application in `liberty:devc` mode
+
+ ```bash
+ cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
+
+ # If you are running with Open Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_TYPE} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
+
+ # If you are running with WebSphere Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_TYPE} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ ```
+
+1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
+
+1. Press `Ctrl+C` to stop `liberty:devc` mode.
+
+### Build image for AKS deployment
+
+After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+
+```bash
+cd <path-to-your-repo>/javaee-app-db-using-actions/postgres
+
+# Fetch maven artifactId as image name, maven build version as image version
+IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
+
+cd <path-to-your-repo>/javaee-app-db-using-actions/postgres/target
+
+# If you are running with Open Liberty
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
+
+# If you are running with WebSphere Liberty
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
+```
+
+### Upload image to ACR
+
+Now, we upload the built image to the ACR created in the offer.
+
+```bash
+docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
+docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+```
+
+### Deploy and test the application
+
+The steps in this section deploy and test the application.
+
+1. Connect to the AKS cluster
+
+ Paste the value of **cmdToConnectToCluster** into a bash shell.
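+
+   The value is typically an `az aks get-credentials` command along the lines of the following sketch (the resource names are placeholders from your own deployment):
+
+   ```azurecli-interactive
+   az aks get-credentials --resource-group <RESOURCE_GROUP_NAME> --name <AKS_CLUSTER_NAME> --overwrite-existing
+   ```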
+
+1. Apply the DB secret
+
+ ```bash
+ kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/postgres/target/db-secret.yaml
+ ```
+
+ You will see the output `secret/db-secret-postgres created`.
+
+1. Apply the deployment file
+
+ ```bash
+ kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/postgres/target/openlibertyapplication.yaml
+ ```
+
+1. Wait for the pods to be restarted
+
+ Wait until all pods are restarted successfully using the following command.
+
+ ```bash
+ kubectl get pods -n $NAMESPACE --watch
+ ```
+
+ You should see output similar to the following to indicate that all the pods are running.
+
+ ```bash
+ NAME READY STATUS RESTARTS AGE
+ javaee-cafe-cluster-67cdc95bc-2j2gr 1/1 Running 0 29s
+ javaee-cafe-cluster-67cdc95bc-fgtt8 1/1 Running 0 29s
+ javaee-cafe-cluster-67cdc95bc-h47qm 1/1 Running 0 29s
+ ```
+
+1. Verify the results
+
+ 1. Get endpoint of the deployed service
+
+ ```bash
+ kubectl get service -n $NAMESPACE
+ ```
+
+    1. Go to `http://<EXTERNAL-IP>:9080` in your browser to test the application, as in the sketch below.
+
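+A shell sketch of that last check, assuming the application's load balancer is the only service in the namespace:
+
+```bash
+EXTERNAL_IP=$(kubectl get service -n $NAMESPACE -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
+curl -I http://${EXTERNAL_IP}:9080/
+```
+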
+## Clean up resources
+
+To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
+
+```azurecli-interactive
+az group delete --name <RESOURCE_GROUP_NAME> --yes --no-wait
+```
+
+## Next steps
+
+* [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
+* [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/)
+* [Open Liberty](https://openliberty.io/)
+* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
+* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/howto-deploy-java-liberty-app.md
For more details on Open Liberty, see [the Open Liberty project page](https://op
An Azure resource group is a logical group in which Azure resources are deployed and managed.
-Create a resource group called *java-liberty-project* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location. This resource group will be used later for creating the Azure Container Registry (ACR) instance and the AKS cluster.
+Create a resource group called *java-liberty-project* using the [az group create](/cli/azure/group#az_group_create) command in the *eastus* location. This resource group will be used later for creating the Azure Container Registry (ACR) instance and the AKS cluster.
```azurecli-interactive RESOURCE_GROUP_NAME=java-liberty-project
NAME STATUS ROLES AGE VERSION
aks-nodepool1-xxxxxxxx-yyyyyyyyyy Ready agent 76s v1.18.10 ```
+## Create an Azure SQL Database
+
+The steps in this section guide you through creating an Azure SQL Database single database for use with your app. If your application doesn't require a database, you can skip this section. A CLI alternative to the portal flow is sketched at the end of this section.
+
+1. Create a single database in Azure SQL Database by following the steps in: [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart). Return to this document after creating and configuring the database server.
+ > [!NOTE]
+ >
+ > * At the **Basics** step, write down **Database name**, ***Server name**.database.windows.net*, **Server admin login**, and **Password**.
+ > * At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
+ >
+ > ![Screenshot of configuring SQL database networking](./media/howto-deploy-java-liberty-app/create-sql-database-networking.png)
+
+2. Once your database is created, open **your SQL server** > **Firewalls and virtual networks**. Set **Minimal TLS Version** to **> 1.0** and select **Save**.
+
+ ![Screenshot of configuring SQL database minimum TLS version](./media/howto-deploy-java-liberty-app/sql-database-minimum-TLS-version.png)
+
+3. Open **your SQL database** > **Connection strings** > Select **JDBC**. Write down the **Port number** that follows the SQL server address. For example, **1433** is the port number in the example below.
+
+ ![Screenshot of getting SQL server jdbc connection string](./media/howto-deploy-java-liberty-app/sql-server-jdbc-connection-string.png)
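+
+If you prefer the CLI to the portal flow above, the same setup condenses to roughly the following sketch (all names are placeholders; the 0.0.0.0 rule is what **Allow Azure services and resources to access this server** configures):
+
+```azurecli-interactive
+az sql server create --resource-group <RESOURCE_GROUP_NAME> --name <Server name> \
+    --admin-user <Server admin login> --admin-password <Server admin password> --location eastus
+az sql db create --resource-group <RESOURCE_GROUP_NAME> --server <Server name> --name <Database name>
+# 0.0.0.0 for both addresses allows access from Azure services
+az sql server firewall-rule create --resource-group <RESOURCE_GROUP_NAME> --server <Server name> \
+    --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
+```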
++ ## Install Open Liberty Operator After creating and connecting to the cluster, install the [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator/tree/main/deploy/releases/0.8.0#option-2-install-using-kustomize) by running the following commands.
wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/de
kubectl apply -k overlays/watch-all-namespaces ```
-## Build application image
+## Configure and build the application image
To deploy and run your Liberty application on the AKS cluster, containerize your application as a Docker image using [Open Liberty container images](https://github.com/OpenLiberty/ci.docker) or [WebSphere Liberty container images](https://github.com/WASdev/ci.docker).
+# [with DB connection](#tab/with-sql)
+
+Follow the steps in this section to deploy the sample application on the Jakarta EE runtime. These steps use Maven and the `liberty-maven-plugin`. To learn more about the `liberty-maven-plugin`, see [Building a web application with Maven](https://openliberty.io/guides/maven-intro.html).
+
+### Check out the application
+
+Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks).
+There are three samples in the repository. We will use *javaee-app-db-using-actions/mssql*. Here is the file structure of the application.
+
+```
+javaee-app-db-using-actions/mssql
+├─ src/main/
+│ ├─ aks/
+│ │ ├─ db-secret.yaml
+│ │ ├─ openlibertyapplication.yaml
+│ ├─ docker/
+│ │ ├─ Dockerfile
+│ │ ├─ Dockerfile-local
+│ │ ├─ Dockerfile-wlp
+│ │ ├─ Dockerfile-wlp-local
+│ ├─ liberty/config/
+│ │ ├─ server.xml
+│ ├─ java/
+│ ├─ resources/
+│ ├─ webapp/
+├─ pom.xml
+```
+
+The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
+
+In the *aks* directory, we placed two deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
+
+In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
+
+In the *liberty/config* directory, the *server.xml* is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
+
+### Build the project
+
+Now that you have gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
+
+```bash
+cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+
+# The following variables will be used for deployment file generation
+export LOGIN_SERVER=${LOGIN_SERVER}
+export REGISTRY_NAME=${REGISTRY_NAME}
+export USER_NAME=${USER_NAME}
+export PASSWORD=${PASSWORD}
+export DB_SERVER_NAME=<Server name>.database.windows.net
+export DB_PORT_NUMBER=1433
+export DB_NAME=<Database name>
+export DB_USER=<Server admin login>@<Server name>
+export DB_PASSWORD=<Server admin password>
+export PULL_SECRET=acr-secret
+export NAMESPACE=${OPERATOR_NAMESPACE}
+
+mvn clean install
+```
+
+### Test your project locally
+
+Use the `liberty:devc` command to run and test the project locally before dealing with any Azure complexity. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
+In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`.
+
+1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
+
+1. Start the application in `liberty:devc` mode
+
+ ```bash
+ cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+
+ # If you are running with Open Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
+
+ # If you are running with WebSphere Liberty
+ mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ ```
+
+1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser to verify the application is accessible and all functions are working.
+
+1. Press `Ctrl+C` to stop `liberty:devc` mode.
+
+### Build image for AKS deployment
+
+After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+
+```bash
+cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+
+# Fetch maven artifactId as image name, maven build version as image version
+IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
+
+cd <path-to-your-repo>/javaee-app-db-using-actions/mssql/target
+
+# If you are running with Open Liberty
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
+
+# If you are running with WebSphere Liberty
+docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
+```
+
+### Upload image to ACR
+
+Now, we upload the built image to the ACR created in the previous steps.
+
+```bash
+docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
+docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+```
+
+# [without DB connection](#tab/without-sql)
+ 1. Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/open-liberty-on-aks). 1. Change directory to `javaee-app-simple-cluster` of your local clone. 1. Run `mvn clean package` to package the application.
To deploy and run your Liberty application on the AKS cluster, containerize your
az acr build -t ${artifactId}:${version} -r $REGISTRY_NAME --file=Dockerfile-wlp . ``` ++ ## Deploy application on the AKS cluster
+The steps in this section deploy the application.
+
+# [with DB connection](#tab/with-sql)
+
+Follow the steps below to deploy the Liberty application on the AKS cluster.
+
+1. Create a pull secret so that the AKS cluster is authenticated to pull images from the ACR instance.
+
+ ```bash
+ kubectl create secret docker-registry ${PULL_SECRET} \
+ --docker-server=${LOGIN_SERVER} \
+ --docker-username=${USER_NAME} \
+ --docker-password=${PASSWORD}
+ ```
+1. Retrieve the value for `artifactId` defined in `pom.xml`.
+
+ ```bash
+ cd <path-to-your-repo>/javaee-app-db-using-actions/mssql
+ artifactId=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
+ ```
+
+1. Apply the DB secret and deployment file by running the following command:
+
+ ```bash
+ cd <path-to-your-repo>/javaee-app-db-using-actions/mssql/target
+
+ # Apply DB secret
+ kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/mssql/target/db-secret.yaml
+
+ # Apply deployment file
+ kubectl apply -f <path-to-your-repo>/javaee-app-db-using-actions/mssql/target/openlibertyapplication.yaml
+
+ # Check if OpenLibertyApplication instance is created
+ kubectl get openlibertyapplication ${artifactId}-cluster
+
+ NAME IMAGE EXPOSED RECONCILED AGE
+ javaee-cafe-cluster youruniqueacrname.azurecr.io/javaee-cafe:1.0.25 True 59s
+
+ # Check if deployment created by Operator is ready
+ kubectl get deployment ${artifactId}-cluster --watch
+
+ NAME READY UP-TO-DATE AVAILABLE AGE
+ javaee-cafe-cluster 0/3 3 0 20s
+ ```
+
+1. Wait until you see `3/3` under the `READY` column and `3` under the `AVAILABLE` column, then use `CTRL-C` to stop the `kubectl` watch process.
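+
+Equivalently, you can block until the rollout completes instead of watching (a sketch; the deployment name follows the `${artifactId}-cluster` pattern used above):
+
+```bash
+kubectl rollout status deployment/${artifactId}-cluster --timeout=5m
+```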
+
+# [without DB connection](#tab/without-sql)
+ Follow steps below to deploy the Liberty application on the AKS cluster. 1. Create a pull secret so that the AKS cluster is authenticated to pull image from the ACR instance.
Follow steps below to deploy the Liberty application on the AKS cluster.
1. Wait until you see `3/3` under the `READY` column and `3` under the `AVAILABLE` column, use `CTRL-C` to stop the `kubectl` watch process. ++ ### Test the application When the application runs, a Kubernetes load balancer service exposes the application front end to the internet. This process can take a while to complete.
Open a web browser to the external IP address of your service (`52.152.189.57` f
## Clean up the resources
-To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
+To avoid Azure charges, you should clean up unnecessary resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, container registry, and all related resources.
```azurecli-interactive az group delete --name $RESOURCE_GROUP_NAME --yes --no-wait
applied-ai-services Tutorial Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/applied-ai-services/form-recognizer/tutorial-logic-apps.md
Before we jump into creating the Logic App, we have to set up a OneDrive folder.
:::image border="true" type="content" source="media/logic-apps-tutorial/onedrive-setup.gif" alt-text="GIF showing steps to create a folder in OneDrive.":::
-### Create a Logic App resource
+## Create a Logic App resource
At this point, you should have a Form Recognizer resource and a OneDrive folder all set. Now, it's time to create a Logic App resource.
At this point, you should have a Form Recognizer resource and a OneDrive folder
:::image border="true" type="content" source="media/logic-apps-tutorial/logic-app-connector-demo-nine.png" alt-text="Image of the Logic App Designer.":::
-### Create automation flow
+## Create automation flow
Now that you have the Logic App connector resource set up and configured, the only thing left to do is to create the automation flow and test it out!
Now that you have the Logic App connector resource set up and configured, the on
> * The Logic App designer will automatically add a "for each loop" around the send email action. This is normal due to output format that may return more than one invoice from PDFs in the future. > * The current version only returns a single invoice per PDF.
-### Test automation flow
+## Test automation flow
Let's quickly review what we've done before we test our flow:
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
Title: Upgrade direct mode Azure Arc data controller using the CLI
+ Title: Upgrade directly connected Azure Arc data controller using the CLI
description: Article describes how to upgrade a directly connected Azure Arc data controller using the CLI
Last updated 12/10/2021
-# Upgrade direct mode Azure Arc data controller using the CLI
+# Upgrade a directly connected Azure Arc data controller using the CLI
This article describes how to upgrade a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
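+
+For reference, the upgrade itself is driven through `az arcdata dc upgrade`. A minimal sketch for a directly connected controller follows; treat the exact flag set as an assumption and confirm it with `az arcdata dc upgrade --help` for your CLI version:
+
+```azurecli
+az arcdata dc upgrade --resource-group <resource-group> --name <data-controller-name> --desired-version <target-image-tag>
+```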
+ ## Prerequisites
-You will need a direct mode data controller with the imageTag v1.0.0_2021-07-30 or later.
+You will need a directly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
To check the version, run:
v1.0.0_2021-07-30
## Upgrade data controller
-This section shows how to upgrade a data controller in direct mode.
+This section shows how to upgrade a directly connected data controller.
> [!NOTE] > Some of the data services tiers and modes are generally available and some are in preview.
This section shows how to upgrade a data controller in direct mode.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md).
-### Direct mode
+### Upgrade
You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
Title: Upgrade indirect mode Azure Arc data controller using the CLI
-description: Upgrade indirect mode Azure Arc data controller using the CLI
+ Title: Upgrade indirectly connected Azure Arc data controller using the CLI
+description: Article describes how to upgrade an indirectly connected Azure Arc data controller using the CLI
Last updated 11/03/2021
-# Upgrade indirect mode Azure Arc data controller using the CLI
+# Upgrade an indirectly connected Azure Arc data controller using the CLI
This article describes how to upgrade an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
-> [!IMPORTANT]
-> This article does not apply to a directly connected Azure Arc-enabled data controller. For the latest information about how to upgrade a directly connected data controller, see the [release notes](./release-notes.md#data-controller-upgrade).
+During a data controller upgrade, portions of the data control plane such as Custom Resource Definitions (CRDs) and containers may be upgraded. An upgrade of the data controller will not cause downtime for the data services (SQL Managed Instance or PostgreSQL Hyperscale server).
## Prerequisites
-You will need an indirect mode data controller with the imageTag v1.0.0_2021-07-30 or later.
+You will need an indirectly connected data controller with the imageTag v1.0.0_2021-07-30 or later.
To check the version, run:
v1.0.0_2021-07-30
## Upgrade data controller
-This section shows how to upgrade a data controller in indirect mode.
+This section shows how to upgrade an indirectly connected data controller.
> [!NOTE] > Some of the data services tiers and modes are generally available and some are in preview.
This section shows how to upgrade a data controller in indirect mode.
> To upgrade, delete all non-GA database instances. You can find the list of generally available > and preview services in the [Release Notes](./release-notes.md).
-### Indirect mode
+### Upgrade
You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirect mode Azure Arc data controller using Kubernetes tools
-description: Article explains how to upgrade indirect mode Azure Arc data controller using Kubernetes tools
+ Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
Last updated 12/09/2021
-# Upgrade indirect mode Azure Arc data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
Title: Upgrade an indirect mode Azure Arc-enabled Managed Instance using the CLI
-description: Upgrade an indirect mode Azure Arc-enabled Managed Instance using the CLI
+ Title: Upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
+description: Article describes how to upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
Last updated 11/03/2021
-# Upgrade an indirect mode Azure Arc-enabled Managed Instance using the CLI
+# Upgrade an indirectly connected Azure Arc-enabled Managed Instance using the CLI
+
+This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
## Prerequisites
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
### General Purpose
+During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and reprovisioned. This causes a short period of downtime while the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency.
+ To upgrade the Managed Instance, use the following command: ````cli
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
Title: Upgrade a direct mode Azure Arc-enabled Managed Instance using the CLI
-description: Article describes how to upgrade a direct mode Azure Arc-enabled Managed Instance using the CLI
+ Title: Upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
+description: Article describes how to upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
Last updated 11/10/2021
-# Upgrade a direct mode Azure Arc-enabled Managed Instance using the CLI
+# Upgrade a directly connected Azure Arc-enabled Managed Instance using the CLI
This article describes how to upgrade a SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using the Azure CLI (`az`).
Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
### General Purpose
+During a SQL Managed Instance General Purpose upgrade, the containers in the pod will be upgraded and reprovisioned. This causes a short period of downtime while the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. Read [Overview of the reliability pillar](/azure/architecture/framework/resiliency/overview) for more information on architecting resiliency.
+ To upgrade the Managed Instance, use the following command: ````cli
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Title: Upgrade indirect mode Azure Arc-enabled Managed Instance - Kubernetes
-description: Describes how to upgrade indirect mode Azure Arc-enabled Managed Instance using Kubernetes
+ Title: Upgrade an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
Last updated 11/08/2021
-# Upgrade an indirect mode Azure Arc-enabled Managed Instance using Kubernetes tools
-
-This article describes how to upgrade a SQL Managed Instance deployed on a directly connected Azure Arc-enabled data controller using Kubernetes tools.
+# Upgrade an indirectly connected Azure Arc-enabled Managed Instance using Kubernetes tools
+This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller using Kubernetes tools.
## Prerequisites
Before you can proceed with the tasks in this article you need:
- To connect and authenticate to a Kubernetes cluster - An existing Kubernetes context selected
-You need an indirect mode data controller with the `imageTag v1.0.0_2021-07-30` or greater.
+You need an indirectly connected data controller with the `imageTag v1.0.0_2021-07-30` or greater.
## Limitations
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/java-standalone-config.md
You can specify your own configuration file path using either
If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.2.4.jar` is located.
+Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration
+via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
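+
+For example, a minimal sketch in a shell (the connection string value and application jar name are placeholders; the JSON keys are the same ones an `applicationinsights.json` file would contain):
+
+```bash
+export APPLICATIONINSIGHTS_CONFIGURATION_CONTENT='{
+  "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+  "sampling": { "percentage": 50 }
+}'
+java -javaagent:applicationinsights-agent-3.2.4.jar -jar <your-app>.jar
+```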
+ ## Connection string Connection string is required. You can find your connection string in your Application Insights resource:
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/logs-data-export.md
You need to have 'write' permissions to both workspace and destination to config
Don't use an existing event hub that has other, non-monitoring data stored in it to better control access to the data and prevent reaching event hub namespace ingress rate limit, failures, and latency.
-Data is sent to your event hub as it reaches Azure Monitor and exported to destinations located in workspace region. When specific event hub isn't provided in rule, an event hub is created for each data type that you export with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would sent to an event hub named *am-SecurityEvent*. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
+Data is sent to your event hub as it reaches Azure Monitor and is exported to destinations located in the workspace region. You can create multiple export rules to the same event hub namespace by providing a different `event hub name` in each rule. When an `event hub name` isn't provided, a default event hub is created for each table that you export, with the name *am-* followed by the name of the table. For example, the table *SecurityEvent* would be sent to an event hub named *am-SecurityEvent*. The [number of supported event hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different event hub namespaces, or provide an event hub name in the rule to export all tables to that event hub.
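+
+As a sketch, creating an export rule from the CLI looks roughly like the following; the parameter set is from `az monitor log-analytics workspace data-export create` and should be verified against your CLI version:
+
+```azurecli
+az monitor log-analytics workspace data-export create --resource-group <resource-group> \
+    --workspace-name <workspace-name> --name <export-rule-name> \
+    --tables SecurityEvent Heartbeat \
+    --destination <event-hub-namespace-resource-id> --enable true
+```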
> [!NOTE]
-> - 'Basic' event hub tier is limited--it supports lower event size [limit](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and no the is no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option. Since data volume to your workspace increases over time and consequence event hub scaling is required, use 'Standard', 'Premium' or 'Dedicated' event hub tiers with **Auto-inflate** feature enabled to automatically scale up and increase the number of throughput units. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md).
+> - The 'Basic' event hub tier is limited: it supports a [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and has no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Since data volume to your workspace increases over time, and event hub scaling is consequently required, use the 'Standard', 'Premium', or 'Dedicated' event hub tiers with the **Auto-inflate** feature enabled. See [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md).
> - Data export can't reach event hub resources when virtual networks are enabled. You have to enable the **Allow trusted Microsoft services** to bypass this firewall setting in event hub, to grant access to your Event Hubs resources. ## Enable data export
azure-monitor Resource Manager Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/resource-manager-workspace.md
The following sample creates a new empty Log Analytics workspace.
"heartbeatTableRetention": { "type": "int", "metadata": {
- "description": "Number of days to retain data in HeartBeat table."
+ "description": "Number of days to retain data in Heartbeat table."
}
+ }
}, "resources": [ {
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na Previously updated : 08/25/2021 Last updated : 01/07/2022 # Delegate a subnet to Azure NetApp Files
You must delegate a subnet to Azure NetApp Files. When you create a volume, yo
You can have only a single delegated subnet in a VNet. A NetApp account can deploy volumes into multiple VNets, each having its own delegated subnet. * You cannot designate a network security group or service endpoint in the delegated subnet. Doing so causes the subnet delegation to fail. * Access to a volume from a globally peered virtual network is not currently supported.
-* [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) (UDRs) and Network security groups (NSGs) are not supported on delegated subnets for Azure NetApp Files. However, you can apply UDRs and NSGs to other subnets, even within the same VNet as the subnet delegated to Azure NetApp Files.
- Azure NetApp Files creates a system route to the delegated subnet. The route is shown in **Effective routes** in the route table if you need it for troubleshooting.
+* For Azure NetApp Files support of [User-defined routes](../virtual-network/virtual-networks-udr-overview.md#custom-routes) (UDRs) and Network security groups (NSGs), see [Constraints in Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md#constraints).
+ To establish routing or access control ***to*** the Azure NetApp Files delegated subnet, you can apply UDRs and NSGs to other subnets, even within the same VNet as the subnet delegated to Azure NetApp Files.
## Steps
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/create-active-directory-connections.md
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. * **Security privilege users** <!-- SMB CA share feature -->
- You can grant security privilege (`SeSecurityPrivilege`) to users that require elevated privilege to access the Azure NetApp Files volumes. The specified user accounts will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
+ You can grant security privilege (`SeSecurityPrivilege`) to AD users or groups that require elevated privilege to access the Azure NetApp Files volumes. The specified AD users or groups will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
- For example, user accounts used for installing SQL Server in certain scenarios must be granted elevated security privilege. If you are using a non-administrator (domain) account to install SQL Server and the account does not have the security privilege assigned, you should add security privilege to the account.
+ The following privilege applies when you use the **Security privilege users** setting:
+
+ | Privilege | Description |
+ |||
+ | `SeSecurityPrivilege` | Manage log operations. |
+
+ For example, user accounts used for installing SQL Server in certain scenarios must (temporarily) be granted elevated security privilege. If you are using a non-administrator (domain) account to install SQL Server and the account does not have the security privilege assigned, you should add security privilege to the account.
> [!IMPORTANT] > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
This setting is configured in the **Active Directory Connections** under **NetAp
![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png) * **Backup policy users**
- You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. The specified accounts will be allowed to change the NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for migrating data to an SMB file share in Azure NetApp Files.
+ You can grant additional security privileges to AD users or groups that require elevated backup privileges to access the Azure NetApp Files volumes. The specified AD user accounts or groups will have elevated NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for backing up, restoring, or migrating data to an SMB file share in Azure NetApp Files.
+
+ The following privileges apply when you use the **Backup policy users** setting:
+
+ | Privilege | Description |
+ |||
+ | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. |
+ | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
- * **Administrators**
+ * **Administrators privilege users**
+
+ You can grant additional security privileges to AD users or groups that require even more elevated privileges to access the Azure NetApp Files volumes. The specified accounts will have further elevated permissions at the file or folder level.
+
+ The following privileges apply when you use the **Administrators privilege users** setting:
- You can specify users or groups that will be given administrator privileges on the volume.
+ | Privilege | Description |
+ |||
+ | `SeBackupPrivilege` | Back up files and directories, overriding any ACLs. |
+ | `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
+ | `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
+ | `SeTakeOwnershipPrivilege` | Take ownership of files or other objects. |
+ | `SeSecurityPrivilege` | Manage log operations. |
![Screenshot that shows the Administrators box of Active Directory connections window.](../media/azure-netapp-files/active-directory-administrators.png)
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-restore-vms.md
Title: Restore VMs by using the Azure portal
description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 12/24/2021 Last updated : 01/08/2022
As one of the [restore options](#restore-options), you can create a VM quickly w
1. In **Restore Virtual Machine** > **Create new** > **Restore Type**, select **Create a virtual machine**. 1. In **Virtual machine name**, specify a VM that doesn't exist in the subscription. 1. In **Resource group**, select an existing resource group for the new VM, or create a new one with a globally unique name. If you assign a name that already exists, Azure assigns the group the same name as the VM.
-1. In **Virtual network**, select the VNet in which the VM will be placed. All VNets associated with the subscription in the same location as the vault, which are active and not attached with any affinity group, are displayed. Select the subnet.
+1. In **Virtual network**, select the VNet in which the VM will be placed. All VNets associated with the subscription that are in the same location as the vault, active, and not attached to any affinity group are displayed. Select the subnet.
The first subnet is selected by default.
If CRR is enabled, you can view the backup items in the secondary region.
The secondary region restore user experience will be similar to the primary region restore user experience. When configuring details in the Restore Configuration pane to configure your restore, you'll be prompted to provide only secondary region parameters.
-Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-objective) is up to 12 hours from the primary region, even though [read-access geo-redundant storage (RA-GRS)](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) replication is 15 minutes.
+Currently, secondary region [RPO](azure-backup-glossary.md#rpo-recovery-point-objective) is _36 hours_. This is because the RPO in the primary region is _24 hours_, and it can take up to _12 hours_ to replicate the backup data from the primary to the secondary region.
![Choose VM to restore](./media/backup-azure-arm-restore-vms/sec-restore.png)
In summary, the **Availability Zone** will only appear when
## Restoring unmanaged VMs and disks as managed
-You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restore aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
+You're provided with an option to restore [unmanaged disks](../storage/common/storage-disaster-recovery-guidance.md#azure-unmanaged-disks) as [managed disks](../virtual-machines/managed-disks-overview.md) during restore. By default, the unmanaged VMs / disks are restored as unmanaged VMs / disks. However, if you choose to restore as managed VMs / disks, it's now possible to do so. These restores aren't triggered from the snapshot phase but only from the vault phase. This feature isn't available for unmanaged encrypted VMs.
![Restore as managed disks](./media/backup-azure-arm-restore-vms/restore-as-managed-disks.png)
backup Tutorial Backup Sap Hana Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/tutorial-backup-sap-hana-db.md
Title: Tutorial - Back up SAP HANA databases in Azure VMs description: In this tutorial, learn how to back up SAP HANA databases running on Azure VM to an Azure Backup Recovery Services vault. Previously updated : 09/27/2021 Last updated : 01/10/2022+++ # Tutorial: Back up SAP HANA databases in an Azure VM
Here's a summary of steps required for completing the pre-registration script ru
After running the pre-registration script successfully and verifying, you can then proceed to check [the connectivity requirements](backup-azure-sap-hana-database.md#establish-network-connectivity) and then [configure backup](#discover-the-databases) from Recovery services vault
-## Create a Recovery Services vault
-A Recovery Services vault is an entity that stores the backups and recovery points created over time. The Recovery Services vault also contains the backup policies that are associated with the protected virtual machines.
-
-To create a Recovery Services vault:
-
-1. Sign in to your subscription in the [Azure portal](https://portal.azure.com/).
-
-2. On the left menu, select **All services**
-
- ![Select All services](./media/tutorial-backup-sap-hana-db/all-services.png)
-
-3. In the **All services** dialog box, enter **Recovery Services**. The list of resources filters according to your input. In the list of resources, select **Recovery Services vaults**.
-
- ![Select Recovery Services vaults](./media/tutorial-backup-sap-hana-db/recovery-services-vaults.png)
-
-4. On the **Recovery Services** vaults dashboard, select **Add**.
+The Recovery Services vault is now created.
- ![Add Recovery Services vault](./media/tutorial-backup-sap-hana-db/add-vault.png)
+## Enable Cross Region Restore
- The **Recovery Services vault** dialog box opens. Provide values for the **Name, Subscription, Resource group,** and **Location**
+At the Recovery Services vault, you can enable Cross Region Restore. You must turn on Cross Region Restore before you configure and protect backups on your HANA databases. Learn about [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore).
- ![Create Recovery Services vault](./media/tutorial-backup-sap-hana-db/create-vault.png)
+[Learn more](./backup-azure-recovery-services-vault-overview.md) about Cross Region Restore.
- * **Name**: The name is used to identify the Recovery Services vault and must be unique to the Azure subscription. Specify a name that has at least two, but not more than 50 characters. The name must start with a letter and consist only of letters, numbers, and hyphens. For this tutorial, we've used the name **SAPHanaVault**.
- * **Subscription**: Choose the subscription to use. If you're a member of only one subscription, you'll see that name. If you're not sure which subscription to use, use the default (suggested) subscription. There are multiple choices only if your work or school account is associated with more than one Azure subscription. Here, we've used the **SAP HANA solution lab subscription** subscription.
- * **Resource group**: Use an existing resource group or create a new one. Here, we've used **SAPHANADemo**.<br>
- To see the list of available resource groups in your subscription, select **Use existing**, and then select a resource from the drop-down list box. To create a new resource group, select **Create new** and enter the name. For complete information about resource groups, see [Azure Resource Manager overview](../azure-resource-manager/management/overview.md).
- * **Location**: Select the geographic region for the vault. The vault must be in the same region as the Virtual Machine running SAP HANA. We've used **East US 2**.
+## Discover the databases
-5. Select **Review + Create**.
+1. In the Azure portal, go to **Backup center** and click **+Backup**.
- ![Select Review & Create](./media/tutorial-backup-sap-hana-db/review-create.png)
+ :::image type="content" source="./media/backup-azure-sap-hana-database/backup-center-configure-inline.png" alt-text="Screenshot showing to start checking for SAP HANA databases." lightbox="./media/backup-azure-sap-hana-database/backup-center-configure-expanded.png":::
-The Recovery Services vault is now created.
+1. Select **SAP HANA in Azure VM** as the datasource type, select a Recovery Services vault to use for backup, and then click **Continue**.
-## Enable Cross Region Restore
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-select-vault.png" alt-text="Screenshot showing to select an SAP HANA database in Azure VM.":::
-At the Recovery Services vault, you can enable Cross Region Restore. You must turn on Cross Region Restore before you configure and protect backups on your HANA databases. Learn about [how to turn on Cross Region Restore](./backup-create-rs-vault.md#set-cross-region-restore).
+1. Select **Start Discovery**. This initiates discovery of unprotected Linux VMs in the vault region.
-[Learn more](./backup-azure-recovery-services-vault-overview.md) about Cross Region Restore.
+ * After discovery, unprotected VMs appear in the portal, listed by name and resource group.
+ * If a VM isn't listed as expected, check whether it's already backed up in a vault.
+ * Multiple VMs can have the same name but they belong to different resource groups.
-## Discover the databases
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-discover-databases.png" alt-text="Screenshot showing to select Start Discovery.":::
-1. In the vault, in **Getting Started**, select **Backup**. In **Where is your workload running?**, select **SAP HANA in Azure VM**.
-2. Select **Start Discovery**. This initiates discovery of unprotected Linux VMs in the vault region. You'll see the Azure VM that you want to protect.
-3. In **Select Virtual Machines**, select the link to download the script that provides permissions for the Azure Backup service to access the SAP HANA VMs for database discovery.
-4. Run the script on the VM hosting SAP HANA database(s) that you want to back up.
-5. After running the script on the VM, in **Select Virtual Machines**, select the VM. Then select **Discover DBs**.
-6. Azure Backup discovers all SAP HANA databases on the VM. During discovery, Azure Backup registers the VM with the vault, and installs an extension on the VM. No agent is installed on the database.
+1. In **Select Virtual Machines**, select the link to download the script that provides permissions for the Azure Backup service to access the SAP HANA VMs for database discovery.
+1. Run the script on each VM hosting SAP HANA databases that you want to back up.
+1. After running the script on the VMs, in **Select Virtual Machines**, select the VMs. Then select **Discover DBs**.
+1. Azure Backup discovers all SAP HANA databases on the VM. During discovery, Azure Backup registers the VM with the vault, and installs an extension on the VM. No agent is installed on the database.
- ![Discover the databases](./media/tutorial-backup-sap-hana-db/database-discovery.png)
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-select-virtual-machines-inline.png" alt-text="Screenshot showing the discovered SAP HANA databases." lightbox="./media/backup-azure-sap-hana-database/hana-select-virtual-machines-expanded.png":::
## Configure backup
-Now that the databases we want to back up are discovered, let's enable backup.
-
-1. Select **Configure Backup**.
+Now enable backup.
- ![Configure backup](./media/tutorial-backup-sap-hana-db/configure-backup.png)
+1. In Step 2, select **Configure Backup**.
-2. In **Select items to back up**, select one or more databases that you want to protect, and then select **OK**.
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-configure-backups.png" alt-text="Screenshot showing to configure Backup.":::
- ![Select items to back up](./media/tutorial-backup-sap-hana-db/select-items-to-backup.png)
+2. In **Select items to back up**, select all the databases you want to protect > **OK**.
-3. In **Backup Policy > Choose backup policy**, create a new backup policy for the database(s), in accordance with the instructions in the next section.
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-select-databases-inline.png" alt-text="Screenshot showing to select databases to back up." lightbox="./media/backup-azure-sap-hana-database/hana-select-databases-expanded.png":::
- ![Choose backup policy](./media/tutorial-backup-sap-hana-db/backup-policy.png)
+3. In **Backup Policy** > **Choose backup policy**, create a new backup policy for the databases, in accordance with the instructions below.
-4. After creating the policy, on the **Backup menu**, select **Enable backup**.
+ :::image type="content" source="./media/backup-azure-sap-hana-database/hana-policy-summary.png" alt-text="Screenshot showing to choose backup policy.":::
- ![Select Enable backup](./media/tutorial-backup-sap-hana-db/enable-backup.png)
+4. After creating the policy, on the **Backup** menu, select **Enable backup**.
-5. Track the backup configuration progress in the **Notifications** area of the portal.
+ ![Screenshot showing how to enable backup.](./media/backup-azure-sap-hana-database/enable-backup.png)
## Creating a backup policy
A backup policy defines when backups are taken, and how long they're retained.
* A policy is created at the vault level. * Multiple vaults can use the same backup policy, but you must apply the backup policy to each vault.
+>[!NOTE]
+>Azure Backup doesn't automatically adjust for daylight saving time changes when backing up an SAP HANA database running in an Azure VM.
+>
+>Modify the policy manually as needed.
+ Specify the policy settings as follows: 1. In **Policy name**, enter a name for the new policy. In this case, enter **SAPHANA**.
cognitive-services Audio Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/audio-processing-overview.md
Previously updated : 09/17/2021 Last updated : 12/27/2021 # Audio processing
-Audio processing refers to enhancements applied to a stream of audio with a goal of improving the audio quality. A set of enhancements combined are often called an audio processing stack. The goal of improving audio quality can be further segmented into different scenarios like speech processing and telecommunications. Examples of common enhancements include automatic gain control (AGC), noise suppression, and acoustic echo cancellation (AEC).
-
-Different scenarios/use-cases require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
-
-## Microsoft Audio Stack
- The Microsoft Audio Stack is a set of enhancements optimized for speech processing scenarios. This includes examples like keyword recognition and speech recognition. It consists of various enhancements/components that operate on the input audio signal: * **Noise suppression** - Reduce the level of background noise.
The Microsoft Audio Stack is a set of enhancements optimized for speech processi
[ ![Block diagram of Microsoft Audio Stack's enhancements.](media/audio-processing/mas-block-diagram.png) ](media/audio-processing/mas-block-diagram.png#lightbox)
-The Microsoft Audio Stack powers a wide range of Microsoft's products:
+Different scenarios/use-cases require different optimizations that influence the behavior of the audio processing stack. For example, in telecommunications scenarios such as telephone calls, it is acceptable to have minor distortions in the audio signal after processing has been applied. This is because humans can continue to understand the speech with high accuracy. However, it is unacceptable and disruptive for a person to hear their own voice in an echo. This contrasts with speech processing scenarios, where distorted audio can adversely impact a machine-learned speech recognition model's accuracy, but it is acceptable to have minor levels of echo residual.
+
+Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception is the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
+
+The Microsoft Audio Stack also powers a wide range of Microsoft products:
* **Windows** - Microsoft Audio Stack is the default speech processing pipeline when using the Speech audio category.
* **Microsoft Teams Displays and Microsoft Teams Room devices** - Microsoft Teams Displays and Teams Room devices use the Microsoft Audio Stack to enable high-quality hands-free, voice-based experiences with Cortana.
-### Pricing
+## Speech SDK integration
-There is no cost to using the Microsoft Audio Stack with the Speech SDK.
+The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. Some of the key Microsoft Audio Stack features available via the Speech SDK include:
+* **Real-time microphone input and file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
+* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
+* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
+* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
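
For example, keeping the default enhancements while disabling acoustic echo cancellation might look like the following C# sketch. This is a minimal illustration using the SDK's `AudioProcessingOptions`; the key and region strings are placeholders.

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Keep the default enhancements, but disable acoustic echo cancellation,
// e.g., when the device renders no output audio that needs to be suppressed.
var audioProcessingOptions = AudioProcessingOptions.Create(
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT |
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION);

var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput);
```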
-### Minimum requirements to use Microsoft Audio Stack
+## Minimum requirements to use Microsoft Audio Stack
Microsoft Audio Stack can be used by any product or application that can meet the following requirements:
* **Raw audio** - Microsoft Audio Stack requires raw (i.e., unprocessed) audio as input to yield the best results. Providing audio that is already processed limits the audio stack's ability to perform enhancements at high quality.
* **Microphone geometries** - Geometry information about each microphone on the device is required to correctly perform all enhancements offered by the Microsoft Audio Stack. Information includes the number of microphones, their physical arrangement, and coordinates. Up to 16 input microphone channels are supported.
* **Loopback or reference audio** - An audio channel that represents the audio being played out of the device is required to perform acoustic echo cancellation.
-* **Input format** - Microsoft Audio Stack supports downsampling for sample rates that are integral multiples of 16 kHz. A minimum sampling rate of 16 kHz is required. Additionally, the following formats are supported: 32-bit IEEE little endian float, 32-bit little endian signed int, 24-bit little endian signed int, 16-bit little endian signed int, and 8-bit signed int.
+* **Input format** - Microsoft Audio Stack supports downsampling for sample rates that are integral multiples of 16 kHz. A minimum sampling rate of 16 kHz is required. Additionally, the following formats are supported: 32-bit IEEE little endian float, 32-bit little endian signed int, 24-bit little endian signed int, 16-bit little endian signed int, and 8-bit signed int.
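
To make the format requirement concrete, here is a minimal C# sketch that declares one of the supported formats (16 kHz, 16-bit, mono little endian signed int PCM) for a push stream; it illustrates the requirement rather than a complete capture pipeline.

```csharp
using Microsoft.CognitiveServices.Speech.Audio;

// One supported input format: 16 kHz sample rate, 16 bits per sample, 1 channel.
var format = AudioStreamFormat.GetWaveFormatPCM(16000, 16, 1);
var pushStream = AudioInputStream.CreatePushStream(format);
var audioInput = AudioConfig.FromStreamInput(pushStream);
```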
## Next steps
-* [Learn more about the Speech SDK integration of Microsoft Audio Stack.](audio-processing-speech-sdk.md)
+[Use the Speech SDK for audio processing](audio-processing-speech-sdk.md)
cognitive-services Audio Processing Speech Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/audio-processing-speech-sdk.md
Title: Audio processing with Speech SDK - Speech service
+ Title: Using the Microsoft Audio Stack (MAS) - Speech service
description: An overview of the features, capabilities, and restrictions for audio processing using the Speech Software Development Kit (SDK).
Previously updated : 09/17/2021 Last updated : 12/27/2021 ms.devlang: cpp, csharp, java
-# Audio processing with Speech SDK
+# Using the Microsoft Audio Stack (MAS)
-The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. The minimum requirements from [Minimum requirements to use Microsoft Audio Stack](audio-processing-overview.md#minimum-requirements-to-use-microsoft-audio-stack) apply.
+The Speech SDK integrates Microsoft Audio Stack (MAS), allowing any application or product to use its audio processing capabilities on input audio. See the [Audio processing](audio-processing-overview.md) documentation for an overview.
-Key features made available via Speech SDK APIs include:
-* **Realtime microphone input & file input** - Microsoft Audio Stack processing can be applied to real-time microphone input, streams, and file-based input.
-* **Selection of enhancements** - To allow for full control of your scenario, the SDK allows you to disable individual enhancements like dereverberation, noise suppression, automatic gain control, and acoustic echo cancellation. For example, if your scenario does not include rendering output audio that needs to be suppressed from the input audio, you have the option to disable acoustic echo cancellation.
-* **Custom microphone geometries** - The SDK allows you to provide your own custom microphone geometry information, in addition to supporting preset geometries like linear two-mic, linear four-mic, and circular 7-mic arrays (see more information on supported preset geometries at [Microphone array recommendations](speech-sdk-microphone.md#microphone-geometry)).
-* **Beamforming angles** - Specific beamforming angles can be provided to optimize audio input originating from a predetermined location, relative to the microphones.
+In this article, you learn how to use the Microsoft Audio Stack (MAS) through the Speech SDK.
-Processing is performed fully locally where the Speech SDK is being used. No audio data is streamed to Microsoft's cloud services for processing by the Microsoft Audio Stack. The only exception to this is for the Conversation Transcription Service, where raw audio is sent to Microsoft's cloud services for processing.
-
-## Reference channel for echo cancellation
-
-Microsoft Audio Stack requires the reference channel (also known as loopback channel) to perform echo cancellation. The source of the reference channel varies by platform:
-* **Windows** - The reference channel is automatically gathered by the Speech SDK if the `SpeakerReferenceChannel::LastChannel` option is provided when creating `AudioProcessingOptions`.
-* **Linux** - ALSA (Advanced Linux Sound Architecture) will need to be configured to provide the reference audio stream as the last channel for the audio input device that will be used. This is in addition to providing the `SpeakerReferenceChannel::LastChannel` option when creating `AudioProcessingOptions`.
-
-## Language and platform support
-
-| Language | Platform(s) | Reference docs |
-||-|-|
-| C++ | Windows, Linux | [C++ docs](/cpp/cognitive-services/speech/) |
-| C# | Windows, Linux | [C# docs](/dotnet/api/microsoft.cognitiveservices.speech) |
-| Java | Windows, Linux | [Java docs](/java/api/com.microsoft.cognitiveservices.speech) |
-
-## Sample code
-
-### Using Microsoft Audio Stack with all default options
+## Default options
This sample shows how to use MAS with all default enhancement options on input from the device's default microphone.
-#### [C#](#tab/csharp)
+### [C#](#tab/csharp)
```csharp
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
// Enable all default Microsoft Audio Stack enhancements.
var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-#### [C++](#tab/cpp)
+### [C++](#tab/cpp)
```cpp auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
```java SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-### Using Microsoft Audio Stack with a preset microphone geometry
+## Preset microphone geometry
This sample shows how to use MAS with a predefined microphone geometry on a specified audio input device. In this example:
* **Enhancement options** - The default enhancements will be applied on the input audio stream.
* **Preset geometry** - The preset geometry represents a linear 2-microphone array.
* **Audio input device** - The audio input device ID is `hw:0,1`. For more information on how to select an audio input device, see [How to: Select an audio input device with the Speech SDK](how-to-select-audio-input-devices.md).
-#### [C#](#tab/csharp)
+### [C#](#tab/csharp)
```csharp
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
// Default enhancements with a preset linear two-microphone geometry.
var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry.Linear2);
var audioInput = AudioConfig.FromMicrophoneInput("hw:0,1", audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-#### [C++](#tab/cpp)
+### [C++](#tab/cpp)
```cpp auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
auto audioInput = AudioConfig::FromMicrophoneInput("hw:0,1", audioProcessingOptions);
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
```java SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-### Using Microsoft Audio Stack with custom microphone geometry
+## Custom microphone geometry
This sample shows how to use MAS with a custom microphone geometry on a specified audio input device. In this example:
* **Enhancement options** - The default enhancements will be applied on the input audio stream.
* **Custom geometry** - A custom microphone geometry for a 7-microphone array is provided by specifying the microphone coordinates. The units for coordinates are millimeters.
* **Audio input** - The audio input is from a file, where the audio within the file is expected to be captured from an audio input device corresponding to the custom geometry specified.
-#### [C#](#tab/csharp)
+### [C#](#tab/csharp)
```csharp var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
var audioInput = AudioConfig.FromWavFileInput("katiesteve.wav", audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput); ```
-#### [C++](#tab/cpp)
+### [C++](#tab/cpp)
```cpp auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
auto audioInput = AudioConfig::FromWavFileInput("katiesteve.wav", audioProcessingOptions);
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
```java SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
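
For reference, a fuller version of the custom-geometry setup might look like the following C# sketch. The seven coordinates are illustrative placeholders in millimeters, not measurements from a specific device; substitute the values for your own array.

```csharp
using Microsoft.CognitiveServices.Speech.Audio;

// Custom 7-microphone planar geometry; coordinates are (x, y, z) in millimeters.
// These values are illustrative only.
var microphoneCoordinates = new MicrophoneCoordinates[]
{
    new MicrophoneCoordinates(0, 0, 0),
    new MicrophoneCoordinates(40, 0, 0),
    new MicrophoneCoordinates(20, -35, 0),
    new MicrophoneCoordinates(-20, -35, 0),
    new MicrophoneCoordinates(-40, 0, 0),
    new MicrophoneCoordinates(-20, 35, 0),
    new MicrophoneCoordinates(20, 35, 0)
};
var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Planar, microphoneCoordinates);
var audioProcessingOptions = AudioProcessingOptions.Create(
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT,
    microphoneArrayGeometry,
    SpeakerReferenceChannel.None);
var audioInput = AudioConfig.FromWavFileInput("katiesteve.wav", audioProcessingOptions);
```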
-### Using Microsoft Audio Stack with select enhancements
+## Select enhancements
This sample shows how to use MAS with a custom set of enhancements on the input audio. By default, all enhancements are enabled, but there are options to disable dereverberation, noise suppression, automatic gain control, and echo cancellation individually by using `AudioProcessingOptions`.
In this example:
* **Enhancement options** - Echo cancellation and noise suppression will be disabled, while all other enhancements remain enabled. * **Audio input device** - The audio input device is the default microphone of the device.
-#### [C#](#tab/csharp)
+### [C#](#tab/csharp)
```csharp
var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
// Keep default enhancements, but disable echo cancellation and noise suppression.
var audioProcessingOptions = AudioProcessingOptions.Create(AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AudioProcessingConstants.AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION);
var audioInput = AudioConfig.FromDefaultMicrophoneInput(audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-#### [C++](#tab/cpp)
+### [C++](#tab/cpp)
```cpp auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
```java SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput);
```
-### Using Microsoft Audio Stack to specify beamforming angles
+## Specify beamforming angles
This sample shows how to use MAS with a custom microphone geometry and beamforming angles on a specified audio input device. In this example:
* **Enhancement options** - The default enhancements will be applied on the input audio stream.
* **Beamforming angles** - Beamforming angles are specified to optimize for audio originating in that range. The units for angles are degrees. In the sample code below, the start angle is set to 70 degrees and the end angle is set to 110 degrees.
* **Audio input** - The audio input is from a push stream, where the audio within the stream is expected to be captured from an audio input device corresponding to the custom geometry specified.
-#### [C#](#tab/csharp)
+### [C#](#tab/csharp)
```csharp var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
var audioInput = AudioConfig.FromStreamInput(pushStream, audioProcessingOptions);
var recognizer = new SpeechRecognizer(speechConfig, audioInput); ```
-#### [C++](#tab/cpp)
+### [C++](#tab/cpp)
```cpp auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
auto audioInput = AudioConfig::FromStreamInput(pushStream, audioProcessingOptions);
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput); ```
-#### [Java](#tab/java)
+### [Java](#tab/java)
```java SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
AudioConfig audioInput = AudioConfig.fromStreamInput(pushStream, audioProcessing
SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, audioInput); ```
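
Likewise, a fuller version of the beamforming setup might look like the C# sketch below. The linear geometry, its coordinate values, and the constructor that takes start and end angles are illustrative assumptions to adapt to your own array.

```csharp
using Microsoft.CognitiveServices.Speech.Audio;

// Linear geometry (coordinates in millimeters; illustrative values), with
// beamforming optimized for sources between 70 and 110 degrees.
var microphoneCoordinates = new MicrophoneCoordinates[]
{
    new MicrophoneCoordinates(-60, 0, 0),
    new MicrophoneCoordinates(-20, 0, 0),
    new MicrophoneCoordinates(20, 0, 0),
    new MicrophoneCoordinates(60, 0, 0)
};
var microphoneArrayGeometry = new MicrophoneArrayGeometry(MicrophoneArrayType.Linear, 70, 110, microphoneCoordinates);
var audioProcessingOptions = AudioProcessingOptions.Create(
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT,
    microphoneArrayGeometry,
    SpeakerReferenceChannel.None);
var pushStream = AudioInputStream.CreatePushStream();
var audioInput = AudioConfig.FromStreamInput(pushStream, audioProcessingOptions);
```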
+## Reference channel for echo cancellation
+
+Microsoft Audio Stack requires the reference channel (also known as loopback channel) to perform echo cancellation. The source of the reference channel varies by platform:
+* **Windows** - The reference channel is automatically gathered by the Speech SDK if the `SpeakerReferenceChannel::LastChannel` option is provided when creating `AudioProcessingOptions`.
+* **Linux** - ALSA (Advanced Linux Sound Architecture) will need to be configured to provide the reference audio stream as the last channel for the audio input device that will be used. This is in addition to providing the `SpeakerReferenceChannel::LastChannel` option when creating `AudioProcessingOptions`.
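
On either platform, the option might be requested as in this minimal C# sketch; the preset two-microphone geometry here is an arbitrary example.

```csharp
using Microsoft.CognitiveServices.Speech.Audio;

// Ask the Microsoft Audio Stack to take the loopback reference
// from the last input channel for echo cancellation.
var audioProcessingOptions = AudioProcessingOptions.Create(
    AudioProcessingConstants.AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT,
    PresetMicrophoneArrayGeometry.Linear2,
    SpeakerReferenceChannel.LastChannel);
```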
+
+## Language and platform support
+
+| Language | Platform(s) | Reference docs |
+||-|-|
+| C++ | Windows, Linux | [C++ docs](/cpp/cognitive-services/speech/) |
+| C# | Windows, Linux | [C# docs](/dotnet/api/microsoft.cognitiveservices.speech) |
+| Java | Windows, Linux | [Java docs](/java/api/com.microsoft.cognitiveservices.speech) |
+
+## Next steps
+[Set up the development environment](quickstarts/setup-platform.md)
cognitive-services Get Started Intent Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-intent-recognition.md
Previously updated : 05/04/2021 Last updated : 01/08/2022 ms.devlang: cpp, csharp, java, javascript, python
cognitive-services Get Started Speaker Recognition https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speaker-recognition.md
Previously updated : 09/02/2020 Last updated : 01/08/2022 ms.devlang: cpp, csharp, javascript
keywords: speaker recognition, voice biometry
## Next steps
-* See the Speaker Recognition [reference documentation](/rest/api/speakerrecognition/) for detail on classes and functions.
-
-* See [C#](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/dotnet/speaker-recognition) and [C++](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/cpp/windows/speaker-recognition) samples on GitHub.
+> [!div class="nextstepaction"]
+> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
cognitive-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speech-to-text.md
Previously updated : 09/15/2020 Last updated : 01/08/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
keywords: speech to text, speech to text software
## Next steps
-* [Use codec compressed audio formats](how-to-use-codec-compressed-audio-input-streams.md)
-* See the [quickstart samples](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart) on GitHub
+> [!div class="nextstepaction"]
+> [See the quickstart samples on GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart)
cognitive-services Get Started Speech Translation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-speech-translation.md
Previously updated : 09/01/2020 Last updated : 01/08/2022 ms.devlang: cpp, csharp, java, javascript, python
cognitive-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/get-started-text-to-speech.md
Previously updated : 05/17/2021 Last updated : 01/08/2022 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/language-support.md
Previously updated : 01/07/2021 Last updated : 01/07/2022
The following table lists the prebuilt neural voices supported in each language. You
> The English (United Kingdom) voice `en-GB-MiaNeural` retired on **30 October 2021**. Since then, all service requests to `en-GB-MiaNeural` are automatically redirected to `en-GB-SoniaNeural`.
> If you are using the Neural TTS container, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from **30 October 2021**, all requests with previous versions are rejected.
-#### Prebuilt neural voices in preview
+### Prebuilt neural voices in preview
The following neural voices are in public preview.
To learn how you can configure and adjust neural voices, such as Speaking Styles
> [!TIP]
> You can continue to use the full service name mapping like "Microsoft Server Speech Text to Speech Voice (en-US, AriaNeural)" in your speech synthesis requests.
+### Voice styles and roles
+
+In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
+
+To learn how you can configure and adjust neural voice styles and roles, see [Speech Synthesis Markup Language](speech-synthesis-markup.md#adjust-speaking-styles).
+
+Use this table to determine supported styles and roles for each neural voice.
+
+|Voice|Styles|Style degree|Roles|
+|--|--|--|--|
+|en-US-AriaNeural|`chat`, `cheerful`, `customerservice`, `empathetic`, `narration-professional`, `newscast-casual`, `newscast-formal`|||
+|en-US-GuyNeural|`newscast`|||
+|en-US-JennyNeural|`assistant`, `chat`,`customerservice`, `newscast`|||
+|en-US-SaraNeural|`angry`, `cheerful`, `sad`|||
+|ja-JP-NanamiNeural|`chat`, `cheerful`, `customerservice`|||
+|pt-BR-FranciscaNeural|`calm`|||
+|zh-CN-XiaohanNeural|`affectionate`, `angry`, `cheerful`, `customerservice`, `disgruntled`, `embarrassed`, `fearful`, `gentle`, `sad`, `serious`|Supported|Supported|
+|zh-CN-XiaomoNeural|`angry`, `calm`, `cheerful`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported|Supported|
+|zh-CN-XiaoruiNeural|`angry`, `fearful`, `sad`|Supported||
+|zh-CN-XiaoshuangNeural|`chat`|Supported||
+|zh-CN-XiaoxiaoNeural|`affectionate`, `angry`, `assistant`, `calm`, `chat`, `cheerful`, `customerservice`, `fearful`, `gentle`, `lyrical`, `newscast`, `sad`, `serious`|Supported||
+|zh-CN-XiaoxuanNeural|`angry`, `calm`, `cheerful`, `customerservice`, `depressed`, `disgruntled`, `fearful`, `gentle`, `serious`|Supported||
+|zh-CN-YunxiNeural|`angry`, `assistant`, `cheerful`, `customerservice`, `depressed`, `disgruntled`, `embarrassed`, `fearful`, `sad`, `serious`|Supported|Supported|
+|zh-CN-YunyangNeural|`customerservice`|Supported||
+|zh-CN-YunyeNeural|`angry`, `calm`, `cheerful`, `disgruntled`, `fearful`, `sad`, `serious`|Supported|Supported|
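
As a hedged end-to-end illustration, the following C# sketch synthesizes SSML that combines a style, style degree, and role for one voice from the table; the phrasing and attribute values are arbitrary examples.

```csharp
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class StyleDemo
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var synthesizer = new SpeechSynthesizer(config);

        // zh-CN-XiaomoNeural supports styles, style degree, and roles per the table above.
        string ssml =
            "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' " +
            "xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='zh-CN'>" +
            "<voice name='zh-CN-XiaomoNeural'>" +
            "<mstts:express-as style='cheerful' styledegree='1.5' role='YoungAdultFemale'>" +
            "你好，欢迎！" + // "Hello, welcome!"
            "</mstts:express-as></voice></speak>";

        var result = await synthesizer.SpeakSsmlAsync(ssml);
    }
}
```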
### Custom neural voice

Custom neural voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data.
cognitive-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/regions.md
Title: Regions - Speech service
description: A list of available regions and endpoints for the Speech service, including speech-to-text, text-to-speech, and speech translation. -+ Previously updated : 10/13/2021- Last updated : 01/08/2022+
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Title: Release notes - Speech Service
description: A running log of Speech Service feature releases, improvements, bug fixes, and known issues. -+++
cognitive-services Speaker Recognition Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md
Previously updated : 09/02/2020 Last updated : 01/08/2022 keywords: speaker recognition, voice biometry
As with all of the Cognitive Services resources, developers who use the Speaker
| What scenarios can Speaker Recognition be used for? | Call center customer verification, voice-based patient check-in, meeting transcription, multi-user device personalization|
| What is the difference between Identification and Verification? | Identification is the process of detecting which member from a group of speakers is speaking. Verification is the act of confirming that a speaker matches a known, or **enrolled** voice.|
| What's the difference between text-dependent and text-independent verification? | Text-dependent verification requires a specific pass-phrase for both enrollment and recognition. Text-independent verification requires a longer voice sample that must start with a particular activation phrase for enrollment, but anything can be spoken, including during recognition.|
-| What languages are supported? | English, French, Spanish, Chinese, German, Italian, Japanese, and Portuguese |
-| What Azure regions are supported? | Speaker Recognition is a preview service, and currently only available in the West US region.|
+| What languages are supported? | See [Speaker recognition language support](language-support.md#speaker-recognition) |
+| What Azure regions are supported? | See [Speaker recognition region support](regions.md#speaker-recognition)|
| What audio formats are supported? | Mono 16 bit, 16 kHz PCM-encoded WAV |
| **Accept** and **Reject** responses aren't accurate, how do you tune the threshold? | Since the optimal threshold varies highly with scenarios, the service decides whether to accept or reject based on a default threshold of 0.5. You should override the default decision and fine-tune the result based on your own scenario. |
| Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. |
cognitive-services Speech Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-devices.md
Previously updated : 11/18/2021 Last updated : 12/27/2021 # Speech devices overview
-The [Speech service](overview.md) works with a wide variety of devices and audio sources. Now, you can take your speech applications to the next level with matched hardware and software.
+The [Speech service](overview.md) works with a wide variety of devices and audio sources. You can use the default audio processing available on a device, or you can use the Speech SDK's advanced audio processing algorithms, which are designed to work well with the Speech service. These algorithms provide accurate far-field [speech recognition](speech-to-text.md) via noise suppression, echo cancellation, beamforming, and dereverberation.
-The Speech SDK can help you:
--- Rapidly test new voice scenarios.-- More easily integrate the cloud-based Speech service into your device.-- Create an exceptional user experience for your customers.-
-The Speech SDK uses our advanced audio processing algorithms with the device's microphone array to send the audio to the [Speech service](overview.md). It provides accurate far-field [speech recognition](speech-to-text.md) via noise suppression, echo cancellation, beamforming, and dereverberation.
-
-You can also use the Speech SDK to build ambient devices that have your own [customized keyword](./custom-keyword-basics.md). A Custom Keyword provides a cue that starts a user interaction which is unique to your brand.
-
-The Speech SDK enables a variety of voice-enabled scenarios, such as [voice assistants](./voice-assistants.md), drive-thru ordering systems, [conversation transcription](./conversation-transcription.md), and smart speakers. You can respond to users with text, speak back to them in a default or [custom voice](./how-to-custom-voice-create-voice.md), provide search results, [translate](speech-translation.md) to other languages, and more. We look forward to seeing what you build!
+## Audio processing
+Audio processing refers to enhancements applied to a stream of audio to improve the audio quality. Examples of common enhancements include automatic gain control (AGC), noise suppression, and acoustic echo cancellation (AEC). The Speech SDK integrates [Microsoft Audio Stack (MAS)](audio-processing-overview.md), allowing any application or product to use its audio processing capabilities on input audio.
+## Microphone array recommendations
+The Speech SDK works best with a microphone array that has been designed according to our recommended guidelines. For details, see [Microphone array recommendations](speech-sdk-microphone.md).
## Device development kits

The Speech SDK is designed to work with purpose-built development kits and varying microphone array configurations. For example, you can use one of these Azure development kits.

-- [Azure Percept DK](../../azure-percept/overview-azure-percept-dk.md) contains a preconfigured audio processor and a four-microphone linear array and audio processing via XMOS Codec. You can use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services.
+- [Azure Percept DK](../../azure-percept/overview-azure-percept-dk.md) contains a preconfigured audio processor and a four-microphone linear array. You can use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services.
- [Azure Kinect DK](../../kinect-dk/about-azure-kinect-dk.md) is a spatial computing developer kit with advanced AI sensors that provide sophisticated computer vision and speech models. As an all-in-one small device with multiple modes, it contains a depth sensor, spatial microphone array with a video camera, and orientation sensor. ## Next steps
cognitive-services Speech Sdk Microphone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-sdk-microphone.md
Previously updated : 07/16/2019 Last updated : 12/27/2021 # Microphone array recommendations
-In this article, you learn how to design a microphone array for the Speech SDK.
+In this article, you learn how to design a microphone array customized for use with the Speech SDK. This is most pertinent if you are selecting, specifying, or building hardware for speech solutions.
-The Speech SDK works best with a microphone array that has been designed according to the following guidelines, including the microphone geometry and component selection. Guidance is also given on integration and electrical considerations.
+The Speech SDK works best with a microphone array that has been designed according to these guidelines, including the microphone geometry, component selection, and architecture.
## Microphone geometry

The following array geometries are recommended for use with the Microsoft Audio Stack. Location of sound sources and rejection of ambient noise improve with a greater number of microphones, with dependencies on specific applications, user scenarios, and the device form factor.
-| Mics & Geometry | Circular Array | Circular Array | Linear Array | Linear Array |
-| | -- | | | |
-| | <img src="media/speech-devices-sdk/7-mic-c.png" alt="7 mic circular array" width="150"/> | <img src="media/speech-devices-sdk/4-mic-c.png" alt="4 mic circular array" width="150"/> | <img src="media/speech-devices-sdk/4-mic-l.png" alt="4 mic linear array" width="150"/> | <img src="media/speech-devices-sdk/2-mic-l.png" alt="2 mic linear array" width="150"/> |
-| \# Mics | 7 | 4 | 4 | 2 |
-| Geometry | 6 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced | 3 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced | Length = 120 mm, Spacing = 40 mm | Spacing = 40 mm |
+| Array |Microphones| Geometry |
+| -- | -- | -- |
+|Circular - 7 Microphones|<img src="media/speech-devices-sdk/7-mic-c.png" alt="7 mic circular array" width="150"/>|6 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
+|Circular - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-c.png" alt="4 mic circular array" width="150"/>|3 Outer, 1 Center, Radius = 42.5 mm, Evenly Spaced|
+|Linear - 4 Microphones|<img src="media/speech-devices-sdk/4-mic-l.png" alt="4 mic linear array" width="150"/>|Length = 120 mm, Spacing = 40 mm|
+|Linear - 2 Microphones|<img src="media/speech-devices-sdk/2-mic-l.png" alt="2 mic linear array" width="150"/>|Spacing = 40 mm|
-Microphone channels should be ordered according to the numbering depicted for each above array, increasing from 0. The Microsoft Audio Stack will require an additional reference stream of audio playback to perform echo cancellation.
+Microphone channels should be ordered ascending from 0, according to the numbering depicted above for each array. The Microsoft Audio Stack will require an additional reference stream of audio playback to perform echo cancellation.
## Component selection
The following guidelines for architecture are necessary when integrating microph
## Electrical architecture considerations
-Where applicable, arrays may be connected to a USB host (such as a SoC that runs the Microsoft Audio Stack) and interfaces to Speech services or other applications.
+Where applicable, arrays may be connected to a USB host (such as a SoC that runs the [Microsoft Audio Stack (MAS)](audio-processing-overview.md)) and interfaces to Speech services or other applications.
Hardware components such as PDM-to-TDM conversion should ensure that the dynamic range and SNR of the microphones are preserved within re-samplers.
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Previously updated : 03/23/2020 Last updated : 01/07/2022 ms.devlang: cpp, csharp, java, javascript, objective-c, python
Within the `speak` element, you can specify multiple voices for Text-to-Speech o
## Adjust speaking styles
-By default, the Text-to-Speech service synthesizes text using a neutral speaking style for neural voices. You can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscast, and voice assistant, using the `mstts:express-as` element. This is an optional element unique to the Speech service.
-
-Currently, speaking style adjustments are supported for the following neural voices:
-* `en-US-AriaNeural`
-* `en-US-JennyNeural`
-* `en-US-GuyNeural`
-* `en-US-SaraNeural`
-* `ja-JP-NanamiNeural`
-* `pt-BR-FranciscaNeural`
-* `zh-CN-XiaoxiaoNeural`
-* `zh-CN-YunyangNeural`
-* `zh-CN-YunyeNeural`
-* `zh-CN-YunxiNeural`
-* `zh-CN-XiaohanNeural`
-* `zh-CN-XiaomoNeural`
-* `zh-CN-XiaoxuanNeural`
-* `zh-CN-XiaoruiNeural`
-* `zh-CN-XiaoshuangNeural`
+By default, the Text-to-Speech service synthesizes text using a neutral speaking style for neural voices. You can adjust the speaking style, style degree, and role at the sentence level.
-The intensity of speaking style can be further changed to better fit your use case. You can specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Currently, speaking style adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
+Styles, style degree, and roles are supported for a subset of neural voices. If a style or role isn't supported, the service will use the default neutral speech. There are multiple ways to determine what styles and roles are supported for each voice.
+- The [Voice styles and roles](language-support.md#voice-styles-and-roles) table
+- The [voice list API](rest-text-to-speech.md#get-a-list-of-voices)
+- The code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) portal
-Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Currently, role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
-* `zh-CN-XiaomoNeural`
-* `zh-CN-XiaoxuanNeural`
-* `zh-CN-YunxiNeural`
-* `zh-CN-YunyeNeural`
+| Attribute | Description | Required / Optional |
+|--|-||
+| `style` | Specifies the speaking style. Speaking styles are voice-specific. | Required if adjusting the speaking style for a neural voice. If using `mstts:express-as`, then style must be provided. If an invalid value is provided, this element will be ignored. |
+| `styledegree` | Specifies the intensity of the speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional. If you don't set the `style` attribute, `styledegree` is ignored. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.|
+| `role` | Specifies the speaking role-play. The voice will act as a different age and gender, but the voice name won't be changed. | Optional. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices: `zh-CN-XiaomoNeural`, `zh-CN-XiaoxuanNeural`, `zh-CN-YunxiNeural` and `zh-CN-YunyeNeural`.|
-Above changes are applied at the sentence level, and styles and role-plays vary by voice. If a style or role-play isn't supported, the service will return speech in the default neutral speaking way. You can see what styles and roles are supported for each voice through the [voice list API](rest-text-to-speech.md#get-a-list-of-voices) or through the code-free [Audio Content Creation](https://aka.ms/audiocontentcreation) platform.
+
+### Style
+
+You use the `mstts:express-as` element to express different emotions like cheerfulness, empathy, and calm, or optimize the voice for different scenarios like customer service, newscast, and voice assistant.
**Syntax**

```xml
<mstts:express-as style="string"></mstts:express-as>
```
-```xml
-<mstts:express-as style="string" styledegree="value"></mstts:express-as>
-```
-```xml
-<mstts:express-as role="string" style="string"></mstts:express-as>
-```
-> [!NOTE]
-> At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices. `role` only supports zh-CN-XiaomoNeural, zh-CN-XiaoxuanNeural, zh-CN-YunxiNeural, and zh-CN-YunyeNeural.
-
-**Attributes**
-
-| Attribute | Description | Required / Optional |
-|--|-||
-| `style` | Specifies the speaking style. Currently, speaking styles are voice-specific. | Required if adjusting the speaking style for a neural voice. If using `mstts:express-as`, then style must be provided. If an invalid value is provided, this element will be ignored. |
-| `styledegree` | Specifies the intensity of speaking style. **Accepted values**: 0.01 to 2 inclusive. The default value is 1, which means the predefined style intensity. The minimum unit is 0.01, which results in a slight tendency for the target style. A value of 2 results in a doubling of the default style intensity. | Optional (At the moment, `styledegree` only supports Chinese (Mandarin, Simplified) neural voices.)|
-| `role` | Specifies the speaking role-play. The voice will act as a different age and gender, but the voice name won't be changed. | Optional (At the moment, `role` only supports zh-CN-XiaomoNeural, zh-CN-XiaoxuanNeural, zh-CN-YunxiNeural, and zh-CN-YunyeNeural.)|
-
-Use this table to determine which speaking styles are supported for each neural voice.
-
-| Voice | Style | Description |
-|-||-|
-| `en-US-AriaNeural` | `style="newscast-formal"` | Expresses a formal, confident, and authoritative tone for news delivery |
-| | `style="newscast-casual"` | Expresses a versatile and casual tone for general news delivery |
-| | `style="narration-professional"` | Express a professional, objective tone for content reading |
-| | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
-| | `style="chat"` | Expresses a casual and relaxed tone |
-| | `style="cheerful"` | Expresses a positive and happy tone |
-| | `style="empathetic"` | Expresses a sense of caring and understanding |
-| `en-US-JennyNeural` | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
-| | `style="chat"` | Expresses a casual and relaxed tone |
-| | `style="assistant"` | Expresses a warm and relaxed tone for digital assistants |
-| | `style="newscast"` | Expresses a versatile and casual tone for general news delivery |
-| `en-US-GuyNeural` | `style="newscast"` | Expresses a formal and professional tone for narrating news |
-| `en-US-SaraNeural` | `style="cheerful"` | Expresses a positive and happy tone |
-| | `style="sad"` | Expresses a sorrowful tone |
-| | `style="angry"` | Expresses an angry and annoyed tone |
-| `ja-JP-NanamiNeural` | `style="cheerful"` | Expresses a positive and happy tone |
-| | `style="chat"` | Expresses a casual and relaxed tone |
-| | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
-| `pt-BR-FranciscaNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
-| `zh-CN-XiaoxiaoNeural` | `style="newscast"` | Expresses a formal and professional tone for narrating news |
-| | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
-| | `style="assistant"` | Expresses a warm and relaxed tone for digital assistants |
-| | `style="chat"` | Expresses a casual and relaxed tone for chit-chat |
-| | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
-| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| | `style="affectionate"` | Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The "personality" of the speaker is often endearing in nature. |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| | `style="lyrical"` | Expresses emotions in a melodic and sentimental way |
-| `zh-CN-YunyangNeural` | `style="customerservice"` | Expresses a friendly and helpful tone for customer support |
-| `zh-CN-YunyeNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
-| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| `zh-CN-YunxiNeural` | `style="assistant"` | Expresses a warm and relaxed tone for digital assistants |
-| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
-| | `style="embarrassed"` | Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable |
-| `zh-CN-XiaohanNeural` | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| | `style="embarrassed"` | Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable |
-| | `style="affectionate"` | Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The "personality" of the speaker is often endearing in nature. |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaomoNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
-| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaoxuanNeural` | `style="calm"` | Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech. |
-| | `style="cheerful"` | Expresses an upbeat and enthusiastic tone, with higher pitch and vocal energy |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| | `style="disgruntled"` | Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt. |
-| | `style="serious"` | Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence. |
-| | `style="depressed"` | Expresses a melancholic and despondent tone with lower pitch and energy |
-| | `style="gentle"` | Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy |
-| `zh-CN-XiaoruiNeural` | `style="sad"` | Expresses a sorrowful tone, with higher pitch, less intensity, and lower vocal energy. Common indicators of this emotion would be whimpers or crying during speech. |
-| | `style="angry"` | Expresses an angry and annoyed tone, with lower pitch, higher intensity, and higher vocal energy. The speaker is in a state of being irate, displeased, and offended. |
-| | `style="fearful"` | Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness. |
-| `zh-CN-XiaoshuangNeural` | `style="chat"` | Expresses a casual and relaxed tone. |
-
-Use this table to check the supported roles and their definitions.
-
-|Role | Description |
-|-|-|
-|`role="Girl"` | The voice imitates to a girl. |
-|`role="Boy"` | The voice imitates to a boy. |
-|`role="YoungAdultFemale"`| The voice imitates to a young adult female.|
-|`role="YoungAdultMale"` | The voice imitates to a young adult male.|
-|`role="OlderAdultFemale"`| The voice imitates to an older adult female.|
-|`role="OlderAdultMale"` | The voice imitates to an older adult male.|
-|`role="SeniorFemale"` | The voice imitates to a senior female.|
-|`role="SeniorMale"` | The voice imitates to a senior male.|
- **Example**
This SSML snippet illustrates how the `<mstts:express-as>` element is used to ch
</speak> ```
-This SSML snippet illustrates how the `styledegree` attribute is used to change the intensity of speaking style for XiaoxiaoNeural.
+The table below has descriptions of each supported style.
+
+|Style|Description|
+|--|-|
+|`style="affectionate"`|Expresses a warm and affectionate tone, with higher pitch and vocal energy. The speaker is in a state of attracting the attention of the listener. The personality of the speaker is often endearing in nature.|
+|`style="angry"`|Expresses an angry and annoyed tone.|
+|`style="assistant"`|Expresses a warm and relaxed tone for digital assistants.|
+|`style="calm"`|Expresses a cool, collected, and composed attitude when speaking. Tone, pitch, prosody is much more uniform compared to other types of speech.|
+|`style="chat"`|Expresses a casual and relaxed tone.|
+|`style="cheerful"`|Expresses a positive and happy tone.|
+|`style="customerservice"`|Expresses a friendly and helpful tone for customer support.|
+|`style="depressed"`|Expresses a melancholic and despondent tone with lower pitch and energy.|
+|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.|
+|`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.|
+|`style="empathetic"`|Expresses a sense of caring and understanding.|
+|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tenseness and uneasiness.|
+|`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.|
+|`style="lyrical"`|Expresses emotions in a melodic and sentimental way.|
+|`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
+|`style="newscast"`|Expresses a formal and professional tone for narrating news.|
+|`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.|
+|`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
+|`style="sad"`|Expresses a sorrowful tone.|
+|`style="serious"`|Expresses a strict and commanding tone. Speaker often sounds stiffer and much less relaxed with firm cadence.|
++
+### Style degree
+
+The intensity of speaking style can be adjusted to better fit your use case. You specify a stronger or softer style with `styledegree` to make the speech more expressive or subdued. Speaking style degree adjustments are supported for Chinese (Mandarin, Simplified) neural voices.
+
+**Syntax**
+
+```xml
+<mstts:express-as style="string" styledegree="value"></mstts:express-as>
+```
+
+**Example**
+
+This SSML snippet illustrates how the `styledegree` attribute is used to change the intensity of speaking style for `zh-CN-XiaomoNeural`.
+ ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
This SSML snippet illustrates how the `styledegree` attribute is used to change
</speak> ```
-This SSML snippet illustrates how the `role` attribute is used to change the role-play for XiaomoNeural.
+### Role
+
+Apart from adjusting the speaking styles and style degree, you can also adjust the `role` parameter so that the voice will imitate a different age and gender. For example, a male voice can raise the pitch and change the intonation to imitate a female voice, but the voice name won't be changed. Role adjustments are supported for these Chinese (Mandarin, Simplified) neural voices:
+
+* `zh-CN-XiaomoNeural`
+* `zh-CN-XiaoxuanNeural`
+* `zh-CN-YunxiNeural`
+* `zh-CN-YunyeNeural`
+
+**Syntax**
+
+```xml
+<mstts:express-as role="string" style="string"></mstts:express-as>
+```
+
+**Example**
+
+This SSML snippet illustrates how the `role` attribute is used to change the role-play for `zh-CN-XiaomoNeural`.
+ ```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
This SSML snippet illustrates how the `role` attribute is used to change the rol
</speak> ```
+The table below has descriptions of each supported role.
+
+|Role | Description |
+|-|-|
+|`role="Girl"` | The voice imitates a girl. |
+|`role="Boy"` | The voice imitates a boy. |
+|`role="YoungAdultFemale"`| The voice imitates a young adult female.|
+|`role="YoungAdultMale"` | The voice imitates a young adult male.|
+|`role="OlderAdultFemale"`| The voice imitates an older adult female.|
+|`role="OlderAdultMale"` | The voice imitates an older adult male.|
+|`role="SeniorFemale"` | The voice imitates a senior female.|
+|`role="SeniorMale"` | The voice imitates a senior male.|
## Adjust speaking languages
-You can adjust speaking languages for neural voices.
+You can adjust speaking languages for neural voices at the sentence level and word level.
+ Enable one voice to speak different languages fluently (like English, Spanish, and Chinese) using the `<lang xml:lang>` element. This is an optional element unique to the Speech service. Without this element, the voice will speak its primary language.
-Currently, speaking language adjustments are supported for these neural voices: `en-US-JennyMultilingualNeural`. Above changes are applied at the sentence level and word level. If a language isn't supported, the service will return no audio stream.
+
+Speaking language adjustments are only supported for the `en-US-JennyMultilingualNeural` neural voice. These adjustments are applied at the sentence level and word level. If a language isn't supported, the service returns no audio stream.
> [!NOTE]
-> Currently, the `<lang xml:lang>` element is incompatible with `prosody` and `break` element, you cannot adjust pause and prosody like pitch, contour, rate, volume in this element.
+> The `<lang xml:lang>` element is incompatible with the `prosody` and `break` elements. You can't adjust pauses or prosody attributes like pitch, contour, rate, or volume within this element.
**Syntax**
Currently, speaking language adjustments are supported for these neural voices:
| Attribute | Description | Required / Optional | |--|-||
-| `lang` | Specifies the speaking languages. Currently, speaking different languages are voice-specific. | Required if adjusting the speaking language for a neural voice. If using `lang xml:lang`, then locale must be provided. |
+| `lang` | Specifies the speaking languages. Speaking multiple languages is voice-specific. | Required if adjusting the speaking language for a neural voice. If using `lang xml:lang`, then the locale must be provided. |
-Use this table to determine which speaking languages are supported for each neural voice.
+Use this table to determine which speaking languages are supported for each neural voice. If a language isn't supported, the service will return no audio stream.
| Voice | Locale language | Description |
|---|---|---|
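For example, a sketch of a multilingual snippet for `en-US-JennyMultilingualNeural`, assuming the listed locales are among the languages this voice supports:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
    <voice name="en-US-JennyMultilingualNeural">
        <lang xml:lang="en-US">
            We look forward to seeing you soon!
        </lang>
        <lang xml:lang="es-ES">
            ¡Esperamos verte pronto!
        </lang>
        <lang xml:lang="fr-FR">
            Nous avons hâte de vous voir bientôt !
        </lang>
    </voice>
</speak>
```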
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/costs/quick-create-budget-template.md
Title: Quickstart - Create a budget with an Azure Resource Manager template
description: Quickstart showing how to Create a budget with an Azure Resource Manager template. + tags: azure-resource-manager Previously updated : 10/07/2021 Last updated : 01/07/2022 # Quickstart: Create a budget with an ARM template
-Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. When the budget thresholds you've created are exceeded, notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. This quickstart shows you how to create a budget using a Azure Resource Manager template (ARM template).
+Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. When the budget thresholds you've created are exceeded, notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. This quickstart shows you how to create a budget using three different Azure Resource Manager templates (ARM templates).
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button for one of the following templates. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget%2Fazuredeploy.json)
+| Template | Deployment button |
+| | |
+| No filter | [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget-simple%2Fazuredeploy.json) |
+| One filter | [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget-onefilter%2Fazuredeploy.json) |
+| Two or more filters | [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget%2Fazuredeploy.json) |
## Prerequisites
The following Azure permissions, or scopes, are supported per subscription for b
For more information about assigning permission to Cost Management data, see [Assign access to Cost Management data](assign-access-acm-data.md).
-## Review the template
+Use one of the following templates, based on your needs.
+
+| Template | Description |
+| | |
+| No filter | The ARM template doesn't have any filters. |
+| One filter | The ARM template has a filter for resource groups. |
+| Two or more filters | The ARM template has a filter for resource groups and a filter for meter categories. |
+
+## Review and deploy the template
+
+## [No filter](#tab/no-filter)
+
+### Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget-simple).
++
+One Azure resource is defined in the template:
+
+* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
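+
+For orientation, here's a minimal sketch of what a `Microsoft.Consumption/budgets` resource can look like inside a template. The parameter names are illustrative; the actual quickstart template defines more options:
+
+```json
+{
+  "type": "Microsoft.Consumption/budgets",
+  "apiVersion": "2019-10-01",
+  "name": "[parameters('budgetName')]",
+  "properties": {
+    "category": "Cost",
+    "amount": "[parameters('amount')]",
+    "timeGrain": "[parameters('timeGrain')]",
+    "timePeriod": {
+      "startDate": "[parameters('startDate')]",
+      "endDate": "[parameters('endDate')]"
+    },
+    "notifications": {
+      "firstNotification": {
+        "enabled": true,
+        "operator": "GreaterThan",
+        "threshold": "[parameters('firstThreshold')]",
+        "contactEmails": "[parameters('contactEmails')]"
+      }
+    }
+  }
+}
+```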
+
+### Deploy the template
+
+1. Select the following image to sign in to Azure and open a template. The template creates a budget without any filters.
+
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget-simple%2Fazuredeploy.json)
+
+2. Select or enter the following values.
+
+ :::image type="content" source="./media/quick-create-budget-template/create-budget-simple-image.png" alt-text="Resource Manager template, Create budget without a filter, deploy portal." lightbox="./media/quick-create-budget-template/create-budget-simple-image.png" :::
+
+ * **Subscription**: select an Azure subscription.
+ * **Resource group**: if required, select an existing resource group, or **Create new**.
+ * **Region**: select an Azure region. For example, **Central US**.
+ * **Budget Name**: enter a name for the budget. It should be unique within a resource group. Only alphanumeric, underscore, and hyphen characters are allowed.
+ * **Amount**: enter the total amount of cost to track with the budget.
+ * **Time Grain**: enter the time covered by a budget. Allowed values are Monthly, Quarterly, or Annually. The budget resets at the end of the time grain.
+ * **Start Date**: enter the start date with the first day of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months from today. You can specify a past start date with the Time Grain period.
+ * **End Date**: enter the end date for the budget in YYYY-MM-DD format.
+ * **First Threshold**: enter a threshold value for the first notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
+ * **Second Threshold**: enter a threshold value for the second notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
+ * **Contact Emails**: enter a list of email addresses to send the budget notification to when a threshold is exceeded. It accepts an array of strings. Expected format is `["user1@domain.com","user2@domain.com"]`.
+
+3. Depending on your Azure subscription type, do one of the following actions:
+ - Select **Review + create**.
+ - Review the terms and conditions, select **I agree to the terms and conditions stated above**, and then select **Purchase**.
+
+4. If you selected **Review + create**, your template is validated. Select **Create**.
+
+ ![Resource Manager template, budget no filters, deploy portal notification.](./media/quick-create-budget-template/resource-manager-template-portal-deployment-notification.png)
+
+The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, Azure CLI, and REST API. To learn about other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
+
+## [One filter](#tab/one-filter)
+
+### Review the template
+
+The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget-onefilter).
++
+One Azure resource is defined in the template:
+
+* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
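+
+The main difference from the no-filter template is a `filter` property on the budget resource. A sketch, assuming a parameter name like `resourceGroupFilterValues`:
+
+```json
+"filter": {
+  "dimensions": {
+    "name": "ResourceGroupName",
+    "operator": "In",
+    "values": "[parameters('resourceGroupFilterValues')]"
+  }
+}
+```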
+
+### Deploy the template
+
+1. Select the following image to sign in to Azure and open a template. The template creates a budget with a filter for resource groups.
+
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget-onefilter%2Fazuredeploy.json)
+
+2. Select or enter the following values.
+
+ :::image type="content" source="./media/quick-create-budget-template/create-budget-one-filter-image.png" alt-text="Resource Manager template, Create budget with one filter, deploy portal." lightbox="./media/quick-create-budget-template/create-budget-one-filter-image.png" :::
+
+ * **Subscription**: select an Azure subscription.
+ * **Resource group**: if required, select an existing resource group, or **Create new**.
+ * **Region**: select an Azure region. For example, **Central US**.
+ * **Budget Name**: enter a name for the budget. It should be unique within a resource group. Only alphanumeric, underscore, and hyphen characters are allowed.
+ * **Amount**: enter the total amount of cost to track with the budget.
+ * **Time Grain**: enter the time covered by a budget. Allowed values are Monthly, Quarterly, or Annually. The budget resets at the end of the time grain.
+ * **Start Date**: enter the start date with the first day of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months from today. You can specify a past start date with the Time Grain period.
+ * **End Date**: enter the end date for the budget in YYYY-MM-DD format.
+ * **First Threshold**: enter a threshold value for the first notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
+ * **Second Threshold**: enter a threshold value for the second notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
+ * **Contact Emails**: enter a list of email addresses to send the budget notification to when a threshold is exceeded. It accepts an array of strings. Expected format is `["user1@domain.com","user2@domain.com"]`.
+ * **Resource Group Filter Values**: enter a list of resource group names to filter. It accepts an array of strings. Expected format is `["Resource Group Name1","Resource Group Name2"]`. The array can't be empty.
+
+3. Depending on your Azure subscription type, do one of the following actions:
+ - Select **Review + create**.
+ - Review the terms and conditions, select **I agree to the terms and conditions stated above**, and then select **Purchase**.
+
+4. If you selected **Review + create**, your template is validated. Select **Create**.
+
+ ![Resource Manager template, budget one filter, deploy portal notification](./media/quick-create-budget-template/resource-manager-template-portal-deployment-notification.png)
+
+The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, Azure CLI, and REST API. To learn about other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
+
+## [Two or more filters](#tab/two-filters)
+
+### Review the template
The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-budget).
One Azure resource is defined in the template:
* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
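
To combine multiple filters, the budget's `filter` property takes an `and` array. A sketch, with illustrative parameter names:

```json
"filter": {
  "and": [
    {
      "dimensions": {
        "name": "ResourceGroupName",
        "operator": "In",
        "values": "[parameters('resourceGroupFilterValues')]"
      }
    },
    {
      "dimensions": {
        "name": "MeterCategory",
        "operator": "In",
        "values": "[parameters('meterCategoryFilterValues')]"
      }
    }
  ]
}
```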
-## Deploy the template
+### Deploy the template
-1. Select the following image to sign in to Azure and open a template. The template creates a budget.
+1. Select the following image to sign in to Azure and open a template. The template creates a budget with a filter for resource groups and a filter for meter categories.
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.consumption%2Fcreate-budget%2Fazuredeploy.json) 2. Select or enter the following values.
- :::image type="content" source="./media/quick-create-budget-template/create-budget-using-template-portal.png" alt-text="Resource Manager template, Create budget, deploy portal]" lightbox="./media/quick-create-budget-template/create-budget-using-template-portal.png" :::
+ :::image type="content" source="./media/quick-create-budget-template/create-budget-two-filters-image.png" alt-text="Resource Manager template, Create budget with two filters, deploy portal." lightbox="./media/quick-create-budget-template/create-budget-two-filters-image.png" :::
* **Subscription**: select an Azure subscription. * **Resource group**: if required, select an existing resource group, or **Create new**.
One Azure resource is defined in the template:
* **Time Grain**: enter the time covered by a budget. Allowed values are Monthly, Quarterly, or Annually. The budget resets at the end of the time grain. * **Start Date**: enter the start date with the first day of the month in YYYY-MM-DD format. A future start date shouldn't be more than three months from today. You can specify a past start date with the Time Grain period. * **End Date**: enter the end date for the budget in YYYY-MM-DD format.
- * **First Threshold**: enter a threshold value for the first notification. A notification is sent when the cost exceeds the threshold. It's always percent and has to be between 0 and 1000.
- * **Second Threshold**: enter a threshold value for the second notification. A notification is sent when the cost exceeds the threshold. It's always percent and has to be between 0 and 1000.
+ * **First Threshold**: enter a threshold value for the first notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
+ * **Second Threshold**: enter a threshold value for the second notification. A notification is sent when the cost exceeds the threshold. The value is a percentage and must be between 0.01 and 1000.
* **Contact Roles**: enter the list of contact roles to send the budget notification to when the threshold is exceeded. Default values are Owner, Contributor, and Reader. Expected format is `["Owner","Contributor","Reader"]`.
- * **Contact Emails** enter a list of email addresses to send the budget notification to when a threshold is exceeded. Expected format is `["user1@domain.com","user2@domain.com"]`.
- * **Contact Groups** enter a list of action group resource IDs, as a full resource URIs, to send the budget notification to when the threshold is exceeded. It accepts array of strings. Expected format is `["action group resource ID1","action group resource ID2"]`. If don't want to use action groups, enter `[]`.
- * **Resource Group Filter Values** enter a list of resource group names to filter. Expected format is `["Resource Group Name1","Resource Group Name2"]`. If you don't want to apply a filter, enter `[]`.
- * **Meter Category Filter Values** enter a list of Azure service meter categories. Expected format is `["Meter Category1","Meter Category2"]`. If you didn't want to apply a filter, enter `[]`.
+ * **Contact Emails**: enter a list of email addresses to send the budget notification to when a threshold is exceeded. It accepts an array of strings. Expected format is `["user1@domain.com","user2@domain.com"]`.
+ * **Contact Groups**: enter a list of action group resource IDs, as full resource URIs, to send the budget notification to when the threshold is exceeded. It accepts an array of strings. Expected format is `["action group resource ID1","action group resource ID2"]`. If you don't want to use action groups, enter `[]`.
+ * **Resource Group Filter Values**: enter a list of resource group names to filter. It accepts an array of strings. Expected format is `["Resource Group Name1","Resource Group Name2"]`. The array can't be empty.
+ * **Meter Category Filter Values**: enter a list of Azure service meter categories. It accepts an array of strings. Expected format is `["Meter Category1","Meter Category2"]`. The array can't be empty.
3. Depending on your Azure subscription type, do one of the following actions: - Select **Review + create**.
One Azure resource is defined in the template:
4. If you selected **Review + create**, your template is validated. Select **Create**.
- ![Resource Manager template, budget, deploy portal notification](./media/quick-create-budget-template/resource-manager-template-portal-deployment-notification.png)
+ ![Resource Manager template, budget two or more filters, deploy portal notification](./media/quick-create-budget-template/resource-manager-template-portal-deployment-notification.png)
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use Azure PowerShell, Azure CLI, and REST API. To learn about other deployment templates, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md). ++ ## Validate the deployment
-You can use the Azure portal to verify that the budget is created by navigating to **Cost Management + Billing** > select a scope > **Budgets**. Or, use the following Azure CLI or Azure PowerShell scripts to view the budget.
+Use one of the following ways to verify that the budget is created.
+
+# [Azure portal](#tab/portal)
+
+Navigate to **Cost Management + Billing** > select a scope > **Budgets**.
# [CLI](#tab/CLI)
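
For example, a minimal sketch with Azure CLI (the article's full snippet might scope the command differently):

```azurecli-interactive
az consumption budget list
```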
Get-AzConsumptionBudget
When you no longer need a budget, delete it by using one of the following methods:
-### Azure portal
+# [Azure portal](#tab/portal)
Navigate to **Cost Management + Billing** > select a billing scope > **Budgets** > select a budget > then select **Delete budget**.
-### Command line
-
-You can remove the budget using Azure CLI or Azure PowerShell.
- # [CLI](#tab/CLI) ```azurecli-interactive
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you created an Azure budget the deployment. To learn more about Cost Management and Billing and Azure Resource Manager, continue on to the articles below.
+In this quickstart, you created an Azure budget and deployed it. To learn more about Cost Management and Billing and Azure Resource Manager, continue on to the articles below.
- Read the [Cost Management and Billing](../cost-management-billing-overview.md) overview - [Create budgets](tutorial-acm-create-budgets.md) in the Azure portal
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 12/14/2021 Last updated : 01/08/2022 # Overview of Microsoft Defender for Containers
On this page, you'll learn how you can use Defender for Containers to improve
||:|
| Release state: | General availability (GA)<br>Where indicated, specific features are in preview. [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)] |
| Pricing: | **Microsoft Defender for Containers** is free for the month of December 2021. After that, it will be billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/security-center/) (which will be updated at the end of December 2021) |
-| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
+| Registries and images: | **Supported**<br> • Linux images in Azure Container Registry (ACR) registries accessible from the public internet with shell access<br> • Private registries with access granted to [Trusted Services](../container-registry/allow-access-trusted-services.md#trusted-services)<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md)<br><br>**Unsupported**<br> • Windows images<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md)<br> • Nodes with taints applied |
| Kubernetes distributions: | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br><br>**Tested on**<br> • [Azure Kubernetes Service](../aks/intro-kubernetes.md)<br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Azure Kubernetes Service on Azure Stack HCI](/azure-stack/aks-hci/overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/) |
| Required roles and permissions: | • To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet) (Except for preview features)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts (Preview) |
defender-for-cloud Recommendations Reference Aws https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/recommendations-reference-aws.md
Title: Reference table for all Microsoft Defender for Cloud recommendations for AWS resources description: This article lists Microsoft Defender for Cloud's security recommendations that help you harden and protect your AWS resources. Previously updated : 12/26/2021 Last updated : 01/08/2022 # Security recommendations for AWS resources - a reference guide
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 12/28/2021 Last updated : 01/08/2022
devtest-labs Create Lab Windows Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/create-lab-windows-vm-template.md
Title: Create a lab in Azure DevTest Labs by using an Azure Resource Manager template
-description: In this quickstart, you create a lab in Azure DevTest Labs by using an Azure Resource Manager template (ARM template). A lab admin sets up a lab, creates VMs in the lab, and configures policies.
+description: Use an Azure Resource Manager (ARM) template to create a lab that has a virtual machine in Azure DevTest Labs.
Previously updated : 12/10/2021 Last updated : 01/03/2022 # Quickstart: Use an ARM template to create a lab in DevTest Labs
-This quickstart describes how to use an Azure Resource Manager template (ARM template) to create a lab in Azure DevTest Labs. The lab contains a Windows Server 2019 Datacenter virtual machine (VM).
+This quickstart uses an Azure Resource Manager (ARM) template to create a lab in Azure DevTest Labs that has one Windows Server 2019 Datacenter virtual machine (VM) in it.
-
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+In this quickstart, you take the following actions:
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.devtestlab%2Fdtl-create-lab-windows-vm-claimed%2Fazuredeploy.json)
+> [!div class="checklist"]
+> * Review the ARM template.
+> * Deploy the ARM template to create a lab and VM.
+> * Verify the deployment.
+> * Clean up resources.
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devtestlab).
+DevTest Labs can use ARM templates for many tasks, from creating and provisioning labs to adding users. This quickstart uses the [Creates a lab with a claimed VM](https://azure.microsoft.com/resources/templates/dtl-create-lab-windows-vm-claimed) ARM template from the [Azure Quickstart Templates gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devtestlab). The template defines the following resource types:
-Three Azure resources are defined in the template:
+- [Microsoft.DevTestLab/labs](/azure/templates/microsoft.devtestlab/labs) creates the lab.
+- [Microsoft.DevTestLab/labs/virtualnetworks](/azure/templates/microsoft.devtestlab/labs/virtualnetworks) creates a virtual network.
+- [Microsoft.DevTestLab/labs/virtualmachines](/azure/templates/microsoft.devtestlab/labs/virtualmachines) creates the lab VM.
-- [Microsoft.DevTestLab/labs](/azure/templates/microsoft.devtestlab/labs): create a DevTest Labs lab.-- [Microsoft.DevTestLab labs/virtualnetworks](/azure/templates/microsoft.devtestlab/labs/virtualnetworks): create a DevTest Labs virtual network.-- [Microsoft.DevTestLab labs/virtualmachines](/azure/templates/microsoft.devtestlab/labs/virtualmachines): create a DevTest Labs virtual machine.+
+The [Azure Quickstart Templates gallery](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devtestlab) and [Azure Quickstart Templates public GitHub repository](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.devtestlab) have several other DevTest Labs ARM quickstart templates.
-For more ARM template samples, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devtestlab).
+The [Azure Lab Services Community public GitHub repository](https://github.com/Azure/azure-devtestlab/tree/master) also has many DevTest Labs [artifacts](https://github.com/Azure/azure-devtestlab/tree/master/Artifacts), [environments](https://github.com/Azure/azure-devtestlab/tree/master/Environments), [PowerShell scripts](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/Scripts), and [quickstart ARM templates](https://github.com/Azure/azure-devtestlab/tree/master/samples/DevTestLabs/QuickStartTemplates) you can use or customize for your needs.
## Deploy the template
-1. Select the **Deploy to Azure** button below to sign in to Azure and open the ARM template.
+1. Select the following **Deploy to Azure** button to sign in to the Azure portal and open the quickstart ARM template:
+
+ [![Button that deploys the ARM template to Azure.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.devtestlab%2Fdtl-create-lab-windows-vm-claimed%2Fazuredeploy.json)
- [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.devtestlab%2Fdtl-create-lab-windows-vm-claimed%2Fazuredeploy.json)
+1. On the **Creates a lab in Azure DevTest Labs with a claimed VM** screen, complete the following items:
-1. Enter or select the following values:
+ - **Resource group**: Select an existing resource group from the dropdown list, or create a new resource group so it's easy to clean up later.
+ - **Region**: If you created a new resource group, select a location for the resource group and lab.
+ - **Lab Name**: Enter a name for the new lab.
+ - **Vm Name**: Enter a name for the new VM.
+ - **User Name**: Enter a name for the user who can access the VM.
+ - **Password**: Enter a password for the VM user.
- |Property | Description |
- |||
- |Subscription| From the drop-down list, select the Azure subscription to be used for the lab.|
- |Resource group| From the drop-down list, select your existing resource group, or select **Create new**.|
- |Region | The value will autopopulate with the location used for the resource group.|
- |Lab Name| Enter a lab name unique for the subscription.|
- |Location| Leave as is. |
- |Vm Name| Enter a VM name unique for the subscription.|
- |Vm Size | Leave as is. |
- |User Name | Enter a user name for the VM.|
- |Password| Enter a password between 8 and 123 characters long.|
+1. Select **Review + create**, and when validation passes, select **Create**.
:::image type="content" source="./media/create-lab-windows-vm-template/deploy-template-page.png" alt-text="Screenshot of the Create a lab page.":::
-1. Select **Review + create**, and when validation passes, select **Create**. You'll then be taken to the deployment **Overview** page where you can monitor progress. Deployment times will vary based on the selected hardware, base image, and artifacts. The deployment time for the configurations used in this template is approximately 12 minutes.
+1. During the deployment, you can select the **Notifications** icon at the top of the screen to see deployment progress on the template **Overview** page. Deployment, especially creating a VM, takes a while.
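+
+You can also deploy the same template from the command line. A sketch with Azure CLI, assuming the template's parameter names match the portal labels above:
+
+```azurecli
+az deployment group create \
+  --resource-group MyLabResourceGroup \
+  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devtestlab/dtl-create-lab-windows-vm-claimed/azuredeploy.json" \
+  --parameters labName=MyLab vmName=MyVm userName=labuser password=<password>
+```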
## Validate the deployment
-1. Once the deployment is complete, select **Go to resource group** from either the template **Overview** page or from **Notifications**.
+1. When the deployment is complete, select **Go to resource group** from the template **Overview** page or from **Notifications**.
:::image type="content" source="./media/create-lab-windows-vm-template/navigate-resource-group.png" alt-text="Screenshot that shows deployment complete and the Go to resource group button.":::
-1. The **Resource group** page lists the resources in the resource group, including your lab and its dependent resources like virtual networks and VMs. Select your **DevTest Lab** resource to go to your lab's **Overview** page.
+1. The **Resource group** page lists the resources in the resource group, including your lab and its dependent resources like virtual networks and VMs. Select the **DevTest Lab** resource to go to the lab's **Overview** page.
:::image type="content" source="./media/create-lab-windows-vm-template/resource-group-overview.png" alt-text="Screenshot of resource group overview.":::
-1. On your lab's **Overview** page, you can see your VM under section **My virtual machines**.
+1. On the lab **Overview** page, you can see the VM under **My virtual machines**.
:::image type="content" source="./media/create-lab-windows-vm-template/lab-home-page.png" alt-text="Screenshot that shows the lab Overview page with the virtual machine.":::
-1. Step back and list the resource groups for your subscription. Observe that the deployment created a new resource group to hold the VM. The syntax is the lab name + VM name + random numbers. Based on the values used in this article, the autogenerated name is `MyOtherLab-myVM-173385`.
-
- :::image type="content" source="./media/create-lab-windows-vm-template/resource-group-list.png" alt-text="Screenshot of resource group list.":::
+> [!NOTE]
+> The deployment also creates a resource group for the VM. The resource group contains VM resources like the IP address, network interface, and disk. The resource group appears in your subscription's **Resource groups** list with the name **\<lab name>-\<vm name>-\<numerical string>**.
## Clean up resources
-Delete resources to avoid charges for running the lab and VM on Azure. If you plan to go through the next tutorial to access the VM in the lab, you can clean up the resources after you finish that tutorial. Otherwise, follow these steps:
-
-1. Return to the home page for the lab you created.
+When you're done using these lab resources, delete them to avoid further charges. You can't delete a resource group that has a lab in it, so delete the lab first:
-1. From the top menu, select **Delete**.
+1. On the lab overview page, select **Delete** from the top menu.
:::image type="content" source="./media/create-lab-windows-vm-template/portal-lab-delete.png" alt-text="Screenshot of lab delete button.":::
-1. On the **Are you sure you want to delete it** page, enter the lab name in the text box and then select **Delete**.
+1. On the **Are you sure you want to delete it** page, enter the lab name, and then select **Delete**.
+
+ During the deletion, you can select **Notifications** at the top of your screen to view progress. Deleting the lab takes a while.
-1. During the deletion, you can select **Notifications** at the top of your screen to view progress. Deleting the lab takes a while. Continue to the next step once the lab is deleted.
+You can now delete the resource group that contained the lab, which deletes all resources in the resource group.
-1. If you created the lab in an existing resource group, then all of the lab resources have been removed. If you created a new resource group for this tutorial, it's now empty and can be deleted. It wouldn't have been possible to have deleted the resource group earlier while the lab was still in it.
+1. Select the resource group that contained the lab from your subscription's **Resource groups** list.
+
+1. At the top of the page, select **Delete resource group**.
+
+1. On the **Are you sure you want to delete "\<resource group name>"** page, enter the resource group name, and then select **Delete**.
## Next steps
-In this quickstart, you created a lab that has a VM. To learn how to access the lab and VM, advance to the next tutorial:
+
+In this quickstart, you created a lab that has a Windows VM. To learn how to connect to and manage lab VMs, see the next tutorial:
> [!div class="nextstepaction"]
-> [Tutorial: Access the lab](tutorial-use-custom-lab.md)
+> [Tutorial: Work with lab VMs](tutorial-use-custom-lab.md)
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data-legacy.md
description: How to export data from your Azure IoT Central application to Azure
Previously updated : 08/30/2021 Last updated : 01/06/2022
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
Title: Use the REST API to manage jobs in Azure IoT Central
description: How to use the IoT Central REST API to create and manage jobs in an application Previously updated : 08/30/2021 Last updated : 01/05/2022
iot-central Howto Manage Preferences https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-preferences.md
Title: Manage your personal preferences on IoT Central | Microsoft Docs
description: How to manage your personal application preferences such as changing language, theme, and default organization in your IoT Central application. Previously updated : 08/30/2021 Last updated : 01/04/2022
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/tutorial-connect-device.md
Title: Tutorial - Connect a generic client app to Azure IoT Central | Microsoft
description: This tutorial shows you how to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. You modify the automatically generated device template by adding views that let an operator interact with a connected device. Previously updated : 08/31/2021 Last updated : 01/04/2022
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Now you have an event hub, you can configure your **In-store analytics - checkou
1. Sign in to your **In-store analytics - checkout** IoT Central application. 1. Select **Data export** in the left pane.
-1. Select **New > Azure Event Hubs**.
-1. Enter _Telemetry export_ as the **Display Name**.
+1. Enter _Telemetry export_ as the **Export name**.
+1. Select **Telemetry** as the type of data to export.
+1. Under **Destinations**, select **create new one**.
+1. Enter a **Destination name**.
1. Select your **Event Hubs namespace**. 1. Select the **store-telemetry** event hub. 1. Switch off **Devices** and **Device Templates** in the **Data to export** section.
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Previously updated : 08/17/2021 Last updated : 01/06/2022
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
Previously updated : 08/17/2021 Last updated : 01/06/2022 # Tutorial: Deploy and walk through the digital distribution center application template
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: How to configure an IoT Edge device to connect to Azure IoT Edge ga
Previously updated : 03/01/2021 Last updated : 01/09/2022
The API proxy module was designed to be customized to handle most common gateway
1. Select **Review + create** to go to the final step. 1. Select **Create** to deploy to your device.
+## Integrate Microsoft Defender for IoT with IoT Edge gateway
+
+You can use leaf device proxying to integrate the Microsoft Defender for IoT micro agent on a leaf device with the IoT Edge gateway.
+
+Learn more about the [Defender for IoT micro agent](../defender-for-iot/device-builders/overview.md#defender-for-iot-micro-agent).
+
+**To integrate Microsoft Defender for IoT with IoT Edge using leaf device proxying**:
+
+1. Sign in to the Azure portal.
+
+1. Navigate to **IoT Hub** > **`Your Hub`** > **Device management** > **Devices**
+
+1. Select your device.
+
+ :::image type="content" source="media/how-to-connect-downstream-iot-edge-device/select-device.png" alt-text="Screenshot showing where your device is located for selection.":::
+
+1. Select the `DefenderIotMicroAgent` module twin that you created from [these instructions](../defender-for-iot/device-builders/quickstart-create-micro-agent-module-twin.md#create-defenderiotmicroagent-module-twin).
+
+ :::image type="content" source="media/how-to-connect-downstream-iot-edge-device/defender-micro-agent.png" alt-text="Screenshot showing the location of the DefenderIotMicroAgent.":::
+
+1. Select the :::image type="icon" source="media/how-to-connect-downstream-iot-edge-device/copy-icon.png" border="false"::: button to copy your Connection string (primary key).
+
+1. Paste the connection string into a text editor, and append the `GatewayHostName` property to the string. For example, `HostName=nested11.azure-devices.net;DeviceId=leaf1;ModuleId=module1;SharedAccessKey=xxx;GatewayHostName=10.16.7.4`.
+
+1. Open a terminal on the leaf device.
+
+1. Use the following command to write the connection string, encoded in UTF-8, to the file `connection_string.txt` in the Defender for IoT micro agent directory `/var/defender_iot_micro_agent`:
+
+ ```bash
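+    # Write the UTF-8 encoded connection string (including GatewayHostName) to the file that the micro agent reads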
+ sudo bash -c 'echo "<connection string>" > /var/defender_iot_micro_agent/connection_string.txt'
+ ```
+
+    The `connection_string.txt` file should now be located at `/var/defender_iot_micro_agent/connection_string.txt`.
+
+1. Restart the service using this command:
+
+ ```bash
+ sudo systemctl restart defender-iot-micro-agent.service
+ ```
+
+1. Navigate back to the device.
+
+ :::image type="content" source="media/how-to-connect-downstream-iot-edge-device/device.png" alt-text="Screenshot showing how to navigate back to your device.":::
+
+1. Enable the connection to the IoT Hub, and select the gear icon.
+
+ :::image type="content" source="media/how-to-connect-downstream-iot-edge-device/gear-icon.png" alt-text="Screenshot showing what to select to set a parent device.":::
+
+1. Select the parent device from the displayed list.
+
+1. Ensure that port 8883 (MQTT) between the leaf device and the IoT Edge device is open.
+ ## Next steps [How an IoT Edge device can be used as a gateway](iot-edge-as-gateway.md)
-[Configure the API proxy module for your gateway hierarchy scenario](how-to-configure-api-proxy-module.md)
+[Configure the API proxy module for your gateway hierarchy scenario](how-to-configure-api-proxy-module.md)
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
+
+ Title: Deploy single-tenant logic apps to private storage accounts
+description: How to deploy Standard logic app workflows to Azure storage accounts that use private endpoints and deny public access.
+
+ms.suite: integration
++ Last updated : 01/06/2022+
+# As a developer, I want to deploy my single-tenant logic apps to Azure storage accounts using private endpoints
++
+# Deploy single-tenant Standard logic apps to private storage accounts using private endpoints
+
+When you create a single-tenant Standard logic app resource, you're required to have a storage account for storing logic app artifacts. You can restrict access to this storage account so that only resources inside a virtual network can connect to your logic app's workflow data. Azure Storage supports adding private endpoints to your storage account.
+
+This article describes the steps to follow for deploying such logic apps to protected private storage accounts. For more information, review [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md).
+
+<a name="deploy-with-portal-or-visual-studio-code"></a>
+
+## Deploy using Azure portal or Visual Studio Code
+
+This deployment method requires temporary public access to your storage account. If you can't enable public access due to your organization's policies, you can still deploy your logic app to a private storage account. However, you have to [deploy with an Azure Resource Manager template (ARM template)](#deploy-arm-template), which is described in a later section.
+
+1. Create different private endpoints for each of the Table, Queue, Blob, and File storage services.
+
+1. Enable temporary public access on your storage account when you deploy your logic app.
+
+ 1. In the [Azure portal](https://portal.azure.com), open your storage account resource.
+
+ 1. On the storage account resource menu, under **Security + networking**, select **Networking**.
+
+ 1. On the **Networking** pane, on the **Firewalls and virtual networks** tab, under **Allow access from**, select **All networks**.
+
+1. Deploy your logic app resource by using either the Azure portal or Visual Studio Code.
+
+1. After deployment finishes, enable VNet integration between your logic app and the virtual network that contains the private endpoints for your storage account.
+
+ 1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+
+ 1. On the logic app resource menu, under **Settings**, select **Networking**.
+
+   1. On the **Outbound Traffic** card, select **VNet integration** to enable integration with the virtual network that connects to your storage account.
+
+ 1. To access your logic app workflow data over the virtual network, in your logic app resource settings, set the `WEBSITE_CONTENTOVERVNET` setting to `1`.
+
+ If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your secondary DNS.
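+
+      For reference, a sketch of how these app settings might appear in a template's `appSettings` array; the IP addresses are placeholders for your DNS servers:
+
+      ```json
+      [
+        { "name": "WEBSITE_CONTENTOVERVNET", "value": "1" },
+        { "name": "WEBSITE_DNS_SERVER", "value": "10.0.0.4" },
+        { "name": "WEBSITE_DNS_ALT_SERVER", "value": "10.0.0.5" }
+      ]
+      ```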
+
+1. After you apply these app settings, you can remove public access from your storage account.
+
+ 1. In the [Azure portal](https://portal.azure.com), open your storage account resource.
+
+ 1. On the storage account resource menu, under **Security + networking**, select **Networking**.
+
+ 1. On the **Networking** pane, on the **Firewalls and virtual networks** tab, under **Allow access from**, clear **Selected networks**, and add virtual networks as necessary.
+
+ > [!NOTE]
+ > Your logic app might experience an interruption because the connectivity switch between public and private endpoints might take time.
+ > This disruption might result in your workflows temporarily disappearing. If this behavior happens, you can try to reload your workflows
+ > by restarting the logic app and waiting several minutes.
+
+<a name="deploy-arm-template"></a>
+
+## Deploy using an Azure Resource Manager template
+
+This deployment method doesn't require public access to the storage account. For an example ARM template, review [Deploy logic app using secured storage account with private endpoints](https://github.com/VeeraMS/LogicApp-deployment-with-Secure-Storage). The example template creates the following resources:
+
+- A storage account that denies public traffic
+- An Azure VNet and subnets
+- Private DNS zones and private endpoints for Blob, File, Queue, and Table services
+- A file share for the Azure Logic Apps runtime directories and files. For more information, review [Host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md).
+- An App Service plan (Workflow Standard WS1) for hosting Standard logic app resources
+- A Standard logic app resource with a network configuration that's set up to use VNet integration. This configuration enables the logic app to access the storage account through private endpoints.
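+
+You can deploy such a template with any standard ARM deployment method. For example, a sketch with Azure CLI, assuming the template is saved locally as `azuredeploy.json`:
+
+```azurecli
+az deployment group create \
+  --resource-group MyResourceGroup \
+  --template-file azuredeploy.json
+```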
+
+## Troubleshoot common errors
+
+The following errors commonly happen with a private storage account that's behind a firewall and indicate that the logic app can't access the storage account services.
+
+| Problem | Error |
+||-|
+| Access to the `host.json` file is denied | `"System.Private.CoreLib: Access to the path 'C:\\home\\site\\wwwroot\\host.json' is denied."` |
+| Can't load workflows in the logic app resource | `"Encountered an error (ServiceUnavailable) from host runtime."` |
+|||
+
+As the logic app isn't running when these errors occur, you can't use the Kudu console debugging service on the Azure platform to troubleshoot these errors. However, you can use the following methods instead:
+
+- Create an Azure virtual machine (VM) inside a different subnet within the same VNet that's integrated with your logic app. Try to connect from the VM to the storage account.
+
+- Check access to the storage account services by using the [Storage Explorer tool](https://azure.microsoft.com/features/storage-explorer/#overview).
+
+ If you find any connectivity issues using this tool, continue with the following steps:
+
+ 1. From the command prompt, run `nslookup` to check whether the storage services resolve to the private IP addresses for the virtual network:
+
+ `C:\>nslookup {storage-account-host-name} [optional-DNS-server]`
+
+    1. Check all the storage services:
+
+    `C:\>nslookup {storage-account-host-name}.blob.core.windows.net`
+
+    `C:\>nslookup {storage-account-host-name}.file.core.windows.net`
+
+    `C:\>nslookup {storage-account-host-name}.queue.core.windows.net`
+
+    `C:\>nslookup {storage-account-host-name}.table.core.windows.net`
+
+ 1. If these DNS queries resolve, run `psping` or `tcpping` to check traffic to the storage account over port 443:
+
+    `C:\>psping {storage-account-host-name} {port} [optional-DNS-server]`
+
+    1. Check all the storage services:
+
+    `C:\>psping {storage-account-host-name}.blob.core.windows.net:443`
+
+    `C:\>psping {storage-account-host-name}.file.core.windows.net:443`
+
+    `C:\>psping {storage-account-host-name}.queue.core.windows.net:443`
+
+    `C:\>psping {storage-account-host-name}.table.core.windows.net:443`
+
+ 1. If the queries resolve from the VM, continue with the following steps:
+
+ 1. In the VM, find the DNS server that's used for resolution.
+
+       1. In your logic app, [find and set the `WEBSITE_DNS_SERVER` app setting](edit-app-settings-host-settings.md?tabs=azure-portal#manage-app-settingslocalsettingsjson) to the same DNS server value that you found in the previous step.
+
+       1. Check that the VNet integration is set up correctly with the appropriate VNet and subnet in your logic app.
+
+## Next steps
+
+- [Logic Apps Anywhere: Networking possibilities with Logic Apps (single-tenant)](https://techcommunity.microsoft.com/t5/integrations-on-azure/logic-apps-anywhere-networking-possibilities-with-logic-app/ba-p/2105047)
logic-apps Secure Single Tenant Workflow Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md
Title: Secure traffic between single-tenant workflows and virtual networks
-description: Secure traffic between virtual networks, storage accounts, and single-tenant workflows in Azure Logic Apps.
+description: Secure traffic between Standard logic app workflows and virtual networks in Azure using private endpoints.
ms.suite: integration Previously updated : 08/31/2021 Last updated : 01/06/2022
-# As a developer, I want to connect to my single-tenant workflows from virtual networks using private endpoints.
+# As a developer, I want to connect to my single-tenant logic app workflows with virtual networks using private endpoints and VNet integration.
-# Secure traffic between virtual networks and single-tenant workflows in Azure Logic Apps using private endpoints
+# Secure traffic between single-tenant Standard logic apps and Azure virtual networks using private endpoints and VNet integration
-To securely and privately communicate between your logic app workflow and a virtual network, you can set up *private endpoints* for inbound traffic and use virtual network integration for outbound traffic.
+To securely and privately communicate between your workflow in a Standard logic app and an Azure virtual network, you can set up *private endpoints* for inbound traffic and use VNet integration for outbound traffic.
A private endpoint is a network interface that privately and securely connects to a service powered by Azure Private Link. This service can be an Azure service such as Azure Logic Apps, Azure Storage, Azure Cosmos DB, SQL, or your own Private Link Service. The private endpoint uses a private IP address from your virtual network, which effectively brings the service into your virtual network.
-This article shows how to set up access through private endpoints for inbound traffic, outbound traffic, and connection to storage accounts.
+This article shows how to set up access through private endpoints for inbound traffic and VNet integration for outbound traffic.
For more information, review the following documentation: -- [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md)-
+- [What is Azure Private Endpoint?](../private-link/private-endpoint-overview.md) and [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints)
- [What is Azure Private Link?](../private-link/private-link-overview.md)--- [What is single-tenant logic app workflow in Azure Logic Apps?](single-tenant-overview-compare.md)
+- [What is VNet integration?](../app-service/networking-features.md#regional-vnet-integration)
## Prerequisites
You need to have a new or existing Azure virtual network that includes a subnet
For more information, review the following documentation: - [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md)- - [What is subnet delegation?](../virtual-network/subnet-delegation-overview.md)- - [Add or remove a subnet delegation](../virtual-network/manage-subnet-delegation.md) <a name="set-up-inbound"></a>
To secure inbound traffic to your workflow, complete these high-level steps:
1. Start your workflow with a built-in trigger that can receive and handle inbound requests, such as the Request trigger or the HTTP + Webhook trigger. This trigger sets up your workflow with a callable endpoint.
-1. Add a private endpoint to your virtual network.
+1. Add a private endpoint for your logic app resource to your virtual network.
1. Make test calls to check access to the endpoint. To call your logic app workflow after you set up this endpoint, you must be connected to the virtual network.
For example, the Request trigger creates an endpoint on your workflow that can r
For more information, review the following documentation: - [Create single-tenant logic app workflows in Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)- - [Receive and respond to inbound HTTP requests using Azure Logic Apps](../connectors/connectors-native-reqres.md) ### Create the workflow
For more information, review the following documentation:
> [!NOTE] > You can call Request triggers and webhook triggers only from inside your virtual network.
- > Managed API webhook triggers and actions won't work because they require a public endpoint to receive calls.
+ > Managed API webhook triggers and actions won't work because they require a public endpoint to receive calls.
1. Based on your scenario requirements, add other actions that you want to run in your workflow.
For more information, review [Create single-tenant logic app workflows in Azure
1. On your logic app menu, under **Settings**, select **Networking**.
-1. On the **Networking** page, under **Private Endpoint connections**, select **Configure your private endpoint connections**.
+1. On the **Networking** page, on the **Inbound traffic** card, select **Private endpoints**.
-1. On the **Private Endpoint connections page**, select **Add**.
+1. On the **Private Endpoint connections** page, select **Add**.
1. On the **Add Private Endpoint** pane that opens, provide the requested information about the endpoint.
For more information, review [Create single-tenant logic app workflows in Azure
<a name="set-up-outbound"></a>
-## Set up outbound traffic through private endpoints
+## Set up outbound traffic using VNet integration
-To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. By default, outbound traffic from your logic app is only affected by network security groups (NSGs) and user-defined routes (UDRs) when going to a private address, such as `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16`.
+To secure outbound traffic from your logic app, you can integrate your logic app with a virtual network. First, create and test an example workflow. You can then set up VNet integration.
-If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your DNS. Also, update your DNS records to point your private endpoints at your internal IP address. Private endpoints work by sending the DNS lookup to the private address, not the public address for the specific resource. For more information, review [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints).
+### Create and test the workflow
-> [!IMPORTANT]
-> For the Azure Logic Apps runtime to work, you need to have an uninterrupted connection to the backend storage.
-> For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service.
+1. If you haven't already, in the [Azure portal](https://portal.azure.com), create a single-tenant based logic app and a blank workflow.
-### Considerations for outbound traffic through private endpoints
-
-Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound).
-
-For more information, review the following documentation:
--- [Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md)--- [Network security groups](../virtual-network/network-security-groups-overview.md)
+1. After the designer opens, add the Request trigger as the first step in your workflow.
-- [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md)
+1. Add an HTTP action to call an internal service that's unavailable through the internet and runs with a private IP address such as `10.0.1.3` (see the sketch after these steps).
-## Connect to storage account with private endpoints
+1. When you're done, save your workflow.
-You can restrict storage account access so that only resources inside a virtual network can connect. Azure Storage supports adding private endpoints to your storage account. Your logic app workflows can then use these endpoints to communicate with the storage account. For more information, review [Use private endpoints for Azure Storage](../storage/common/storage-private-endpoints.md).
+1. From the designer, manually run the workflow.
-> [!NOTE]
-> The following steps require temporarily enabling public access on your storage account. If you can't enable public
-> access due to your organization's policies, you can still deploy your logic app using a private storage account. However,
-> you have to use an Azure Resource Manager template (ARM template) for deployment. For an example ARM template, review
-> [Deploy logic app using secured storage account with private endpoints](https://github.com/VeeraMS/LogicApp-deployment-with-Secure-Storage).
+ The HTTP action fails, which is by design and expected because the workflow runs in the cloud and can't access your internal service.
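For reference, the underlying workflow definition built by these steps might resemble the following minimal sketch. The trigger and action names, and the internal service URI path (`/api/status`), are hypothetical placeholders, not values from an actual deployment.

```json
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {}
      }
    },
    "actions": {
      "HTTP": {
        "type": "Http",
        "runAfter": {},
        "inputs": {
          "method": "GET",
          "uri": "http://10.0.1.3/api/status"
        }
      }
    },
    "outputs": {}
  },
  "kind": "Stateful"
}
```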
-1. Create different private endpoints for each of the Table, Queue, Blob, and File storage services.
+### Set up VNet integration
-1. Enable temporary public access on your storage account when you deploy your logic app.
+1. In the Azure portal, on the logic app resource menu, under **Settings**, select **Networking**.
- 1. In the [Azure portal](https://portal.azure.com), open your storage account resource.
+1. On the **Networking** pane, on the **Outbound traffic** card, select **VNet integration**.
- 1. On the storage account resource menu, under **Security + networking**, select **Networking**.
+1. On the **VNet Integration** pane, select **Add VNet**.
- 1. On the **Networking** pane, on the **Firewalls and virtual networks** tab, under **Allow access from**, select **All networks**.
+1. On the **Add VNet Integration** pane, select the subscription and the virtual network that connects to your internal service.
-1. Deploy your logic app resource by using either the Azure portal or Visual Studio Code.
+1. If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set its value to the IP address of your secondary DNS (see the example settings after these steps).
-1. After deployment finishes, enable integration between your logic app and the private endpoints on the virtual network or subnet that connects to your storage account.
+1. After Azure successfully provisions the VNet integration, try to run the workflow again.
- 1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+ The HTTP action now runs successfully.
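As a sketch, those two DNS app settings might look like the following JSON fragment, for example when reviewing the logic app's app settings in a deployment template. The IP addresses shown are placeholders for your own DNS servers, not values from this article.

```json
{
  "WEBSITE_DNS_SERVER": "10.0.0.4",
  "WEBSITE_DNS_ALT_SERVER": "10.0.0.5"
}
```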
- 1. On the logic app resource menu, under **Settings**, select **Networking**.
+> [!IMPORTANT]
+> For the Azure Logic Apps runtime to work, you need to have an uninterrupted connection to the backend storage.
+> For Azure-hosted managed connectors to work, you need to have an uninterrupted connection to the managed API service.
- 1. Set up the necessary connections between your logic app and the IP addresses for the private endpoints.
+### Considerations for outbound traffic through private endpoints
- 1. To access your logic app workflow data over the virtual network, in your logic app resource settings, set the `WEBSITE_CONTENTOVERVNET` setting to `1`.
+Setting up virtual network integration affects only outbound traffic. To secure inbound traffic, which continues to use the App Service shared endpoint, review [Set up inbound traffic through private endpoints](#set-up-inbound).
- If you use your own domain name server (DNS) with your virtual network, set your logic app resource's `WEBSITE_DNS_SERVER` app setting to the IP address for your DNS. If you have a secondary DNS, add another app setting named `WEBSITE_DNS_ALT_SERVER`, and set the value also to the IP for your DNS. Also, update your DNS records to point your private endpoints at your internal IP address. Private endpoints work by sending the DNS lookup to the private address, not the public address for the specific resource. For more information, review [Private endpoints - Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md#private-endpoints).
+For more information, review the following documentation:
-1. After you apply these app settings, you can remove public access from your storage account.
+- [Integrate your app with an Azure virtual network](../app-service/overview-vnet-integration.md)
+- [Network security groups](../virtual-network/network-security-groups-overview.md)
+- [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md)
## Next steps
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/register-scan-amazon-s3.md
Previously updated : 09/27/2021 Last updated : 12/07/2021 # Customer intent: As a security officer, I need to understand how to use the Azure Purview connector for Amazon S3 service to set up, configure, and scan my Amazon S3 buckets.
For this service, use Purview to provide a Microsoft account with secure access
|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|
|---|---|---|---|---|---|---|
| Yes | Yes | Yes | Yes | Yes | No | Limited** |
\** Lineage is supported if dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
Ensure that you've performed the following prerequisites before adding your Amaz
> * [Create a new AWS role for use with Purview](#create-a-new-aws-role-for-purview) > * [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-s3-scan) > * [Configure scanning for encrypted Amazon S3 buckets](#configure-scanning-for-encrypted-amazon-s3-buckets), if relevant
+> * Make sure that your bucket policy does not block the connection. For more information, see [Bucket policy requirements](#confirm-your-bucket-policy-access) and [SCP policy requirements](#confirm-your-scp-policy-access). For these items, you may need to consult with an AWS expert to ensure that your policies allow required access.
> * When adding your buckets as Purview resources, you'll need the values of your [AWS ARN](#retrieve-your-new-role-arn), [bucket name](#retrieve-your-amazon-s3-bucket-name), and sometimes your [AWS account ID](#locate-your-aws-account-id).

### Create a Purview account

- **If you already have a Purview account,** you can continue with the configurations required for AWS S3 support. Start with [Create a Purview credential for your AWS bucket scan](#create-a-purview-credential-for-your-aws-s3-scan).
Ensure that you've performed the following prerequisites before adding your Amaz
### Create a new AWS role for Purview
-This procedure describes how to locate the values for your Azure Account ID and External ID, create your AWS role, and then enter the value for your role ARN in Purview.
+The Purview scanner is deployed in a Microsoft account in AWS. To allow the Purview scanner to read your S3 data, you must create a dedicated role in the AWS portal, in the IAM area, to be used by the scanner.
+
+This procedure describes how to create the AWS role, with the required Microsoft Account ID and External ID from Purview, and then enter the Role ARN value in Purview.
+ **To locate your Microsoft Account ID and External ID**:
This procedure describes how to locate the values for your Azure Account ID and
1. Select **New** to create a new credential.
- In the **New credential** pane that appears on the right, in the **Authentication method** dropdown, select **Role ARN**.
-
+ In the **New credential** pane that appears on the right, in the **Authentication method** dropdown, select **Role ARN**.
+ Then copy the **Microsoft account ID** and **External ID** values that appear to a separate file, or have them handy for pasting into the relevant field in AWS. For example: [ ![Locate your Microsoft account ID and External ID values.](./media/register-scan-amazon-s3/locate-account-id-external-id.png) ](./media/register-scan-amazon-s3/locate-account-id-external-id.png#lightbox)
This procedure describes how to locate the values for your Azure Account ID and
- In the **Role description** box, enter an optional description to identify the role's purpose - In the **Policies** section, confirm that the correct policy (**AmazonS3ReadOnlyAccess**) is attached to the role.
- Then select **Create role** to complete the process.
-
- For example:
+ Then select **Create role** to complete the process. For example:
![Review details before creating your role.](./media/register-scan-amazon-s3/review-role.png)
+**Extra required configurations**:
+
+- For buckets that use **AWS-KMS** encryption, [special configuration](#configure-scanning-for-encrypted-amazon-s3-buckets) is required to enable scanning.
+
+- Make sure that your bucket policy does not block the connection. For more information, see:
+
+ - [Confirm your bucket policy access](#confirm-your-bucket-policy-access)
+ - [Confirm your SCP policy access](#confirm-your-scp-policy-access)
### Create a Purview credential for your AWS S3 scan
AWS buckets support multiple encryption types. For buckets that use **AWS-KMS**
![View an updated Summary page with the new policy attached to your role.](./media/register-scan-amazon-s3/attach-policy-role.png)
+### Confirm your bucket policy access
+
+Make sure that the S3 bucket [policy](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-iam-policies.html) does not block the connection:
+
+1. In AWS, navigate to your S3 bucket, and then select the **Permissions** tab > **Bucket policy**.
+1. Check the policy details to make sure that it doesn't block the connection from the Purview scanner service (see the example after these steps).
+
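For example, a hypothetical deny statement like the following would block the Purview scanner, because it denies all S3 actions for requests that originate outside a specific IP range. The bucket name and CIDR are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessOutsideCorpNetwork",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ],
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```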
+### Confirm your SCP policy access
+
+Make sure that there is no [SCP policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that blocks the connection to the S3 bucket.
+
+For example, your SCP policy might block read API calls from the [AWS scanning region](#storage-and-scanning-regions).
+
+- Required API calls, which must be allowed by your SCP policy, include: `AssumeRole`, `GetBucketLocation`, `GetObject`, `ListBucket`, `GetBucketPublicAccessBlock`.
+- Your SCP policy must also allow calls to the **us-east-1** AWS Region, which is the default Region for API calls. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html).
+
+Follow the [SCP documentation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_create.html), review your organization's SCP policies, and make sure all the [permissions required for the Purview scanner](#create-a-new-aws-role-for-purview) are available.
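As a minimal sketch, an SCP statement that allows the required calls listed above might look like the following. The statement ID is hypothetical, and your organization may scope `Resource` more narrowly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPurviewScannerCalls",
      "Effect": "Allow",
      "Action": [
        "sts:AssumeRole",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketPublicAccessBlock"
      ],
      "Resource": "*"
    }
  ]
}
```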
### Retrieve your new Role ARN

You'll need to record your AWS Role ARN and copy it into Purview when [creating a scan for your Amazon S3 bucket](#create-a-scan-for-one-or-more-amazon-s3-buckets).
Make sure to define your resource with a wildcard. For example:
} ```
+## Troubleshooting
+
+Scanning Amazon S3 resources requires [creating a role in AWS IAM](#create-a-new-aws-role-for-purview) to allow the Purview scanner service running in a Microsoft account in AWS to read the data.
+
+Configuration errors in the role can lead to connection failure. This section describes some examples of connection failures that may occur while setting up the scan, and the troubleshooting guidelines for each case.
+
+If all of the items described in the following sections are properly configured, and scanning S3 buckets still fails with errors, contact Microsoft support.
+
+> [!NOTE]
+> For policy access issues, make sure that neither your bucket policy nor your SCP policy blocks access to your S3 bucket from Purview.
+>
+> For more information, see [Confirm your bucket policy access](#confirm-your-bucket-policy-access) and [Confirm your SCP policy access](#confirm-your-scp-policy-access).
+>
+### Bucket is encrypted with KMS
+
+Make sure that the AWS role has **KMS Decrypt** permissions. For more information, see [Configure scanning for encrypted Amazon S3 buckets](#configure-scanning-for-encrypted-amazon-s3-buckets).
+
+### AWS role is missing an external ID
+
+Make sure that the AWS role has the correct external ID:
+
+1. In the AWS IAM area, select the **Role > Trust relationships** tab.
+1. Follow the steps in [Create a new AWS role for Purview](#create-a-new-aws-role-for-purview) again to verify your details.
+
+### Error found with the role ARN
+
+This is a general error that indicates an issue with the Role ARN. To troubleshoot, check the following:
+
+- Make sure that the AWS role has the required permissions to read the selected S3 bucket. Required permissions include `AmazonS3ReadOnlyAccess` or the [minimum read permissions](#minimum-permissions-for-your-aws-policy), and `KMS Decrypt` for encrypted buckets.
+
+- Make sure that the AWS role has the correct Microsoft account ID. In the AWS IAM area, select the **Role > Trust relationships** tab and then follow the steps in [Create a new AWS role for Purview](#create-a-new-aws-role-for-purview) again to verify your details.
+
+For more information, see [Cannot find the specified bucket](#cannot-find-the-specified-bucket).
+
+### Cannot find the specified bucket
+
+Make sure that the S3 bucket URL is properly defined:
+
+1. In AWS, navigate to your S3 bucket, and copy the bucket name.
+1. In Purview, edit the Amazon S3 data source, and update the bucket URL to include your copied bucket name, using the following syntax: `s3://<BucketName>`
## Next steps

Learn more about Azure Purview Insight reports:
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
Previously updated : 01/05/2022 Last updated : 01/07/2022 # Creating indexers in Azure Cognitive Search
There are several ways to run an indexer:
+ Send an HTTP request for [Create Indexer](/rest/api/searchservice/create-indexer) or [Update indexer](/rest/api/searchservice/update-indexer) to add or change the definition, and run the indexer.
-+ Send an HTTP request for [Run Indexer](/rest/api/searchservice/run-indexer) to execute an indexer with no changes to the definition.
++ Send an HTTP request for [Run Indexer](/rest/api/searchservice/run-indexer) to execute an indexer with no changes to the definition. For more information, see [Run or reset indexers](search-howto-run-reset-indexers.md).

+ Run a program that calls SearchIndexerClient methods for create, update, or run.

Alternatively, put the indexer [on a schedule](search-howto-schedule-indexers.md) to invoke processing at regular intervals.
-Scheduled processing usually coincides with a need for incremental indexing of changed content. Change detection logic is a capability that's built into source platforms. Changes in a blob container are detected by the indexer automatically. For guidance on leveraging change detection in other data sources, refer to the indexer docs for specific data sources:
+Scheduled execution is usually implemented when you have a need for incremental indexing so that you can pick up the latest changes. As such, scheduling has a dependency on change detection. Change detection logic is a capability that's built into source platforms. If you're using a blob data source, changes in a blob container are detected automatically because Azure Storage exposes a LastModified property. Other data sources require explicit configuration. For guidance on leveraging change detection in other data sources, refer to the indexer docs for those sources (see the example data source definition after this list):
+ [Azure SQL database](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md) + [Azure Data Lake Storage Gen2](search-howto-index-azure-data-lake-storage.md)
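For example, assuming SQL integrated change tracking is enabled on the database, a data source definition that turns on change detection for Azure SQL might look like the following sketch. The data source name, table name, and connection string are placeholders:

```json
{
  "name": "azuresql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "<your connection string>" },
  "container": { "name": "Products" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
  }
}
```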
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-run-reset-indexers.md
- Previously updated : 11/02/2021+ Last updated : 01/07/2022 # Run or reset indexers, skills, or documents
-Indexers can be invoked in three ways: on demand, on a schedule, or when the [indexer is created](/rest/api/searchservice/create-indexer). After the initial run, an indexer keeps track of which search documents have been indexed through an internal "high water mark". The marker is never exposed in the API, but internally the indexer knows where indexing stopped so that it can pick up where it left off on the next run.
+Indexers can be invoked in three ways: on demand, on a schedule, or when the [indexer is created](/rest/api/searchservice/create-indexer), assuming it's not created in "disabled" mode.
-You can clear the high water mark by resetting the indexer if you want to reprocess from scratch. Reset APIs are available at decreasing levels in the object hierarchy:
+After the initial run, an indexer keeps track of which search documents have been indexed through an internal *high-water mark*. The marker is never exposed, but internally the indexer knows where it last stopped, so that it can pick up where it left off on the next run.
-+ The entire search corpus (use [Reset Indexers](#reset-indexers))
-+ A specific document or list of documents (use [Reset Documents - preview](#reset-docs))
-+ A specific skill or enrichment (use [Reset Skills - preview](#reset-skills))
+If you need to rebuild all or part of an index, you can clear the indexer's high-water mark through a reset. Reset APIs are available at decreasing levels in the object hierarchy:
-The Reset APIs are used to refresh cached content (applicable in [AI enrichment](cognitive-search-concept-intro.md) scenarios), or to clear the high water mark and rebuild the index.
++ [Reset Indexers](#reset-indexers) clears the high-water mark and performs a full reindex of all documents
++ [Reset Documents (preview)](#reset-docs) reindexes a specific document or list of documents
++ [Reset Skills (preview)](#reset-skills) invokes skill processing for a specific skill
-Reset, followed by run, can reprocess existing documents and new documents, but does not remove orphaned search documents in the search index that were created on previous runs. For more information about deletion, see [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents).
+After reset, follow with a Run command to reprocess new and existing documents. Orphaned search documents having no counterpart in the data source cannot be removed through reset/run. If you need to delete documents, see [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents) instead.
-## How to run indexers
+## Indexer execution
-[Create Indexer](/rest/api/searchservice/create-indexer) creates and runs the indexer unless you create it in a disabled state ("disabled": true). The first run takes a bit longer because its covering object creation as well.
+Indexing does not run in the background. Instead, the search service will balance all indexing jobs against ongoing queries and object management actions (such as creating or updating indexes). When running indexers, you should expect to see [some query latency](search-performance-analysis.md#impact-of-indexing-on-queries) if indexing volumes are large.
-[Run indexer](/rest/api/searchservice/run-indexer) will detect and process only what it necessary to synchronize the search index with the data source. Blob storage has built-in change detection. Other data sources, such as Azure SQL or Cosmos DB, have to be configured for change detection before the indexer can read just the new and updated rows.
+You can run multiple indexers at one time, but each indexer itself is single-instance. Starting a new instance while the indexer is already in execution produces this error: `"Failed to run indexer "<indexer name>" error: "Another indexer invocation is currently in progress; concurrent invocations are not allowed."`
-You can run an indexer using any of these approaches:
+Indexer limits vary by workload. For each workload, the following job limits apply.
-+ Azure portal, using the **Run** command on the indexer page
-+ [Run Indexer (REST)](/rest/api/searchservice/run-indexer)
-+ [RunIndexers method](/dotnet/api/azure.search.documents.indexes.searchindexerclient.runindexer) in the Azure .NET SDK (or using the equivalent RunIndexer method in another SDK)
+| Workload | Maximum duration | Maximum jobs | Execution environment <sup>1</sup> |
+|---|---|---|---|
+| Text-based indexing | 24 hours | One per search unit <sup>2</sup> | Typically runs on the search service. |
+| Skills-based indexing | 2 hours | Indeterminate | Typically runs on an internally-managed, multi-tenant cluster. If skills-based indexing is executed off the search service, the number of concurrent jobs can exceed the maximum of one per search unit. |
-## Indexer execution
+<sup>1</sup> For optimum processing, a search service will determine an internal execution environment for the indexer operation. You cannot control or configure the environment, but depending on the number and complexity of tasks, the search service will either run the job itself, or offload computationally-intensive tasks to an internally-managed cluster, leaving more service-specific resources available for routine operations. The multi-tenant environment used for performing computationally-intensive tasks is managed and secured by Microsoft, at no extra cost to the customer.
-Indexer execution is subject to the following limits:
+<sup>2</sup> Search units can be [flexible combinations](search-capacity-planning.md#partition-and-replica-combinations) of partitions and replicas, and maximum indexer jobs are not tied to one or the other. In other words, if you have four units, you can have four text-based indexer jobs running concurrently, no matter how the search units are deployed.
-+ Maximum number of indexer jobs is 1 per replica.
+> [!TIP]
+> If you are [indexing a large data set](search-howto-large-index.md), you can stretch processing out by putting the indexer [on a schedule](search-howto-schedule-indexers.md). For the full list of all indexer-related limits, see [indexer limits](search-limits-quotas-capacity.md#indexer-limits).
- If indexer execution is already at capacity, you will get this notification: "Failed to run indexer '\<indexer-name\>', error: "Another indexer invocation is currently in progress; concurrent invocations are not allowed."
+## Run without reset
-+ Maximum running time is 2 hours if using a skillset, or 24 hours without.
+[Run Indexer](/rest/api/searchservice/run-indexer) will detect and process only what is necessary to synchronize the search index with changes in the underlying data source. Incremental indexing starts by locating an internal high-water mark to find the last updated search document, which becomes the starting point for indexer execution over new and updated documents in the data source.
- If you are [indexing a large data set](search-howto-large-index.md), you can stretch out processing by putting the indexer on a schedule. The Free tier has lower run time limits. For the full list, see [indexer limits](search-limits-quotas-capacity.md#indexer-limits)
+Change detection is essential for determining what's new or updated in the data source. If the content is unchanged, Run has no effect. Blob storage has built-in change detection through its LastModified property. Other data sources, such as Azure SQL or Cosmos DB, have to be configured for change detection before the indexer can read new and updated rows.
<a name="reset-indexers"></a>
-## Reset an indexer
+## How to reset and run indexers
+
+Reset clears the high-water mark. All documents in the search index will be flagged for full overwrite, without inline updates or merging into existing content. For indexers with a skillset and [enrichment caching](cognitive-search-incremental-indexing-conceptual.md), resetting the indexer will also implicitly reset the skillset.
+
+The actual work occurs when you follow a reset with a Run command:
++ All new documents found in the underlying source will be added to the search index.
++ All documents that exist in both the data source and search index will be overwritten in the search index.
++ Any enriched content created from skillsets will be rebuilt. The enrichment cache, if one is enabled, is refreshed.
+As previously noted, reset is a passive operation: you must follow up with a Run request to rebuild the index.
+
+Reset/run operations apply to a search index or a knowledge store, to specific documents or projections, and to cached enrichments if a reset explicitly or implicitly includes skills.
+
+Reset applies only to new and update operations. It will not trigger deletion or clean-up of orphaned documents in the search index. For more information about deleting documents, see [Add, Update or Delete Documents](/rest/api/searchservice/AddUpdate-or-Delete-Documents).
+
+Once you reset an indexer, you cannot undo the action.
+
+### [**Azure portal**](#tab/reset-indexer-portal)
+
+1. [Sign in to Azure portal](https://portal.azure.com) and open the search service page.
+1. On the **Overview** page, select the **Indexers** tab.
+1. Select an indexer.
+1. Select the **Reset** command, and then select **Yes** to confirm the action.
+1. Refresh the page to show the status. You can select the item to view its details.
+1. Select **Run** to start indexer processing, or wait for the next scheduled execution.
+
+ :::image type="content" source="media/search-howto-run-reset-indexers/portal-reset.png" alt-text="Screenshot of indexer execution portal page, with Reset command highlighted." border="true":::
+
+### [**REST**](#tab/reset-indexer-rest)
+
+The following example illustrates [**Reset Indexer**](/rest/api/searchservice/reset-indexer) and [**Run Indexer**](/rest/api/searchservice/run-indexer) REST calls. Use [**Get Indexer Status**](/rest/api/searchservice/get-indexer-status) to check results.
-Resetting an indexer is all encompassing. Within the search index, any search document that was originally populated by the indexer is marked for full processing. Any new documents found the underlying source will be added to the index as search documents. If the indexer is configured to use a skillset and [caching](search-howto-incremental-index.md), the skillset is rerun and the cache is refreshed.
+There are no parameters or properties for any of these calls.
+
+```http
+POST /indexers/[indexer name]/reset?api-version=[api-version]
+```
+
+```http
+POST /indexers/[indexer name]/run?api-version=[api-version]
+```
+
+```http
+GET /indexers/[indexer name]/status?api-version=[api-version]
+```
+
+### [**.NET SDK (C#)**](#tab/reset-indexer-csharp)
+
+The following example (from [azure-search-dotnet-samples/multiple-data-sources/](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/multiple-data-sources/v11/src/Program.cs)) illustrates the [**ResetIndexers**](/dotnet/api/azure.search.documents.indexes.searchindexerclient.resetindexer) and [**RunIndexers**](/dotnet/api/azure.search.documents.indexes.searchindexerclient.runindexer) methods in the Azure .NET SDK.
+
+```csharp
+// Reset the indexer if it already exists
+try
+{
+ await indexerClient.GetIndexerAsync(blobIndexer.Name);
+ // Reset the indexer if it exists.
+ await indexerClient.ResetIndexerAsync(blobIndexer.Name);
+}
+catch (RequestFailedException ex) when (ex.Status == 404) { }
-You can reset an indexer using any of these approaches, followed by an indexer run using one of the methods discussed above.
+await indexerClient.CreateOrUpdateIndexerAsync(blobIndexer);
-+ Azure portal, using the **Reset** command on the indexer page
-+ [Reset Indexer (REST)](/rest/api/searchservice/reset-indexer)
-+ [ResetIndexers method](/dotnet/api/azure.search.documents.indexes.searchindexerclient.resetindexer) in the Azure .NET SDK (or using the equivalent RunIndexer method in another SDK)
+// Run indexer
+Console.WriteLine("Running Blob Storage indexer...\n");
-A reset flag is cleared after the run is finished. Any regular change detection logic that is operative for your data source will resume on the next run, picking up any other new or updated values in the rest of the data set.
+try
+{
+ await indexerClient.RunIndexerAsync(blobIndexer.Name);
+}
+catch (RequestFailedException ex) when (ex.Status == 429)
+{
+ Console.WriteLine("Failed to run indexer: {0}", ex.Message);
+}
+```
-> [!NOTE]
-> A reset request determines what is reprocessed (indexer, skill, or document), but does not otherwise affect indexer runtime behavior. If the indexer has run time parameters, field mappings, caching, batch options, and so forth, those settings are all in effect when you run an indexer after having reset it.
+ <a name="reset-skills"></a>
-## Reset skills (preview)
+## How to reset skills (preview)
-For indexers that have skillsets, you can reset specific skills to force processing of that skill and any downstream skills that depend on its output. [Cached enrichments](search-howto-incremental-index.md) are also refreshed. Resetting skills invalidates the cached skill results, which is useful when a new version of a skill is deployed and you want the indexer to rerun that skill for all documents.
+For indexers that have skillsets, you can reset individual skills to force processing of just that skill and any downstream skills that depend on its output. The [enrichment cache](search-howto-incremental-index.md), if you enabled it, is also refreshed.
-[Reset Skills](/rest/api/searchservice/preview-api/reset-skills) is available through REST **`api-version=2020-06-30-Preview`** or later.
+[Reset Skills](/rest/api/searchservice/preview-api/reset-skills) is currently REST-only, available through `api-version=2020-06-30-Preview` or later.
```http
-POST https://[service name].search.windows.net/skillsets/[skillset name]/resetskills?api-version=2020-06-30-Preview
+POST /skillsets/[skillset name]/resetskills?api-version=2020-06-30-Preview
{ "skillNames" : [ "#1",
You can specify individual skills, as indicated in the example above, but if any
If no skills are specified, the entire skillset is executed and if caching is enabled, the cache is also refreshed.
+Remember to follow up with Run Indexer to invoke actual processing.
+ <a name="reset-docs"></a>
-## Reset docs (preview)
+## How to reset docs (preview)
-The [Reset documents API](/rest/api/searchservice/preview-api/reset-documents) accepts a list of document keys so that you can refresh specific documents. If specified, the reset parameters become the sole determinant of what gets processed, regardless of other changes in the underlying data. For example, if 20 blobs were added or updated since the last indexer run, but you only reset one document, only that one document will be processed.
+The [Reset Documents API](/rest/api/searchservice/preview-api/reset-documents) accepts a list of document keys so that you can refresh specific documents. If specified, the reset parameters become the sole determinant of what gets processed, regardless of other changes in the underlying data. For example, if 20 blobs were added or updated since the last indexer run, but you only reset one document, only that one document will be processed.
On a per-document basis, all fields in that search document are refreshed with values from the data source. You cannot pick and choose which fields to refresh.
-If the document is enriched through a skillset and has cached data, the skillset is invoked for just the specified documents, and the cached is updated for the reprocessed documents.
+If the document is enriched through a skillset and has cached data, the skillset is invoked for just the specified documents, and the cache is updated for the reprocessed documents.
When testing this API for the first time, the following APIs will help you validate and test the behaviors:
-+ [Get Indexer Status](/rest/api/searchservice/get-indexer-status) with API version **`api-version=2020-06-30-Preview`** or later, to check reset status and execution status. You can find information about the reset request at the end of the status response.
-+ [Reset Documents](/rest/api/searchservice/preview-api/reset-documents) with API version **`api-version=2020-06-30-Preview`** or later, to specify which documents to process.
-+ [Run Indexer](/rest/api/searchservice/run-indexer) to run the indexer (any API version).
-+ [Search Documents](/rest/api/searchservice/search-documents) to check for updated values, and also to return document keys if you are unsure of the value. Use `"select": "<field names>"` if you want to limit which fields appear in the response.
+1. Call [Get Indexer Status](/rest/api/searchservice/get-indexer-status) with API version `api-version=2020-06-30-Preview` or later, to check reset status and execution status. You can find information about the reset request at the end of the status response.
-### Formulate and send the reset request
+1. Call [Reset Documents](/rest/api/searchservice/preview-api/reset-documents) with API version `api-version=2020-06-30-Preview` or later, to specify which documents to process.
-```http
-POST https://[service name].search.windows.net/indexers/[indexer name]/resetdocs?api-version=2020-06-30-Preview
-{
- "documentKeys" : [
- "1001",
- "4452"
- ]
-}
-```
+ ```http
+ POST https://[service name].search.windows.net/indexers/[indexer name]/resetdocs?api-version=2020-06-30-Preview
+ {
+ "documentKeys" : [
+ "1001",
+ "4452"
+ ]
+ }
+ ```
+
+ + The document keys provided in the request are values from the search index, which can be different from the corresponding fields in the data source. If you are unsure of the key value, [send a query](search-query-create.md) to return the value. You can use `select` to return just the document key field (see the example query after these steps).
+
+ + For blobs that are parsed into multiple search documents (for example, if you used [jsonLines or jsonArrays](search-howto-index-json-blobs.md) or [delimitedText](search-howto-index-csv-blobs.md) as a parsing mode), the document key is generated by the indexer and might be unknown to you. In this situation, a query for the document key will be instrumental in providing the correct value.
+
+1. Call [Run Indexer](/rest/api/searchservice/run-indexer) (any API version) to process the documents you specified. Only those specific documents are indexed.
-The document keys provided in the request are values from the search index, which can be different from the corresponding fields in the data source. If you are unsure of the key value, [send a query](search-query-create.md) to return the value.You can use `select` to return just the document key field.
+1. Call [Run Indexer](/rest/api/searchservice/run-indexer) a second time to process from the last high-water mark.
-For blobs that are parsed into multiple search documents (for example, if you used [jsonLines or jsonArrays](search-howto-index-json-blobs.md), or [delimitedText](search-howto-index-csv-blobs.md)) as a parsing mode, the document key is generated by the indexer and might be unknown to you. In this situation, a query for the document key will be instrumental in providing the correct value.
+1. Call [Search Documents](/rest/api/searchservice/search-documents) to check for updated values, and also to return document keys if you are unsure of the value. Use `"select": "<field names>"` if you want to limit which fields appear in the response.
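For example, a query body like the following returns just the key field for up to 50 documents, assuming a hypothetical key field named `hotelId`:

```json
{
  "search": "*",
  "select": "hotelId",
  "top": 50
}
```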
-Calling the API multiple times with different keys appends the new keys to the list of document keys reset. Calling the API with the **`overwrite`** parameter set to true will overwrite the current list of document keys to be reset with the request's payload:
+### Overwriting the document key list
+
+Calling the Reset Documents API multiple times with different keys appends the new keys to the list of document keys reset. Calling the API with the **`overwrite`** parameter set to true will overwrite the current list with the new one:
```http POST https://[service name].search.windows.net/indexers/[indexer name]/resetdocs?api-version=2020-06-30-Preview
POST https://[service name].search.windows.net/indexers/[indexer name]/resetdocs
} ```
-## Check reset status
+## Check reset status ("currentState")
-To check the status of a reset and to see which document keys are queued up for processing, use [Get Indexer Status](/rest/api/searchservice/get-indexer-status) with **`api-version=06-30-2020-Preview`** or later. The preview API will return the **`currentState`** section, which you can find at the end of the Get Indexer Status response.
+To check reset status and to see which document keys are queued up for processing, follow these steps.
-The "mode" will be **`indexingAllDocs`** for Reset Skills (because potentially all documents are affected, for the fields that are populated through AI enrichment).
+1. Call [Get Indexer Status](/rest/api/searchservice/get-indexer-status) with `api-version=2020-06-30-Preview` or later.
-For Reset Documents, the mode is set to **`indexingResetDocs`**. The indexer retains this status until all the document keys provided in the reset documents call are processed and no other indexer jobs will execute while the operation is progressing. Finding all of the documents in the document keys list requires cracking each document to locate and match on the key, and this can take a while if the data set is large. If a blob container contains hundreds of blobs, and the docs you want to reset are at the end, the indexer won't find the matching blobs until all of the others have been checked first.
+ The preview API will return the **`currentState`** section, found at the end of the response.
-```json
-"currentState": {
- "mode": "indexingResetDocs",
- "allDocsInitialTrackingState": "{\"LastFullEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"LastAttemptedEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"NameHighWaterMark\":null}",
- "allDocsFinalTrackingState": "{\"LastFullEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"LastAttemptedEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"NameHighWaterMark\":null}",
- "resetDocsInitialTrackingState": null,
- "resetDocsFinalTrackingState": null,
- "resetDocumentKeys": [
- "200",
- "630"
- ]
-}
-```
+ ```json
+ "currentState": {
+ "mode": "indexingResetDocs",
+ "allDocsInitialTrackingState": "{\"LastFullEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"LastAttemptedEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"NameHighWaterMark\":null}",
+ "allDocsFinalTrackingState": "{\"LastFullEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"LastAttemptedEnumerationStartTime\":\"2021-02-06T19:02:07.0323764+00:00\",\"NameHighWaterMark\":null}",
+ "resetDocsInitialTrackingState": null,
+ "resetDocsFinalTrackingState": null,
+ "resetDocumentKeys": [
+ "200",
+ "630"
+ ]
+ }
+ ```
+
+1. Check the "mode":
+
+ For Reset Skills, "mode" should be set to **`indexingAllDocs`** (because potentially all documents are affected, in terms of the fields that are populated through AI enrichment).
+
+ For Reset Documents, "mode" should be set to **`indexingResetDocs`**. The indexer retains this status until all the document keys provided in the reset documents call are processed, and no other indexer jobs will execute while the operation is in progress. Finding all of the documents in the document keys list requires cracking each document to locate and match on the key, and this can take a while if the data set is large. If a blob container contains hundreds of blobs, and the docs you want to reset are at the end, the indexer won't find the matching blobs until all of the others have been checked first.
-After the documents are reprocessed, the indexer returns to the **`indexingAllDocs`** mode and will process any other new or updated documents on the next run.
+1. After the documents are reprocessed, run Get Indexer Status again. The indexer returns to the **`indexingAllDocs`** mode and will process any new or updated documents on the next run.
## Next steps
sentinel Multiple Workspace View https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/multiple-workspace-view.md
Title: Work with Microsoft Sentinel incidents in many workspaces at once | Micro
description: How to view incidents in multiple workspaces concurrently in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 01/09/2022
-# Work with incidents in many workspaces at once
+# Work with incidents in many workspaces at once
[!INCLUDE [Banner for top of topics](./includes/banner.md)]
-To take full advantage of Microsoft Sentinel's capabilities, Microsoft recommends using a single-workspace environment. However, there are some use cases that require having several workspaces, in some cases – for example, that of a [Managed Security Service Provider (MSSP)](./multiple-tenants-service-providers.md) and its customers – across multiple tenants. **Multiple Workspace View** lets you see and work with security incidents across several workspaces at the same time, even across tenants, allowing you to maintain full visibility and control of your organization's security responsiveness.
+To take full advantage of Microsoft Sentinel's capabilities, Microsoft recommends using a single-workspace environment. However, there are some use cases that require having several workspaces, in some cases – for example, that of a [Managed Security Service Provider (MSSP)](./multiple-tenants-service-providers.md) and its customers – across multiple tenants. **Multiple workspace view** lets you see and work with security incidents across several workspaces at the same time, even across tenants, allowing you to maintain full visibility and control of your organization's security responsiveness.
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-## Entering Multiple Workspace View
+## Entering multiple workspace view
-When you open Microsoft Sentinel, you are presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. To the left of each workspace name is a checkbox. Clicking the name of a single workspace will bring you into that workspace. To choose multiple workspaces, click all the corresponding checkboxes, and then click the **Multiple Workspace View** button at the top of the page.
+When you open Microsoft Sentinel, you are presented with a list of all the workspaces to which you have access rights, across all selected tenants and subscriptions. To the left of each workspace name is a checkbox. Selecting the name of a single workspace will bring you into that workspace. To choose multiple workspaces, select all the corresponding checkboxes, and then select the **View incidents** button at the top of the page.
> [!IMPORTANT]
-> Multiple Workspace View currently supports a maximum of 10 concurrently displayed workspaces.
->
+> Multiple workspace view currently supports a maximum of 10 concurrently displayed workspaces.
+>
> If you check more than 10 workspaces, a warning message will appear. Note that in the list of workspaces, you can see the directory, subscription, location, and resource group associated with each workspace. The directory corresponds to the tenant.
- ![Choose multiple workspaces](./media/multiple-workspace-view/workspaces.png)
## Working with incidents
-In **Multiple Workspace View**, only the **Incidents** screen is available for now. It looks and functions in most ways like the regular **Incidents** screen. There are a few important differences, though:
+Multiple workspace view is currently available only for incidents. This page looks and functions in most ways like the regular [Incidents](investigate-cases.md) page, with the following important differences:
- ![View incidents in multiple workspaces](./media/multiple-workspace-view/incidents.png)
-- The counters at the top of the page - *Open incidents*, *New incidents*, *In progress*, etc. - show the numbers for all of the selected workspaces collectively.+
+- The counters at the top of the page - *Open incidents*, *New incidents*, *Active incidents*, etc. - show the numbers for all of the selected workspaces collectively.
- You'll see incidents from all of the selected workspaces and directories (tenants) in a single unified list. You can filter the list by workspace and directory, in addition to the filters from the regular **Incidents** screen.
In **Multiple Workspace View**, only the **Incidents** screen is available for n
- If you choose a single incident and click **View full details** or **Actions** > **Investigate**, you will from then on be in the data context of that incident's workspace and no others. ## Next steps
-In this document, you learned how to view and work with incidents in multiple Microsoft Sentinel workspaces concurrently. To learn more about Microsoft Sentinel, see the following articles:
+
+In this article, you learned how to view and work with incidents in multiple Microsoft Sentinel workspaces concurrently. To learn more about Microsoft Sentinel, see the following articles:
+ - Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
vpn-gateway Vpn Gateway Howto Aws Bgp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-aws-bgp.md
+
+ Title: 'Tutorial - Configure a BGP-enabled connection between Azure and Amazon Web Services (AWS) using the portal'
+description: In this tutorial, learn how to connect Azure and AWS using an active-active VPN Gateway and two site-to-site connections on AWS.
+ Last updated : 12/2/2021
+# How to connect AWS and Azure using a BGP-enabled VPN gateway
+
+This article walks you through the setup of a BGP-enabled connection between Azure and Amazon Web Services (AWS). You'll use an Azure VPN gateway with BGP and active-active enabled and an AWS virtual private gateway with two site-to-site connections.
+
+## <a name="architecture"></a>Architecture
+In this setup, you'll create the following resources:
+
+Azure
+* One virtual network
+* One virtual network gateway with active-active and BGP enabled
+* Four local network gateways
+* Four site-to-site connections
+
+AWS
+* One virtual private cloud (VPC)
+* One virtual private gateway
+* Two customer gateways
+* Two site-to-site connections, each with two tunnels (total of four tunnels)
+
+A site-to-site connection on AWS has two tunnels, each with their own outside IP address and inside IPv4 CIDR (used for BGP APIPA). An active-passive VPN gateway only supports **one** custom BGP APIPA. You'll need to enable **active-active** on your Azure VPN gateway to connect to multiple AWS tunnels.
+
+On the AWS side, you'll create a customer gateway and site-to-site connection for **each of the two Azure VPN gateway instances** (total of four outgoing tunnels). In Azure, you'll need to create four local network gateways and four connections to receive these four AWS tunnels.
+++
+## <a name="apipa-config"></a> Choosing BGP APIPA Addresses
+
+You can use the values below for your BGP APIPA configuration throughout the tutorial.
+
+| **Tunnel** | **Azure Custom Azure APIPA BGP IP Address** | **AWS BGP Peer IP Address** | **AWS Inside IPv4 CIDR** |
+|--||-- | --|
+| **AWS Tunnel 1 to Azure Instance 0** | 169.254.21.2 | 169.254.21.1 | 169.254.21.0/30 |
+| **AWS Tunnel 2 to Azure Instance 0** | 169.254.22.2 | 169.254.22.1 | 169.254.22.0/30 |
+| **AWS Tunnel 1 to Azure Instance 1** | 169.254.21.6 | 169.254.21.5 | 169.254.21.4/30 |
+| **AWS Tunnel 2 to Azure Instance 1** | 169.254.22.6 | 169.254.22.5 | 169.254.22.4/30 |
+
+You can also set up your own custom APIPA addresses. AWS requires a /30 **Inside IPv4 CIDR** in the APIPA range of **169.254.0.0/16** for each tunnel. This CIDR must also be in the Azure-reserved APIPA range for VPN, which is from **169.254.21.0** to **169.254.22.255**. AWS will use the first IP address of your /30 inside CIDR and Azure will use the second. This means you'll need to reserve space for two IP addresses in your AWS /30 CIDR.
+
+For example, if you set your AWS **Inside IPv4 CIDR** to be **169.254.21.0/30**, AWS will use the BGP IP address **169.254.21.1** and Azure will use the IP address **169.254.21.2**.
+ >
+ > [!IMPORTANT]
+ >
+ > Your APIPA addresses must not overlap between the on-premises VPN devices and all connected Azure VPN gateways.
+ >
+
+## Prerequisites
+
+You must have both an Azure account and AWS account with an active subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
+
+## <a name ="part-1"></a> Part 1: Create an active-active VPN gateway in Azure
+
+### <a name ="create-vnet"></a> Create a VNet
+Create a virtual network with the following values by following the steps in the [create a gateway tutorial](./tutorial-create-gateway-portal.md#CreatVNet).
+
+* **Subscription**: If you have more than one subscription, verify that you're using the correct one.
+* **Resource group**: TestRG1
+* **Name**: VNet1
+* **Location**: East US
+* **IPv4 address space**: 10.1.0.0/16
+* **Subnet name**: FrontEnd
+* **Subnet address range**: 10.1.0.0/24
+
+### <a name ="create-gateway"></a> Create an active-active VPN gateway with BGP
+Create a VPN gateway using the following values:
+* **Name**: VNet1GW
+* **Region**: East US
+* **Gateway type**: VPN
+* **VPN type**: Route-based
+* **SKU**: VpnGw2
+* **Generation**: Generation 2
+* **Virtual network**: VNet1
+* **Gateway subnet address range**: 10.1.1.0/24
+* **Public IP address**: Create new
+* **Public IP address name**: VNet1GWpip
+* **Enable active-active mode**: Enabled
+* **SECOND PUBLIC IP ADDRESS**: Create new
+* **Public IP address 2 name**: VNet1GWpip2
+* **Configure BGP**: Enabled
+* **Autonomous system number (ASN)**: 65000
+* **Custom Azure APIPA BGP IP address**: 169.254.21.2, 169.254.22.2
+* **Second Custom Azure APIPA BGP IP address**: 169.254.21.6, 169.254.22.6
+
+1. In the Azure portal, navigate to the **Virtual network gateway** resource from the Marketplace, and select **Create**.
+2. Fill in the parameters as shown below.
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/create-gw-config.png" alt-text="Parameters for creating gateway" :::
+3. Enable active-active mode
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/create-gw-active-active.png" alt-text="Active active for creating gateway" :::
+ - Under Public IP Address, select **Enabled** for **Enable active-active mode**.
+ - Specify names for the first and second **Public IP address name**. These settings specify the public IP address object that gets associated to the VPN gateway. The public IP address is dynamically assigned to this object when the VPN gateway is created.
+4. Configure BGP
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/create-gw-bgp.png" alt-text="BGP for creating gateway" :::
+ - Select **Enabled** for **Configure BGP** to show the BGP configuration section.
+ - Fill in an **ASN (Autonomous System Number)**. This ASN must be different from the ASN used by AWS.
+ - Add two addresses to **Custom Azure APIPA BGP IP address**. Include the IP addresses for **AWS Tunnel 1 to Azure Instance 0** and **AWS Tunnel 2 to Azure Instance 0** from the [APIPA configuration you chose](#apipa-config). The second input will only appear after you add your first APIPA BGP IP address.
+ - Add two addresses to **Second Custom Azure APIPA BGP IP address**. Include the IP addresses for **AWS Tunnel 1 to Azure Instance 1** and **AWS Tunnel 2 to Azure Instance 1** from the [APIPA configuration you chose](#apipa-config). The second input will only appear after you add your first APIPA BGP IP address.
+5. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. You can see the deployment status on the Overview page for your gateway.
+
+## <a name ="part-2"></a> Part 2: Connect to your VPN gateway from AWS
+In this section, you'll connect to your Azure VPN gateway from AWS. For updated instructions, always refer to the [official AWS documentation](https://docs.aws.amazon.com/vpn/index.html).
+
+### <a name ="create-vpc"></a> Create a VPC
+Create a VPC using the values below and the [most recent AWS documentation](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/gsg_create_vpc.html#create_vpc).
+* **Name**: VPC1
+* **CIDR block**: 10.2.0.0/16
+
+Make sure that your CIDR block does not overlap with the virtual network you created in Azure.
+
+### <a name ="create-vpg"></a> Create a virtual private gateway
+Create a virtual private gateway using the values below and the [most recent AWS documentation](https://docs.aws.amazon.com/directconnect/latest/UserGuide/virtualgateways.html#create-virtual-private-gateway).
+* **Name**: AzureGW
+* **ASN**: Amazon default ASN (64512)
+* **VPC**: Attached to VPC1
+
+If you choose to use a custom ASN, make sure it's different than the ASN you used in Azure.
+
+### <a name ="enable-route-propagation"></a> Enable route propagation
+Enable route propagation on your virtual private gateway using the [most recent AWS documentation](https://docs.aws.amazon.com/vpc/latest/userguide/WorkWithRouteTables.html#EnableDisableRouteProp).
+
+### <a name ="create-customer-gateways"></a> Create customer gateways
+Create two customer gateways using the values below and the [most recent AWS documentation](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html#vpn-create-cgw).
+
+Customer gateway 1 settings
+
+* **Name**: ToAzureInstance0
+* **Routing**: Dynamic
+* **BGP ASN**: 65000 (the ASN for your Azure VPN gateway)
+* **IP Address**: the first public IP address of your Azure VPN gateway
+
+Customer gateway 2 settings
+
+* **Name**: ToAzureInstance1
+* **Routing**: Dynamic
+* **BGP ASN**: 65000 (the ASN for your Azure VPN gateway)
+* **IP Address**: the second public IP address of your Azure VPN gateway
+
+You can locate your **Public IP address** and your **Second Public IP address** on Azure in the **Configuration** section of your virtual network gateway.
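+
+As a scripted alternative, here is a sketch of both customer gateways with the AWS CLI; substitute your gateway's two public IP addresses for the placeholders:
+
+```bash
+# Customer gateway for Azure instance 0 (first public IP, Azure ASN 65000).
+aws ec2 create-customer-gateway --type ipsec.1 --bgp-asn 65000 \
+  --public-ip <first Azure public IP> \
+  --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=ToAzureInstance0}]'
+
+# Customer gateway for Azure instance 1 (second public IP).
+aws ec2 create-customer-gateway --type ipsec.1 --bgp-asn 65000 \
+  --public-ip <second Azure public IP> \
+  --tag-specifications 'ResourceType=customer-gateway,Tags=[{Key=Name,Value=ToAzureInstance1}]'
+```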
+
+### <a name ="create-aws-connections"></a> Create site-to-site VPN connections
+Create two site-to-site VPN connections using the values below and the [most recent AWS documentation](https://docs.aws.amazon.com/vpn/latest/s2svpn/SetUpVPNConnections.html#vpn-create-vpn-connection).
+
+Site-to-site connection 1 settings
+* **Name**: ToAzureInstance0
+* **Target Gateway Type**: Virtual Private Gateway
+* **Virtual Private Gateway**: AzureGW
+* **Customer Gateway**: Existing
+* **Customer Gateway**: ToAzureInstance0
+* **Routing Options**: Dynamic (requires BGP)
+* **Local IPv4 Network CIDR**: 0.0.0.0/0
+* **Tunnel Inside IP Version**: IPv4
+* **Inside IPv4 CIDR for Tunnel 1**: 169.254.21.0/30
+* **Pre-Shared Key for Tunnel 1**: choose a secure key
+* **Inside IPv4 CIDR for Tunnel 2**: 169.254.22.0/30
+* **Pre-Shared Key for Tunnel 2**: choose a secure key
+* **Startup Action**: Start
+
+Site-to-site connection 2 settings
+* **Name**: ToAzureInstance1
+* **Target Gateway Type**: Virtual Private Gateway
+* **Virtual Private Gateway**: AzureGW
+* **Customer Gateway**: Existing
+* **Customer Gateway**: ToAzureInstance1
+* **Routing Options**: Dynamic (requires BGP)
+* **Local IPv4 Network CIDR**: 0.0.0.0/0
+* **Tunnel Inside IP Version**: IPv4
+* **Inside IPv4 CIDR for Tunnel 1**: 169.254.21.4/30
+* **Pre-Shared Key for Tunnel 1**: choose a secure key
+* **Inside IPv4 CIDR for Tunnel 2**: 169.254.22.4/30
+* **Pre-Shared Key for Tunnel 2**: choose a secure key
+* **Startup Action**: Start
+
+For **Inside IPv4 CIDR for Tunnel 1** and **Inside IPv4 CIDR for Tunnel 2** for both connections, refer to the APIPA configuration you [chose](#apipa-config).
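+
+For reference, a sketch of the first connection with the AWS CLI; the gateway IDs and pre-shared keys are placeholders, and connection 2 repeats the call with the 169.254.21.4/30 and 169.254.22.4/30 inside CIDRs and the instance 1 customer gateway:
+
+```bash
+# Site-to-site connection 1: both AWS tunnels to Azure instance 0.
+aws ec2 create-vpn-connection \
+  --type ipsec.1 \
+  --vpn-gateway-id vgw-0abc123 \
+  --customer-gateway-id cgw-0abc123 \
+  --options '{
+    "LocalIpv4NetworkCidr": "0.0.0.0/0",
+    "TunnelInsideIpVersion": "ipv4",
+    "TunnelOptions": [
+      {"TunnelInsideCidr": "169.254.21.0/30", "PreSharedKey": "<key1>", "StartupAction": "start"},
+      {"TunnelInsideCidr": "169.254.22.0/30", "PreSharedKey": "<key2>", "StartupAction": "start"}
+    ]
+  }'
+```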
+
+## <a name ="part-3"></a> Part 3: Connect to your AWS customer gateways from Azure
+Next, you'll connect your AWS tunnels to Azure. For each of the four tunnels, you'll create both a local network gateway and a site-to-site connection.
+
+> [!IMPORTANT]
+> Repeat the following sections for **each of your four AWS tunnels**, using each tunnel's respective **Outside IP address**.
+
+### <a name ="create-local-network-gateways"></a> Create local network gateways
+1. In the Azure portal, search for the **Local network gateway** resource in the Marketplace, and select **Create**.
+2. Select the same **Subscription**, **Resource Group**, and **Region** you used to create your virtual network gateway.
+3. Enter a name for your local network gateway.
+4. Leave **IP Address** as the value for **Endpoint**.
+5. For **IP Address**, enter the **Outside IP Address** (from AWS) for the tunnel you're creating.
+6. Leave **Address Space** blank and select **Advanced**.
+
+7. Select **Yes** for **Configure BGP settings**.
+8. For **Autonomous system number (ASN)**, enter the ASN for your AWS virtual private gateway. Use the ASN **64512** if you kept the AWS default value.
+9. For **BGP peer IP address**, enter the AWS BGP Peer IP Address based on the [APIPA configuration you chose](#apipa-config).
+10. Select **Review + create**, and then select **Create** to create the local network gateway.
+
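+If you prefer the CLI, a sketch of one of the four local network gateways follows; the name, the outside IP placeholder, and the 169.254.21.1 peer address (AWS's side of Tunnel 1 under the default APIPA configuration) should be adjusted per tunnel:
+
+```bash
+# Local network gateway for AWS Tunnel 1 to Azure Instance 0.
+# Repeat with the outside IP and BGP peer IP of each of the four tunnels.
+az network local-gateway create \
+  --resource-group RG1 \
+  --name AWSTunnel1ToInstance0 \
+  --gateway-ip-address <outside IP of AWS Tunnel 1> \
+  --asn 64512 \
+  --bgp-peering-address 169.254.21.1
+```
+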
+### <a name ="create-azure-connections"></a> Create connections
+1. Open the page for your **virtual network gateway**, navigate to the **Connections** page, then select **Add**.
+2. Enter a name for your connection.
+3. Select **Site-to-Site** as the **Connection type**.
+4. Select the **local network gateway** you created.
+5. Enter the **Shared key (PSK)** that matches the pre-shared key you entered when making the AWS connections.
+6. Select **Enable BGP**, then **Enable Custom BGP Addresses**.
+7. Under **Custom BGP Addresses**:
+    * Enter the custom BGP address based on the [APIPA configuration you chose](#apipa-config).
+    * The **Custom BGP Address** (from the **Inside IPv4 CIDR** in AWS) must belong to the same AWS tunnel as the **IP Address** (the **Outside IP Address** in AWS) that you specified in the local network gateway you're using for this connection.
+    * Only one of the two custom BGP addresses will be used, depending on the tunnel you're specifying it for:
+    * For a connection from AWS to the **first public IP address** of your VPN gateway (instance 0), only the **Primary Custom BGP Address** is used.
+    * For a connection from AWS to the **second public IP address** of your VPN gateway (instance 1), only the **Secondary Custom BGP Address** is used.
+    * Leave the other **Custom BGP Address** at its default value.
+
+ If you used the [default APIPA configuration](#apipa-config), you can use the addresses below.
+
+ | Tunnel | Primary Custom BGP Address | Secondary Custom BGP Address |
+ |-|--|-|
+ | AWS Tunnel 1 to Azure Instance 0 | 169.254.21.2 | Not used (select 169.254.21.6)|
+ | AWS Tunnel 2 to Azure Instance 0 | 169.254.22.2 | Not used (select 169.254.21.6)|
+ | AWS Tunnel 1 to Azure Instance 1 | Not used (select 169.254.21.2) | 169.254.21.6 |
+ | AWS Tunnel 2 to Azure Instance 1 | Not used (select 169.254.21.2) | 169.254.22.6 |
+8. Leave the rest of the fields at their default values and select **OK**.
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/create-connection.png" alt-text="Modifying connection" :::
+
+9. From the **Connections** page for your VPN gateway, select the connection you created and navigate to the **Configuration** page.
+10. Select **ResponderOnly** for **Connection Mode**, and then select **Save**.
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/responder-only.png" alt-text="Make connections ResponderOnly" :::
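+
+A scripted sketch of one connection follows, reusing the placeholder names from the earlier sketches. The custom BGP (APIPA) addresses aren't set here because CLI support for them varies, so configure them in the portal as in step 7; the `connectionMode` property path is an assumption to verify against your CLI version.
+
+```bash
+# Connection for one tunnel; repeat for each of the four local network gateways.
+az network vpn-connection create \
+  --resource-group RG1 \
+  --name AWSTunnel1ToInstance0 \
+  --vnet-gateway1 VNet1GW \
+  --local-gateway2 AWSTunnel1ToInstance0 \
+  --shared-key "<pre-shared key used for this tunnel in AWS>" \
+  --enable-bgp
+
+# Switch the connection to ResponderOnly mode (property path unverified;
+# the portal steps above are the supported route).
+az network vpn-connection update \
+  --resource-group RG1 \
+  --name AWSTunnel1ToInstance0 \
+  --set connectionMode=ResponderOnly
+```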
+
+Verify that you have a **local network gateway** and **connection** for **each of your four AWS tunnels**.
+
+## <a name ="part-4"></a> Part 4: (Optional) Check the status of your connections
+### <a name ="verify-azure"></a> Check your connections status on Azure
+1. Open the page for your **virtual network gateway** and navigate to the **Connections** page.
+2. Verify that all four connections show as **Connected**.
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/verify-connections.png" alt-text="Verify Azure connections" :::
+
+### <a name ="verify-bgp-peers"></a> Check your BGP peers status on Azure
+1. Open the page for your **virtual network gateway** and navigate to the **BGP Peers** page.
+2. In the **BGP Peers** table, verify that all of the connections with the **Peer address** you specified show as **Connected** and are exchanging routes.
+
+ :::image type="content" source="./media/vpn-gateway-howto-aws-bgp/verify-bgp-peers.png" alt-text="Verify BGP Peers" :::
+
+### <a name ="verify-aws-status"></a> Check your connections status on AWS
+1. Open the [Amazon VPC console](https://console.aws.amazon.com/vpc/).
+2. In the navigation pane, select **Site-to-Site VPN Connections**.
+3. Select the first connection you made and then select the **Tunnel Details** tab.
+4. Verify that the **Status** of both tunnels shows as **UP**.
+5. Verify that the **Details** for both tunnels show one or more BGP routes.
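+
+You can also spot-check both sides from the command line. This is a sketch using the placeholder names from the earlier examples; the Azure commands report connection and BGP peer status, and the AWS command returns per-tunnel telemetry (replace `vpn-0abc123` with your VPN connection ID):
+
+```bash
+# Azure: per-connection status (expect "Connected").
+az network vpn-connection show --resource-group RG1 \
+  --name AWSTunnel1ToInstance0 --query connectionStatus
+
+# Azure: BGP peer status for the gateway (expect "Connected" peers with routes).
+az network vnet-gateway list-bgp-peer-status --resource-group RG1 \
+  --name VNet1GW --output table
+
+# AWS: tunnel status (expect "UP" in each tunnel's telemetry).
+aws ec2 describe-vpn-connections --vpn-connection-ids vpn-0abc123 \
+  --query 'VpnConnections[].VgwTelemetry'
+```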