Updates from: 04/07/2022 01:09:45
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCre
This command outputs the properties of the Azure AD Kerberos Server. You can review the properties to verify that everything is in good order. > [!NOTE]
-> Running against another domain by supplying the credential will connect over NTLM, and then it fails. If the users are in the Protected Users security group in Active Directory, complete these steps to resolve the issue: Sign in as another domain user in **ADConnect** and don't supply "-domainCredential". The Kereberos ticket of the user that's currently signed in is used. You can confirm by executing `whoami /groups` to validate whether the user has the required permissions in Active Directory to execute the preceding command.
+> Running against another domain by supplying the credential will connect over NTLM, and then it fails. If the users are in the Protected Users security group in Active Directory, complete these steps to resolve the issue: Sign in as another domain user in **ADConnect** and don't supply "-domainCredential". The Kerberos ticket of the user that's currently signed in is used. You can confirm by executing `whoami /groups` to validate whether the user has the required permissions in Active Directory to execute the preceding command.
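A minimal PowerShell sketch of the workaround described in this note, assuming it runs on the Azure AD Connect server while signed in as a domain user with the required permissions; the domain name is a placeholder and the module path is the one typically used by Azure AD Connect installations.

```powershell
# Sketch of the workaround: sign in as a domain user with the required permissions,
# confirm the memberships with whoami /groups, and omit -DomainCredential so the
# current user's Kerberos ticket is used instead of NTLM.
Import-Module "C:\Program Files\Microsoft Azure Active Directory Connect\AzureADKerberos\AzureAdKerberos.psd1"

whoami /groups   # verify the signed-in user has the required Active Directory permissions

$domain    = "contoso.corp.com"   # placeholder on-premises domain
$cloudCred = Get-Credential       # Azure AD Global Administrator credential

# Cloud credential only; no -DomainCredential, so the current Kerberos ticket is used.
Get-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred
```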
| Property | Description | | | |
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
If you need to create and configure a test account, use the following steps:
> Combined security registration can be enabled that configures SSPR and Azure AD Multi-Factor Authentication at the same time. For more information, see [Enable combined security information registration in Azure Active Directory](howto-registration-mfa-sspr-combined.md). > > You can also [force users to re-register authentication methods](howto-mfa-userdevicesettings.md#manage-user-authentication-options) if they previously only enabled SSPR.
+>
+> Users who connect to the NPS server using username and password will be required to complete a multi-factor authentication prompt.
## Install the NPS extension
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
The following requirements apply to the Azure AD Password Protection DC agent:
* All machines where the Azure AD Password Protection DC agent software will be installed must run Windows Server 2012 or later, including Windows Server Core editions. * The Active Directory domain or forest doesn't need to be at Windows Server 2012 domain functional level (DFL) or forest functional level (FFL). As mentioned in [Design Principles](concept-password-ban-bad-on-premises.md#design-principles), there's no minimum DFL or FFL required for either the DC agent or proxy software to run.
-* All machines where the Azure AD Password Protection proxy service will be installed must have .NET 4.7.2 installed.
+* All machines where the Azure AD Password Protection DC agent will be installed must have .NET 4.7.2 installed.
* If .NET 4.7.2 is not already installed, download and run the installer found at [The .NET Framework 4.7.2 offline installer for Windows](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2). * Any Active Directory domain that runs the Azure AD Password Protection DC agent service must use Distributed File System Replication (DFSR) for sysvol replication. * If your domain isn't already using DFSR, you must migrate before installing Azure AD Password Protection. For more information, see [SYSVOL Replication Migration Guide: FRS to DFS Replication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd640019(v=ws.10))
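A short sketch of how these prerequisites might be checked on a domain controller before installing the DC agent; the registry `Release` value of 461808 corresponds to .NET Framework 4.7.2.

```powershell
# Check that .NET Framework 4.7.2 or later is present (Release value 461808 or higher).
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
if ($release -ge 461808) {
    "OK: .NET Framework 4.7.2 or later is installed."
} else {
    "Install .NET Framework 4.7.2 before deploying the DC agent."
}

# Check that SYSVOL replication has been migrated to DFSR ('Eliminated' is the final state).
dfsrmig /getglobalstate
```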
active-directory Concept Condition Filters For Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md
description: Use filter for devices in Conditional Access to enhance security po
Previously updated : 02/28/2022 Last updated : 04/05/2022
The filter for devices condition in Conditional Access evaluates policy based on
| Include/exclude mode with positive operators (Equals, StartsWith, EndsWith, Contains, In) and use of any attributes | Unregistered device | No | | Include/exclude mode with positive operators (Equals, StartsWith, EndsWith, Contains, In) and use of attributes excluding extensionAttributes1-15 | Registered device | Yes, if criteria are met | | Include/exclude mode with positive operators (Equals, StartsWith, EndsWith, Contains, In) and use of attributes including extensionAttributes1-15 | Registered device managed by Intune | Yes, if criteria are met |
-| Include/exclude mode with positive operators (Equals, StartsWith, EndsWith, Contains, In) and use of attributes including extensionAttributes1-15 | Registered device not managed by Intune | Yes, if criteria are met and if device is compliant or Hybrid Azure AD joined |
+| Include/exclude mode with positive operators (Equals, StartsWith, EndsWith, Contains, In) and use of attributes including extensionAttributes1-15 | Registered device not managed by Intune | Yes, if criteria are met. When extensionAttributes1-15 are used, the policy will apply if device is compliant or Hybrid Azure AD joined |
| Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes | Unregistered device | Yes | | Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes excluding extensionAttributes1-15 | Registered device | Yes, if criteria are met | | Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes including extensionAttributes1-15 | Registered device managed by Intune | Yes, if criteria are met |
-| Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes including extensionAttributes1-15 | Registered device not managed by Intune | Yes, if criteria are met and if device is compliant or Hybrid Azure AD joined |
+| Include/exclude mode with negative operators (NotEquals, NotStartsWith, NotEndsWith, NotContains, NotIn) and use of any attributes including extensionAttributes1-15 | Registered device not managed by Intune | Yes, if criteria are met. When extensionAttributes1-15 are used, the policy will apply if device is compliant or Hybrid Azure AD joined |
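To illustrate the condition evaluated in this table, here's a hedged Microsoft Graph PowerShell sketch that creates a report-only Conditional Access policy with a filter for devices; the policy name, the extensionAttribute1 rule value, and the grant control are illustrative assumptions, not values from the article.

```powershell
# Sketch: Conditional Access policy that excludes devices matching a filter rule.
# The display name, rule value, and grant control below are illustrative only.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Example - require compliant device except tagged devices"
    state       = "enabledForReportingButNotEnforced"   # report-only while testing
    conditions  = @{
        users        = @{ includeUsers = @("All") }
        applications = @{ includeApplications = @("All") }
        devices      = @{
            deviceFilter = @{
                mode = "exclude"
                rule = 'device.extensionAttribute1 -eq "SharedKiosk"'
            }
        }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy
```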
## Next steps
active-directory Concept Conditional Access Conditions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md
Previously updated : 03/10/2022 Last updated : 04/05/2022
Azure AD Conditional Access supports the following device platforms:
- iOS - Windows - macOS-- Linux (Preview)
+- Linux
If you block legacy authentication using the **Other clients** condition, you can also set the device platform condition.
By selecting **Other clients**, you can specify a condition that affects apps th
## Device state (preview) > [!CAUTION]
-> **This preview feature has being deprecated.** Customers should use **Filter for devices** condition in Conditional Access to satisfy scenarios, previously achieved using device state (preview) condition.
+> **This preview feature has been deprecated.** Customers should use **Filter for devices** condition in Conditional Access to satisfy scenarios, previously achieved using device state (preview) condition.
The device state condition was used to exclude devices that are hybrid Azure AD joined and/or devices marked as compliant with a Microsoft Intune compliance policy from an organization's Conditional Access policies.
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Previously updated : 12/3/2021 Last updated : 04/04/2022 -+
The set of optional claims available by default for applications to use are list
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. | | `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value is not guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md#validate-the-user-has-permission-to-access-this-data). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. | | `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
-| `groups`| Optional formatting for group claims |JWT, SAML| |Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well. For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md)
+| `groups`| Optional formatting for group claims |JWT, SAML| |For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well.
| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This is the most accurate way for an API to determine if a token is an app token or an app+user token.| | `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user clicks on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you are operating in a guest scenario, where the user is from another tenant, then you must still provide a tenant identifier in the sign in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however it is exposed. | | `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
Within the SAML tokens, these claims will be emitted with the following URI form
This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest. > [!IMPORTANT]
-> For more details including important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
+> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more details on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
**Configuring groups optional claims through the UI:**
This section covers the configuration options under optional claims for changing
1. Select the application you want to configure optional claims for in the list. 1. Under **Manage**, select **Token configuration**. 1. Select **Add groups claim**.
-1. Select the group types to return (**Security groups**, or **Directory roles**, **All groups**, and/or **Groups assigned to the application**). The **Groups assigned to the application** option includes only groups assigned to the application. The **All Groups** option includes **SecurityGroup**, **DirectoryRole**, and **DistributionList**, but not **Groups assigned to the application**.
+1. Select the group types to return (**Security groups**, or **Directory roles**, **All groups**, and/or **Groups assigned to the application**):
+ - The **Groups assigned to the application** option includes only groups assigned to the application and is recommended for large organizations because of the limit on the number of groups a token can contain. To change the groups assigned to the application, select the application from the **Enterprise applications** list. Select **Users and groups** and then **Add user/group**. Select the group(s) you want to add to the application from **Users and groups**.
+ - The **All Groups** option includes **SecurityGroup**, **DirectoryRole**, and **DistributionList**, but not **Groups assigned to the application**.
1. Optional: select the specific token type properties to modify the groups claim value to contain on premises group attributes or to change the claim type to a role. 1. Select **Save**.
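The same group-claim behavior can also be set on the app registration itself. The following hedged Microsoft Graph PowerShell sketch sets `groupMembershipClaims` to emit only groups assigned to the application; the application object ID is a placeholder.

```powershell
# Sketch: emit only groups assigned to the application, which helps stay under
# the per-token group limits. The application object ID below is a placeholder.
Connect-MgGraph -Scopes "Application.ReadWrite.All"

$appObjectId = "00000000-0000-0000-0000-000000000000"

# "ApplicationGroup" limits the groups claim to groups assigned to the app;
# other manifest values include "SecurityGroup" and "All".
Update-MgApplication -ApplicationId $appObjectId -GroupMembershipClaims "ApplicationGroup"
```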
active-directory Msal Authentication Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-authentication-flows.md
In the preceding diagram:
## Implicit grant
-The implicit grant has been replaced by the [authorization code flow with PCKE](scenario-spa-overview.md) as the preferred and more secure token grant flow for client-side single page-applications (SPAs). If you're building a SPA, use the authorization code flow with PKCE instead.
+The implicit grant has been replaced by the [authorization code flow with PKCE](scenario-spa-overview.md) as the preferred and more secure token grant flow for client-side single page-applications (SPAs). If you're building a SPA, use the authorization code flow with PKCE instead.
Single-page web apps written in JavaScript (including frameworks like Angular, Vue.js, or React.js) are downloaded from the server and their code runs directly in the browser. Because their client-side code runs in the browser and not on a web server, they have different security characteristics than traditional server-side web applications. Prior to the availability of Proof Key for Code Exchange (PKCE) for the authorization code flow, the implicit grant flow was used by SPAs for improved responsiveness and efficiency in getting access tokens.
active-directory Howto Hybrid Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-azure-ad-join.md
Previously updated : 02/15/2022 Last updated : 04/06/2022
If you experience issues with completing hybrid Azure AD join for domain-joined
- [Troubleshooting devices using dsregcmd command](./troubleshoot-device-dsregcmd.md) - [Troubleshoot hybrid Azure AD join for Windows current devices](troubleshoot-hybrid-join-windows-current.md) - [Troubleshoot hybrid Azure AD join for Windows downlevel devices](troubleshoot-hybrid-join-windows-legacy.md)
+- [Troubleshoot pending device state](/troubleshoot/azure/active-directory/pending-devices)
## Next steps
active-directory Howto Hybrid Join Verify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-hybrid-join-verify.md
Previously updated : 01/20/2022 Last updated : 04/06/2022
For downlevel devices, see the article [Troubleshooting hybrid Azure Active Dire
1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices). 2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./device-management-azure-portal.md).
-3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices.
+3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle.
4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed. ## Using PowerShell
Get-MsolDevice -All -IncludeSystemManagedDevices | where {($_.DeviceTrustType -e
- [Downlevel device enablement](howto-hybrid-join-downlevel.md) - [Configure hybrid Azure AD join](howto-hybrid-azure-ad-join.md)
+- [Troubleshoot pending device state](/troubleshoot/azure/active-directory/pending-devices)
active-directory Hybrid Azuread Join Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-control.md
Previously updated : 01/20/2022 Last updated : 04/06/2022
To control the device registration, you should deploy the Windows Installer pack
> [!NOTE] > If a SCP is not configured in AD, then you should follow the same approach as described to [Configure client-side registry setting for SCP](#configure-client-side-registry-setting-for-scp)) on your domain-joined computers using a Group Policy Object (GPO).
+## Why a device might be in a pending state
+
+When you configure a **Hybrid Azure AD join** task in the Azure AD Connect Sync for your on-premises devices, the task will sync the device objects to Azure AD, and temporarily set the registered state of the devices to "pending" before the device completes the device registration. This is because the device must be added to the Azure AD directory before it can be registered. For more information about the device registration process, see [How it works: Device registration](device-registration-how-it-works.md#hybrid-azure-ad-joined-in-managed-environments).
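A hedged PowerShell sketch that lists devices still in the pending state, based on the MSOnline cmdlet shown in the verification article; the assumption is that a hybrid Azure AD joined device only gets an `AlternativeSecurityIds` value after registration completes.

```powershell
# Sketch: list synced, domain-joined devices that haven't completed registration yet.
Connect-MsolService

Get-MsolDevice -All -IncludeSystemManagedDevices |
    Where-Object { ($_.DeviceTrustType -eq 'Domain Joined') -and
                   (-not [string]($_.AlternativeSecurityIds)) } |
    Select-Object DisplayName, DeviceTrustType, ApproximateLastLogonTimestamp
```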
+ ## Post validation After you verify that everything works as expected, you can automatically register the rest of your Windows current and down-level devices with Azure AD by [configuring the SCP using Azure AD Connect](hybrid-azuread-join-managed-domains.md#configure-hybrid-azure-ad-join).
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
Previously updated : 11/05/2021 Last updated : 04/05/2022
Because subdomains inherit the authentication type of the root domain by default
1. Use the following example to GET the domain. Because the domain isn't a root domain, it inherits the root domain authentication type. Your command and results might look as follows, using your own tenant ID: ```http
- GET https://graph.windows.net/{tenant_id}/domains?api-version=1.6
+ GET https://graph.microsoft.com/v1.0/domains/foo.contoso.com/
Return: {
Because subdomains inherit the authentication type of the root domain by default
Use the following command to promote the subdomain: ```http
-POST https://graph.windows.net/{tenant_id}/domains/child.mydomain.com/promote?api-version=1.6
+POST https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote
```
+#### Promote command error conditions
+
+Scenario | Method | Code | Message
+--- | --- | --- | ---
+Invoking API with a subdomain whose parent domain is unverified | POST | 400 | Unverified domains cannot be promoted. Please verify the domain before promotion.
+Invoking API with a federated verified subdomain with user references | POST | 400 | Promoting a subdomain with user references is not allowed. Please migrate the users to the current root domain before promotion of the subdomain.
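For reference, the same GET and promote calls can also be issued from Microsoft Graph PowerShell. This is a hedged sketch using the example subdomain from the article; `Domain.ReadWrite.All` is assumed to be a sufficient permission.

```powershell
# Sketch: check the subdomain, then promote it to a root domain.
Connect-MgGraph -Scopes "Domain.ReadWrite.All"

# Review the subdomain's current authentication type and verification state.
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/domains/foo.contoso.com"

# Promote the subdomain.
Invoke-MgGraphRequest -Method POST -Uri "https://graph.microsoft.com/v1.0/domains/foo.contoso.com/promote"
```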
++ ### Change the subdomain authentication type 1. Use the following command to change the subdomain authentication type:
POST https://graph.windows.net/{tenant_id}/domains/child.mydomain.com/promote?ap
1. Verify via GET in Microsoft Graph API that subdomain authentication type is now managed: ```http
- GET https://graph.windows.net/{{tenant_id} }/domains?api-version=1.6
+ GET https://graph.microsoft.com/v1.0/domains/foo.contoso.com/
Return: {
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 03/29/2022 Last updated : 03/31/2022
This article describes how to set up federation with any organization whose iden
> >- We no longer support an allowlist of IdPs for new SAML/WS-Fed IdP federations. When you're setting up a new external federation, refer to [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records). >- In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
+> - Currently, you can add only one domain to your external federation. We're actively working on allowing additional domains.
> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records). ## When is a guest user authenticated with SAML/WS-Fed IdP federation?
You can also give guest users a direct link to an application or resource by inc
## Frequently asked questions ### Can I set up SAML/WS-Fed IdP federation with Azure AD verified domains?
-No, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. If you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD, you'll see an error in the Azure portal or PowerShell.
+No, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. If you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD, you'll see an error.
### Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists? Yes, you can set up SAML/WS-Fed IdP federation with domains that aren't DNS-verified in Azure AD, including unmanaged (email-verified or "viral") Azure AD tenants. Such tenants are created when a user redeems a B2B invitation or performs self-service sign-up for Azure AD using a domain that doesn't currently exist. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up federation with that domain.
Required attributes for the SAML 2.0 response from the IdP:
||| |AssertionConsumerService |`https://login.microsoftonline.com/login.srf` | |Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br></br> In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint (for example, `https://login.microsoftonline.com/<tenant ID>/`). For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Any existing federations configured with the global endpoint (for example, `urn:federation:MicrosoftOnline`) will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request sent by Azure AD.|
-|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
+|Issuer |The issuer URI of the partner's IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
Required claims for the SAML 2.0 token issued by the IdP:
Required attributes in the WS-Fed message from the IdP:
||| |PassiveRequestorEndpoint |`https://login.microsoftonline.com/login.srf` | |Audience |`https://login.microsoftonline.com/<tenant ID>/` (Recommended) Replace `<tenant ID>` with the tenant ID of the Azure AD tenant you're setting up federation with.<br></br> In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint (for example, `https://login.microsoftonline.com/<tenant ID>/`). For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Any existing federations configured with the global endpoint (for example, `urn:federation:MicrosoftOnline`) will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request sent by Azure AD. |
-|Issuer |The issuer URI of the partner IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
+|Issuer |The issuer URI of the partner's IdP, for example `http://www.example.com/exk10l6w90DHM0yi...` |
Required claims for the WS-Fed token issued by the IdP:
Required claims for the WS-Fed token issued by the IdP:
## Step 3: Configure SAML/WS-Fed IdP federation in Azure AD
-Next, you'll configure federation with the IdP configured in step 1 in Azure AD. You can use either the Azure AD portal or PowerShell. It might take 5-10 minutes before the federation policy takes effect. During this time, don't attempt to redeem an invitation for the federation domain. The following attributes are required:
+Next, you'll configure federation with the IdP configured in step 1 in Azure AD. You can use either the Azure AD portal or the [Microsoft Graph API](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta). It might take 5-10 minutes before the federation policy takes effect. During this time, don't attempt to redeem an invitation for the federation domain. The following attributes are required:
-- Issuer URI of partner IdP
+- Issuer URI of the partner's IdP
- Passive authentication endpoint of partner IdP (only https is supported) - Certificate ### To configure federation in the Azure AD portal
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
+1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
2. Select **External Identities** > **All identity providers**. 3. Select **New SAML/WS-Fed IdP**.
- ![Screenshot showing button for adding a new SAML or WS-Fed IdP](media/direct-federation/new-saml-wsfed-idp.png)
-
-4. On the **New SAML/WS-Fed IdP** page, under **Identity provider protocol**, select **SAML** or **WS-FED**.
+ ![Screenshot showing button for adding a new SAML or WS-Fed IdP.](media/direct-federation/new-saml-wsfed-idp.png)
- ![Screenshot showing parse button on the SAML or WS-Fed IdP page](media/direct-federation/new-saml-wsfed-idp-parse.png)
+4. On the **New SAML/WS-Fed IdP** page, enter the following:
+ - **Display name** - Enter a name to help you identify the partner's IdP.
+ - **Identity provider protocol** - Select **SAML** or **WS-Fed**.
+ - **Domain name of federating IdP** - Enter your partner's IdP target domain name for federation. Currently, one domain name is supported, but we're working on allowing more.
-5. Enter your partner organizationΓÇÖs domain name, which will be the target domain name for federation
-6. You can upload a metadata file to populate metadata details. If you choose to input metadata manually, enter the following information:
- - Domain name of partner IdP
- - Entity ID of partner IdP
- - Passive requestor endpoint of partner IdP
- - Certificate
- > [!NOTE]
- > Metadata URL is optional, however we strongly recommend it. If you provide the metadata URL, Azure AD can automatically renew the signing certificate when it expires. If the certificate is rotated for any reason before the expiration time or if you do not provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
+ ![Screenshot showing the new SAML or WS-Fed IdP page.](media/direct-federation/new-saml-wsfed-idp-parse.png)
-7. Select **Save**.
+5. Select a method for populating metadata. You can **Input metadata manually**, or if you have a file that contains the metadata, you can automatically populate the fields by selecting **Parse metadata file** and browsing for the file.
+ - **Issuer URI** - The issuer URI of the partner's IdP.
+ - **Passive authentication endpoint** - The partner IdP's passive requestor endpoint.
+ - **Certificate** - The signing certificate ID.
+ - **Metadata URL** - The location of the IdP's metadata for automatic renewal of the signing certificate.
-### To configure SAML/WS-Fed IdP federation in Azure AD using PowerShell
+ ![Screenshot showing metadata fields.](media/direct-federation/new-saml-wsfed-idp-input.png)
-1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)). If you need detailed steps, the Quickstart includes the guidance, [PowerShell module](b2b-quickstart-invite-powershell.md#prerequisites).
-2. Run the following command:
+ > [!NOTE]
+ > Metadata URL is optional, however we strongly recommend it. If you provide the metadata URL, Azure AD can automatically renew the signing certificate when it expires. If the certificate is rotated for any reason before the expiration time or if you do not provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
- ```powershell
- Connect-AzureAD
- ```
+6. Select **Save**.
-3. At the sign-in prompt, sign in with the managed Global Administrator account.
-4. Run the following commands, replacing the values from the federation metadata file. For AD FS Server and Okta, the federation file is federationmetadata.xml, for example: `https://sts.totheclouddemo.com/federationmetadata/2007-06/federationmetadata.xml`.
+### To configure federation using the Microsoft Graph API
- ```powershell
- $federationSettings = New-Object Microsoft.Open.AzureAD.Model.DomainFederationSettings
- $federationSettings.PassiveLogOnUri ="https://sts.totheclouddemo.com/adfs/ls/"
- $federationSettings.LogOffUri = $federationSettings.PassiveLogOnUri
- $federationSettings.IssuerUri = "http://sts.totheclouddemo.com/adfs/services/trust"
- $federationSettings.MetadataExchangeUri="https://sts.totheclouddemo.com/adfs/services/trust/mex"
- $federationSettings.SigningCertificate= <Replace with X509 signing cert's public key>
- $federationSettings.PreferredAuthenticationProtocol="WsFed" OR "Samlp"
- $domainName = <Replace with domain name>
- New-AzureADExternalDomainFederation -ExternalDomainName $domainName -FederationSettings $federationSettings
- ```
+You can use the Microsoft Graph API [samlOrWsFedExternalDomainFederation](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta) resource type to set up federation with an identity provider that supports either the SAML or WS-Fed protocol.
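As a rough illustration of the Graph-based setup, the following PowerShell sketch posts a new external federation configuration; the beta endpoint path, payload shape, permission scope, URLs, and certificate value are all assumptions or placeholders, not values confirmed by this article.

```powershell
# Sketch only: create a SAML/WS-Fed external domain federation through the beta
# Microsoft Graph endpoint. Endpoint path, payload shape, and values are assumptions.
Connect-MgGraph -Scopes "IdentityProvider.ReadWrite.All"

$body = @{
    "@odata.type"    = "#microsoft.graph.samlOrWsFedExternalDomainFederation"
    displayName      = "Fabrikam federation"                  # placeholder
    issuerUri        = "https://idp.fabrikam.com/issuer"      # placeholder
    passiveSignInUri = "https://idp.fabrikam.com/signin"      # placeholder
    preferredAuthenticationProtocol = "wsFed"
    signingCertificate = "MIIDdDCCAly..."                     # truncated placeholder
    domains          = @(
        @{ "@odata.type" = "#microsoft.graph.externalDomainName"; id = "fabrikam.com" }
    )
}

Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/directory/federationConfigurations/graph.samlOrWsFedExternalDomainFederation" `
    -Body ($body | ConvertTo-Json -Depth 5)
```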
## Step 4: Test SAML/WS-Fed IdP federation in Azure AD Now test your federation setup by inviting a new B2B guest user. For details, see [Add Azure AD B2B collaboration users in the Azure portal](add-users-administrator.md).
-## How do I edit a SAML/WS-Fed IdP federation relationship?
+## How do I update the certificate or configuration details?
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-2. Select **External Identities**.
-3. Select **All identity providers**
-4. Under **SAML/WS-Fed identity providers**, select the provider.
-5. In the identity provider details pane, update the values.
-6. Select **Save**.
+On the **All identity providers** page, you can view the list of SAML/WS-Fed identity providers you've configured and their certificate expiration dates. From this list, you can renew certificates and modify other configuration details.
+![Screenshot showing an identity provider in the SAML WS-Fed list](media/direct-federation/saml-ws-fed-identity-provider-list.png)
-## How do I remove federation?
+1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
+1. Select **External Identities**.
+1. Select **All identity providers**.
+1. Under **SAML/WS-Fed identity providers**, scroll to an identity provider in the list or use the search box.
+1. To update the certificate or modify configuration details:
+ - In the **Configuration** column for the identity provider, select the **Edit** link.
+ - On the configuration page, modify any of the following details:
+ - **Display name** - Display name for the partner's organization.
+ - **Identity provider protocol** - Select **SAML** or **WS-Fed**.
+ - **Passive authentication endpoint** - The partner IdP's passive requestor endpoint.
+ - **Certificate** - The ID of the signing certificate. To renew it, enter a new certificate ID.
+ - **Metadata URL** - The URL containing the partner's metadata, used for automatic renewal of the signing certificate.
+ - Select **Save**.
+
+ ![Screenshot of the IDP configuration details.](media/direct-federation/modify-configuration.png)
+
+1. To view the domain for the IdP, select the link in the **Domains** column to view the partner's target domain name for federation.
+ > [!NOTE]
+ > If you need to update the partner's domain, you'll need to [delete the configuration](#how-do-i-remove-federation) and reconfigure federation with the identity provider using the new domain.
-You can remove your federation setup. If you do, federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
-To remove federation with an IdP in the Azure AD portal:
+ ![Screenshot of the domain configuration page](media/direct-federation/view-domain.png)
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-2. Select **External Identities**.
-3. Select **All identity providers**.
-4. Select the identity provider, and then select **Delete**.
-5. Select **Yes** to confirm deletion.
+## How do I remove federation?
-To remove federation with an identity provider by using PowerShell:
+You can remove your federation configuration. If you do, federation guest users who have already redeemed their invitations won't be able to sign in. But you can give them access to your resources again by [resetting their redemption status](reset-redemption-status.md).
+To remove a configuration for an IdP in the Azure AD portal:
-1. Install the latest version of the Azure AD PowerShell for Graph module ([AzureADPreview](https://www.powershellgallery.com/packages/AzureADPreview)).
-2. Run the following command:
+1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
+1. Select **External Identities**.
+1. Select **All identity providers**.
+1. Under **SAML/WS-Fed identity providers**, scroll to the identity provider in the list or use the search box.
+1. Select the link in the **Domains** column to view the IdP's domain details.
+1. Select **Delete Configuration**.
- ```powershell
- Connect-AzureAD
- ```
+ ![Screenshot of deleting a configuration.](media/direct-federation/delete-configuration.png)
-3. At the sign-in prompt, sign in with the managed Global Administrator account.
-4. Enter the following command:
+1. Select **OK** to confirm deletion.
- ```powershell
- Remove-AzureADExternalDomainFederation -ExternalDomainName $domainName
- ```
+You can also remove federation using the Microsoft Graph API [samlOrWsFedExternalDomainFederation](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta) resource type.
## Next steps
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Previously updated : 01/05/2022 Last updated : 04/05/2022
Azure Active Directory (Azure AD) can provide a user's group membership informat
- Groups identified by their Azure AD object identifier (OID) attribute - Groups identified by the `sAMAccountName` or `GroupSID` attribute for Active Directory-synchronized groups and users
+> [!IMPORTANT]
+> The number of groups emitted in a token is limited to 150 for SAML assertions and 200 for JWT, including nested groups. In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Exceeding a limit can lead to unpredictable results. For workarounds to these limits, read more in [Important caveats for this functionality](#important-caveats-for-this-functionality).
+ ## Important caveats for this functionality - Support for use of `sAMAccountName` and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from Active Directory Federation Services (AD FS) and other identity providers. Groups managed in Azure AD don't contain the attributes necessary to emit these claims.-- In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Those limits are 150 groups for a SAML token and 200 for a JSON Web Token (JWT). Exceeding a limit can lead to unpredictable results. -
- If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application. If assigning groups to your applications is not possible, you can configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim.
+- To stay under the group limit when your users have large numbers of group memberships, you can restrict the groups emitted in claims to the relevant groups for the application. Read more about emitting groups assigned to the application for [JWT tokens](..\develop\active-directory-optional-claims.md#configuring-groups-optional-claims) and [SAML tokens](#add-group-claims-to-tokens-for-saml-applications-using-sso-configuration). If assigning groups to your applications is not possible, you can also configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim. Group filtering applies to tokens emitted for apps where group claims and filtering were configured in the **Enterprise apps** blade in the portal.
- Group claims have a five-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a `"hasgroups":true` claim only if the user is in more than five groups. - We recommend basing in-app authorization on application roles rather than groups when:
To configure group claims for a gallery or non-gallery SAML application via sing
| **All groups** | Emits security groups and distribution lists and roles. | | **Security groups** | Emits security groups that the user is a member of in the groups claim. | | **Directory roles** | If the user is assigned directory roles, they're emitted as a `wids` claim. (The group's claim won't be emitted.) |
- | **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
+ | **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and that the user is a member of. Recommended for large organizations because of the limit on the number of groups a token can contain. |
- For example, to emit all the security groups that the user is a member of, select **Security groups**.
Some applications require the group membership information to appear in the role
> If you use the option to emit group data as roles, only groups will appear in the role claim. Any application roles that the user is assigned to won't appear in the role claim. #### Group filtering
-Group filtering allows for fine control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the group's claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
+Group filtering allows for fine control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the group's claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
+
+> [!NOTE]
+> Group filtering applies to tokens emitted for apps where group claims and filtering were configured in the **Enterprise apps** blade in the portal.
You can configure filters to be applied to the group's display name or `SAMAccountName` attribute. The following filtering operations are supported:
active-directory Grant Consent Single User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/grant-consent-single-user.md
When a user grants consent on his or her own behalf, the following events occur:
To grant consent to an application on behalf of one user, you need: -- A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Global Administrator or Privileged Administrator role.
+- A user account with the Global Administrator, Application Administrator, or Cloud Application Administrator role.
## Grant consent on behalf of a single user
For this example, we'll use [Microsoft Graph PowerShell](/graph/powershell/get-s
# The app for which consent is being granted. In this example, we're granting access # to Microsoft Graph Explorer, an application published by Microsoft. $clientAppId = "de8bc8b5-d9f9-48b1-a8ad-b748da725064" # Microsoft Graph Explorer+ # The API to which access will be granted. Microsoft Graph Explorer makes API # requests to the Microsoft Graph API, so we'll use that here. $resourceAppId = "00000003-0000-0000-c000-000000000000" # Microsoft Graph API+ # The permissions to grant. Here we're including "openid", "profile", "User.Read" # and "offline_access" (for basic sign-in), as well as "User.ReadBasic.All" (for # reading other users' basic profile). $permissions = @("openid", "profile", "offline_access", "User.Read", "User.ReadBasic.All")+ # The user on behalf of whom access will be granted. The app will be able to access # the API on behalf of this user. $userUpnOrId = "user@example.com"+ # Step 0. Connect to Microsoft Graph PowerShell. We need User.ReadBasic.All to get # users' IDs, Application.ReadWrite.All to list and create service principals, # DelegatedPermissionGrant.ReadWrite.All to create delegated permission grants,
$userUpnOrId = "user@example.com"
Connect-MgGraph -Scopes ("User.ReadBasic.All Application.ReadWrite.All " ` + "DelegatedPermissionGrant.ReadWrite.All " ` + "AppRoleAssignment.ReadWrite.All")+ # Step 1. Check if a service principal exists for the client application. # If one does not exist, create it. $clientSp = Get-MgServicePrincipal -Filter "appId eq '$($clientAppId)'" if (-not $clientSp) { $clientSp = New-MgServicePrincipal -AppId $clientAppId }+ # Step 2. Create a delegated permission that grants the client app access to the # API, on behalf of the user. (This example assumes that an existing delegated # permission grant does not already exist, in which case it would be necessary
$grant = New-MgOauth2PermissionGrant -ResourceId $resourceSp.Id `
-ClientId $clientSp.Id ` -ConsentType "Principal" ` -PrincipalId $user.Id+ # Step 3. Assign the app to the user. This ensures that the user can sign in if assignment # is required, and ensures that the app shows up under the user's My Apps. if ($clientSp.AppRoles | ? { $_.AllowedMemberTypes -contains "User" }) {
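A condensed, hedged sketch of the core consent grant from the script above, assuming `$permissions`, `$clientSp`, `$resourceSp`, and `$user` were resolved in the earlier steps.

```powershell
# Condensed sketch: grant the delegated permissions on behalf of the single user.
$grant = New-MgOauth2PermissionGrant `
    -ClientId    $clientSp.Id `
    -ConsentType "Principal" `
    -PrincipalId $user.Id `
    -ResourceId  $resourceSp.Id `
    -Scope       ($permissions -join " ")

# Optionally assign the app to the user so it shows up under My Apps.
New-MgUserAppRoleAssignment -UserId $user.Id `
    -PrincipalId $user.Id `
    -ResourceId  $clientSp.Id `
    -AppRoleId   "00000000-0000-0000-0000-000000000000"
```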
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering|Required by Meta Networks Connector| ||||| |userName|String|&check;|&check;
+ |active|Boolean||
+ |phonenumbers[type eq "work"].value|String||
|name.givenName|String||&check; |name.familyName|String||&check;
- |active|Boolean||
- |phonenumbers[type eq "work"].value|String||
> [!NOTE] > phonenumbers value should be in E164 format. For example +16175551212
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change Log
+04/06/2022 - Added support for **phoneNumbers[type eq "work"].value**. Removed support for **emails[type eq "work"].value** and **manager**. Made **name.givenName** and **name.familyName** required attributes.
## More resources
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) drivers for Azure Disks on Azure Ku
description: Learn how to use the Container Storage Interface (CSI) drivers for Azure disks in an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/09/2022 Last updated : 04/06/2022
Filesystem Size Used Avail Use% Mounted on
/dev/sdc 15G 46M 15G 1% /mnt/azuredisk ```
-## Shared disk
-
-[Azure shared disks](../virtual-machines/disks-shared.md) is an Azure managed disks feature that enables attaching an Azure disk to agent nodes simultaneously. Attaching a managed disk to multiple agent nodes allows you, for example, to deploy new or migrate existing clustered applications to Azure.
-
-> [!IMPORTANT]
-> Currently, only raw block device (`volumeMode: Block`) is supported by the Azure disk CSI driver. Applications should manage the coordination and control of writes, reads, locks, caches, mounts, and fencing on the shared disk, which is exposed as a raw block device.
-
-Let's create a file called `shared-disk.yaml` by copying the following command that contains the shared disk storage class and PVC:
-
-```yaml
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
- name: managed-csi-shared
-provisioner: disk.csi.azure.com
-parameters:
- skuname: Premium_LRS
- maxShares: "2"
- cachingMode: None # ReadOnly cache is not available for premium SSD with maxShares>1
-reclaimPolicy: Delete
-
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
- name: pvc-azuredisk-shared
-spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 256Gi # minimum size of shared disk is 256GB (P15)
- volumeMode: Block
- storageClassName: managed-csi-shared
-```
-
-Create the storage class with the [kubectl apply][kubectl-apply] command, and specify your `shared-disk.yaml` file:
-
-```console
-$ kubectl apply -f shared-disk.yaml
-
-storageclass.storage.k8s.io/managed-csi-shared created
-persistentvolumeclaim/pvc-azuredisk-shared created
-```
-
-Now let's create a file called `deployment-shared.yml` by copying the following command:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- labels:
- app: nginx
- name: deployment-azuredisk
-spec:
- replicas: 2
- selector:
- matchLabels:
- app: nginx
- template:
- metadata:
- labels:
- app: nginx
- name: deployment-azuredisk
- spec:
- containers:
- - name: deployment-azuredisk
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- volumeDevices:
- - name: azuredisk
- devicePath: /dev/sdx
- volumes:
- - name: azuredisk
- persistentVolumeClaim:
- claimName: pvc-azuredisk-shared
-```
-
-Create the deployment with the [kubectl apply][kubectl-apply] command, and specify your `deployment-shared.yml` file:
-
-```console
-$ kubectl apply -f deployment-shared.yml
-
-deployment/deployment-azuredisk created
-```
-
-Finally, let's check the block device inside the pod:
-
-```console
-# kubectl exec -it deployment-sharedisk-7454978bc6-xh7jp sh
-/ # dd if=/dev/zero of=/dev/sdx bs=1024k count=100
-100+0 records in
-100+0 records out/s
-```
- ## Windows containers The Azure disk CSI driver also supports Windows nodes and containers. If you want to use Windows containers, follow the [Windows containers tutorial](windows-container-cli.md) to add a Windows node pool.
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Yes, an ingress controller that supports Windows Server containers can run on Wi
## Can my Windows Server containers use gMSA?
-Group-managed service account (gMSA) support is currently unavailable in AKS.
+Group-managed service account (gMSA) support is currently available in preview. See [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster (Preview)](use-group-managed-service-accounts.md).
## Can I use Azure Monitor for containers with Windows nodes and containers?
api-management Get Started Create Service Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance-cli.md
description: Create a new Azure API Management service instance by using the Azu
- Previously updated : 09/10/2020+ Last updated : 03/29/2022 ms.devlang: azurecli # Quickstart: Create a new Azure API Management service instance by using the Azure CLI
-Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
+Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM lets you create and manage modern API gateways for existing backend services hosted anywhere. For more information about APIM, see the [Overview](api-management-key-concepts.md).
This quickstart describes the steps for creating a new API Management instance using [az apim](/cli/azure/apim) commands in the Azure CLI.
This quickstart describes the steps for creating a new API Management instance u
## Create a resource group
-Azure API Management instances, like all Azure resources, must be deployed into a resource group. Resource groups allow you to organize and manage related Azure resources.
+Azure API Management instances, like all Azure resources, must be deployed into a resource group. Resource groups let you organize and manage related Azure resources.
First, create a resource group named *myResourceGroup* in the Central US location with the following [az group create](/cli/azure/group#az-group-create) command:
az group create --name myResourceGroup --location centralus
## Create a new service
-Now that you have a resource group, you can create an API Management service instance. Create one by using the [az apim create](/cli/azure/apim#az-apim-create) command and provide a service name and publisher details. The service name must be unique within Azure.
+Now that you have a resource group, you can create an API Management service instance. Create one by using the [az apim create](/cli/azure/apim#az-apim-create) command and provide a service name and publisher details. The service name must be unique within Azure.
-In the following example, *myapim* is used for the service name. Update the name to a unique value. Also update the name of the API publisher's organization and the email address to receive notifications.
+In the following example, *myapim* is used for the service name. Update the name to a unique value. Also update the name of the API publisher's organization and the email address to receive notifications.
```azurecli-interactive az apim create --name myapim --resource-group myResourceGroup \
When your API Management service instance is online, you're ready to use it. Sta
## Clean up resources
-When no longer needed, you can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and the API Management service instance.
+You can use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and the API Management service instance when they aren't needed.
```azurecli-interactive az group delete --name myResourceGroup
api-management Powershell Create Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/powershell-create-service-instance.md
documentationcenter: ''
- Previously updated : 09/14/2020+ Last updated : 03/30/2022 # Quickstart: Create a new Azure API Management service instance by using PowerShell
-Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM enables you to create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
+Azure API Management (APIM) helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. APIM lets you create and manage modern API gateways for existing backend services hosted anywhere. For more information, see the [Overview](api-management-key-concepts.md).
This quickstart describes the steps for creating a new API Management instance by using Azure PowerShell cmdlets.
New-AzResourceGroup -Name myResourceGroup -Location WestUS
Now that you have a resource group, you can create an API Management service instance. Create one by using [New-AzApiManagement](/powershell/module/az.apimanagement/new-azapimanagement) and provide a service name and publisher details. The service name must be unique within Azure.
-In the following example, *myapim* is used for the service name. Update the name to a unique value. Also update the organization name of the API publisher and the admin email address to receive notifications.
+In the following example, *myapim* is used for the service name. Update the name to a unique value. Also, update the organization name of the API publisher and the admin email address to receive notifications.
By default, the command creates the instance in the Developer tier, an economical option to evaluate Azure API Management. This tier isn't for production use. For more information about scaling the API Management tiers, see [upgrade and scale](upgrade-and-scale.md). > [!NOTE]
-> This is a long-running operation. It can take between 30 and 40 minutes to create and activate an API Management service in this tier.
+> This is a long-running operation. It can take between 30 and 40 minutes to create and activate an API Management service in this tier.
```azurepowershell-interactive New-AzApiManagement -Name "myapim" -ResourceGroupName "myResourceGroup" ` -Location "West US" -Organization "Contoso" -AdminEmail "admin@contoso.com" ```
-When the command returns, run [Get-AzApiManagement](/powershell/module/az.apimanagement/get-azapimanagement) to view the properties of the Azure API Management service. After activation, the provisioning status is Succeeded and the service instance has several associated URLs. For example:
+When the command returns, run [Get-AzApiManagement](/powershell/module/az.apimanagement/get-azapimanagement) to view the properties of the Azure API Management service. After activation, the provisioning status is Succeeded and the service instance has several associated URLs. For example:
```azurepowershell-interactive Get-AzApiManagement -Name "myapim" -ResourceGroupName "myResourceGroup"
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
az webapp config appsettings list --name <app-name> --resource-group <resource-g
To show all supported Node.js versions, navigate to `https://<sitename>.scm.azurewebsites.net/api/diagnostics/runtime` or run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --os windows | grep node
+az webapp list-runtimes --os windows | grep NODE
``` ::: zone-end
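On Windows, the Node.js version is driven by the `WEBSITE_NODE_DEFAULT_VERSION` app setting. A minimal sketch of pinning a version, assuming placeholder app and resource group names (the `~16` value targets the latest available 16.x runtime):
```azurecli-interactive
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings WEBSITE_NODE_DEFAULT_VERSION="~16"
```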
automation Automation Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-services.md
The following table describes the scenarios and users for ARM template and Bicep
### [Azure Automation](/azure/automation/overview)
-Orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environment.
-It provides a persistent shared assets including variables, connections, objects that allow orchestration of complex jobs. [Learn more](/azure/automation/automation-runbook-gallery).
+Azure Automation orchestrates repetitive processes using graphical, PowerShell, and Python runbooks in the cloud or hybrid environments. It provides persistent shared assets, including variables, connections, and objects, that allow orchestration of complex jobs. [Learn more](/azure/automation/automation-runbook-gallery).
+
+There are more than 3,000 modules in the PowerShell Gallery, and the PowerShell community continues to grow. Because Azure Automation is built on PowerShell modules, it can work with applications from many vendors, both first party and third party. As more application vendors release PowerShell modules for integration, extensibility, and automation tasks, you can often run an existing PowerShell script as-is as a PowerShell runbook in an Automation account, without making any changes.
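As a hedged illustration of that workflow, the Azure CLI automation extension can import, publish, and start an existing script as a runbook. The account, runbook, and script names below are placeholders, and the exact parameter names are worth confirming with `az automation runbook --help` for your CLI version:

```azurecli
# Create an empty PowerShell runbook in an existing Automation account (names are placeholders)
az automation runbook create --resource-group myResourceGroup --automation-account-name myAutomationAccount \
    --name MyExistingScript --type "PowerShell"

# Upload the existing script as the runbook content, then publish and start it
az automation runbook replace-content --resource-group myResourceGroup --automation-account-name myAutomationAccount \
    --name MyExistingScript --content @./my-existing-script.ps1
az automation runbook publish --resource-group myResourceGroup --automation-account-name myAutomationAccount \
    --name MyExistingScript
az automation runbook start --resource-group myResourceGroup --automation-account-name myAutomationAccount \
    --name MyExistingScript
```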
**Scenarios** | **Users** |
- | Schedule tasks, for example ΓÇô Stop dev/test VMs or services at night and turn on during the day. </br> </br> Response to alerts such as system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage to automate on-premises servers such as SQL Server, Active Directory and so on. </br> </br> Azure resource life-cycle management and governance include resource provisioning, de-provisioning, adding correct tags, locks, NSGs and so on. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators manage the on-premises infrastructure using scripts or executing long-running jobs such as month-end operations on servers running on-premises.
+ | Write an [Automation PowerShell runbook](/azure/automation/learn/powershell-runbook-managed-identity) that deploys an Azure resource by using an [Azure Resource Manager template](/azure/azure-resource-manager/templates/quickstart-create-templates-use-the-portal).</br> </br> Schedule tasks, for example, stop dev/test VMs or services at night and turn them on during the day. </br> </br> Respond to alerts such as system alerts, service alerts, and high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation where you can manage and automate on-premises servers such as SQL Server, Active Directory, and so on. </br> </br> Azure resource lifecycle management and governance, including resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting. </br> </br> Infrastructure administrators who manage the on-premises infrastructure using scripts or run long-running jobs such as month-end operations on servers running on-premises.
### Azure Automation based in-guest management
Orchestrates repetitive processes using graphical, PowerShell, and Python runboo
**Scenarios** | **Users** |
- | Respond to system alerts, service alerts, high CPU/memory alerts, create ServiceNow tickets, and so on. </br> </br> Hybrid automation scenarios where you can manage automate on-premises servers such as SQL Server, Active Directory and so on based on an external event.</br> </br> Azure resource life-cycle management and governance that includes Resource provisioning, deprovisioning, adding correct tags, locks, NSGs and so on based on Azure monitor alerts. | IT administrators, System administrators, IT operations administrators who are skilled at using PowerShell or Python based scripting.
+ | Respond to system alerts, service alerts, or high CPU/memory alerts from first-party or third-party monitoring tools like Splunk or ServiceNow, create ServiceNow tickets based on alerts, and so on. </br> </br> Hybrid automation scenarios where you can manage and automate on-premises servers such as SQL Server, Active Directory, and so on, based on an external event.</br> </br> Azure resource lifecycle management and governance that includes resource provisioning, deprovisioning, adding correct tags, locks, NSGs, and so on, based on Azure Monitor alerts. | IT administrators, system administrators, and IT operations administrators who are skilled at using PowerShell or Python based scripting.
### Azure functions
-Provides a serverless automation platform that allows you to write code to react to critical events without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
+Provides a serverless, event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems (for example, an HTTP trigger) without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
- - You can use a variety of languages so that you can write functions in a language of your choice such as C#, Java, JavaScript, PowerShell, or Python and focus on specific pieces of code.
- - It allows you to orchestrate complex workflows through durable functions.
+ - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source.
+ - You can choose a hosting plan according to your function app's scaling requirements, functionality, and required resources.
+ - You can orchestrate complex workflows through [durable functions](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp).
+ - You should avoid large, long-running functions that can cause unexpected timeout issues. [Learn more](/azure/azure-functions/functions-best-practices?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within function apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](/azure/azure-functions/functions-reference-powershell?tabs=portal).
**Scenarios** | **Users** |
- | Respond to events on resources: such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts to send the teamΓÇÖs event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | The Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless Micro-services based applications.
+ | Respond to events on resources, such as adding tags to a resource group based on cost center, reacting when a VM is deleted, and so on. </br> </br> Set scheduled tasks, such as a pattern to stop and start a VM at a specific time, or reading blob storage content at regular intervals. </br> </br> Process Azure alerts to send the team's event when CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Respond to database changes. | Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud architects who build serverless microservices-based applications where a single function or multiple Azure Functions could be part of a larger application workflow.
## Orchestrate complex jobs in Azure Automation
Orchestrates repetitive processes using graphical, PowerShell, and Python runboo
### Azure functions
-A serverless automation platform that allows you to write code to react to critical events without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
+Provides a serverless, event-driven compute platform for automation that allows you to write code to react to critical events from various sources, third-party services, and on-premises systems (for example, an HTTP trigger) without worrying about the underlying platform. [Learn more](/azure/azure-functions/functions-overview).
- - It provides a variety of languages so that you can write functions in a language of your choice such as C#, Java, JavaScript, PowerShell, or Python and focus on specific pieces of code.
+ - You can write functions in a language of your choice, such as C#, Java, JavaScript, PowerShell, or Python, and focus on specific pieces of code. The Functions runtime is open source.
+ - You can choose a hosting plan according to your function app's scaling requirements, functionality, and required resources.
 - You can orchestrate complex workflows through [durable functions](/azure/azure-functions/durable/durable-functions-overview?tabs=csharp).
+ - You should avoid large, long-running functions that can cause unexpected timeout issues. [Learn more](/azure/azure-functions/functions-best-practices?tabs=csharp#write-robust-functions).
+ - When you write PowerShell scripts within function apps, you must tweak the scripts to define how the function behaves, such as how it's triggered and its input and output parameters. [Learn more](/azure/azure-functions/functions-reference-powershell?tabs=portal).
**Scenarios** | **Users** |
- | Respond to events on resources : such as add tags to resource group basis cost center, when VM is deleted etc. </br> </br> Set scheduled tasks such as setting a pattern to stop and start a VM at a specific time, reading blob storage content at regular intervals etc. </br> </br> Process Azure alerts where you can send teamΓÇÖs event when the CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br>Executes Azure Function as part of Logic apps workflow through Azure Function Connector. | Application Developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud Architects who build serverless Micro-services based applications.
+ | Respond to events on resources, such as adding tags to a resource group based on cost center, reacting when a VM is deleted, and so on. </br> </br> Set scheduled tasks, such as a pattern to stop and start a VM at a specific time, or reading blob storage content at regular intervals. </br> </br> Process Azure alerts where you can send the team's event when CPU activity spikes to 90%. </br> </br> Orchestrate with external systems such as Microsoft 365. </br> </br> Execute an Azure function as part of a Logic Apps workflow through the Azure Functions connector. | Application developers who are skilled in coding languages such as C#, F#, PHP, Java, JavaScript, PowerShell, or Python. </br> </br> Cloud architects who build serverless microservices-based applications where a single function or multiple Azure Functions could be part of a larger application workflow.
+
## Next steps - To learn on how to securely execute the automation jobs, see [best practices for security in Azure Automation](/azure/automation/automation-security-guidelines).
azure-arc Active Directory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/active-directory-introduction.md
Previously updated : 12/15/2021 Last updated : 04/05/2022
-# Introduction to Azure Arc-enabled SQL Managed Instance with Active Directory authentication
+# Azure Arc-enabled SQL Managed Instance with Active Directory authentication
-This article describes Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication by bring your own keytab (BYOK) where the user is expected to provide a pre-created Active Directory account, Service Principal Names and Keytab.
+This article describes how to enable Azure Arc-enabled SQL Managed Instance with Active Directory (AD) Authentication. The article demonstrates two possible integration modes:
+- Bring your own keytab mode
+- Automatic mode
+
+The integration mode describes how the keytab file used for Active Directory authentication is managed.
## Background
-In order to support Active Directory authentication for SQL Managed Instance, a SQL Managed Instance must be deployed in an environment that allows it to communicate with the Active Directory domain.
-To facilitate this, Azure Arc introduces a new Kubernetes-native [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. You can specify this kind of resource in the CRD. An Active Directory Connector custom resource stores the information needed to enable connections to DNS and AD for purposes of authenticating users and service accounts.
+Azure Arc-enabled data services support Active Directory (AD) for identity and access management (IAM). Arc-enabled SQL Managed Instance uses an existing on-premises Active Directory (AD) domain for authentication. To enable Active Directory authentication for Arc-enabled SQL Managed Instance, complete the following steps:
+
+- [Deploy data controller](create-data-controller-indirect-cli.md)
+- [Deploy a bring your own keytab AD connector](deploy-byok-active-directory-connector.md) or [Deploy an automatic AD connector](deploy-automatic-active-directory-connector.md)
+- [Deploy managed instances](deploy-active-directory-sql-managed-instance.md)
+
+The following diagram shows how to enable Active Directory authentication for Azure Arc-enabled SQL Managed Instance:
+
+![Active Directory deployment user journey](media/active-directory-deployment/active-directory-user-journey.png)
++
+## What is an Active Directory (AD) connector?
+
+In order to enable Active Directory authentication for SQL Managed Instance, the managed instance must be deployed in an environment that allows it to communicate with the Active Directory domain.
+
+To facilitate this, Azure Arc-enabled data services introduces a new Kubernetes-native [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called `Active Directory Connector`. It gives Azure Arc-enabled managed instances running on the same data controller the ability to perform Active Directory authentication.
++
+## Compare AD integration modes
+
+What is the difference between the two AD integration modes?
+
+To enable Active Directory Authentication for Arc-enabled SQL Managed Instances, you need an Active Directory (AD) connector where you determine the mode of the AD deployment. The two modes are:
+
+- Bring your own keytab
+- Automatic
+
+The following sections compare these modes.
+
+### Bring your own keytab mode
+
+In this mode, you provide:
+
+- An Active Directory account
+- Service Principal Names (SPNs) under that AD account
+- Your own [keytab file](/sql/linux/sql-server-linux-ad-auth-understanding#what-is-a-keytab-file)
-This custom resource deploys a DNS proxy service that mediates between the SQL Managed Instance DNS resolver and the two upstream DNS servers:
+When you deploy the bring your own keytab AD connector, you need to create the AD account, register the service principal names (SPN), and create the keytab file. You can create the account using [Active Directory utility (`adutil`)](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
-1. Kubernetes DNS servers
-2. Active Directory DNS servers
+For more information, see [Deploy a bring your own keytab Active Directory (AD) connector](deploy-byok-active-directory-connector.md).
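For the bring your own keytab flow, the Arc data services samples publish a helper script, `create-sql-keytab.sh`, that wraps `ktutil` and `adutil`. The following sketch invokes it with the example account, DNS name, and port used elsewhere in these articles; all values are placeholders to replace, and the exact flags should be checked against the script's usage:

```console
AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account \
  --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab \
  --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
```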
-When a SQL Managed Instance is deployed with Active Directory Authentication enabled, it will reference the Active Directory Connector instance it wants to use. Referencing the Active Directory Connector in SQL MI spec will automatically set up the needed environment in the SQL Managed Instance container for SQL MI to perform Active Directory authentication.
+### AD automatic integration mode
-## Active Directory Connector and SQL Managed Instance
+In automatic mode, you need an automatic Active Directory (AD) connector. You bring an organizational unit (OU) and an AD domain service account that has sufficient permissions in Active Directory.
-![Actice Directory Connector](media/active-directory-deployment/active-directory-connector-byok.png)
+The system then:
-## Bring Your Own Keytab (BYOK)
+- Creates a domain service AD account for each managed instance.
+- Sets SPNs automatically on that AD account.
+- Creates and delivers a keytab file to the managed instance.
-The following are the steps for user to set up:
+The mode of the AD connector is determined by the value of `spec.activeDirectory.serviceAccountProvisioning`: set it to `manual` for bring your own keytab, or to `automatic`. When this parameter is set to `automatic`, the following parameters become mandatory (a quick way to check an existing connector's mode is shown after this list):
+- `spec.activeDirectory.ouDistinguishedName`
+- `spec.activeDirectory.domainServiceAccountSecret`
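To confirm which mode an existing connector uses, you can read this field straight off the deployed custom resource. A quick check, assuming a connector named `adarc` (the name and namespace are illustrative):

```console
kubectl get adc adarc -n <namespace> -o jsonpath='{.spec.activeDirectory.serviceAccountProvisioning}'
```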
-1. Creating and providing an Active Directory account for each SQL Managed Instance that must accept AD authentication.
-1. Providing a DNS name belonging to the Active Directory DNS domain for the SQL Managed Instance endpoint.
-1. Creating a DNS record in Active Directory for the SQL endpoint.
-1. Providing a port number for the SQL Managed Instance endpoint.
-1. Registering Service Principal Names (SPNs) under the AD account in Active Directory domain for the SQL endpoint.
-1. Creating and providing a keytab file for SQL Managed Instance containing entries for the AD account and SPNs.
+When you deploy SQL Managed Instance with the intention of enabling Active Directory authentication, the deployment needs to reference the Active Directory connector instance to use. Referencing the Active Directory connector in the managed instance specification automatically sets up the needed environment in the SQL Managed Instance container for the managed instance to authenticate with Active Directory.
## Next steps
-* [Deploy Active Directory (AD) connector](deploy-active-directory-connector.md)
+* [Deploy a bring your own keytab Active Directory (AD) connector](deploy-byok-active-directory-connector.md)
+* [Deploy an automatic Active Directory (AD) connector](deploy-automatic-active-directory-connector.md)
* [Deploy Azure Arc-enabled SQL Managed Instance in Active Directory (AD)](deploy-active-directory-sql-managed-instance.md) * [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md)
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Previously updated : 11/12/2021 Last updated : 03/24/2022
This article describes how to create the Azure Arc data controller in direct con
Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). -- Creating an Azure Arc data controller in direct connectivity mode involves the following steps:
- 1. Create an Azure Arc-enabled data services extension.
- 1. Create a custom location.
- 1. Create the data controller.
+## Deploy Arc data controller
+Creating an Azure Arc data controller in direct connectivity mode involves the following steps:
+
+1. Create an Azure Arc-enabled data services extension.
+1. Create a custom location.
+1. Create the data controller.
+
+You can create them individually or in a unified experience.
+
+## Deploy - unified experience
+
+In the unified experience, you can create the Arc data controller extension, custom location, and Arc data controller all in one command as follows:
+
+```azurecli
+az arcdata dc create -n <name> -g <resource-group> --custom-location <custom-location> --cluster-name <cluster> --connectivity-mode direct --profile <the-deployment-profile>
+```
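Whichever path you choose, you can watch the deployment from the cluster side while it provisions. A minimal check, assuming the namespace you pass to the data controller; `datacontrollers` is the usual resource name for the data controller custom resource:

```console
kubectl get datacontrollers -n <namespace>
kubectl get pods -n <namespace>
```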
-## Step 1: Create an Azure Arc-enabled data services extension
+## Deploy - individual experience
+
+### Step 1: Create an Azure Arc-enabled data services extension
Use the k8s-extension CLI to create a data services extension.
-### Set environment variables
+#### Set environment variables
Set the following environment variables, which will be then used in later steps.
Following are two sets of environment variables. The first set of variables iden
The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters.
-#### [Linux](#tab/linux)
+##### [Linux](#tab/linux)
```console ## variables for Azure subscription, resource group, cluster name, location, extension, and namespace.
export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard>
export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard> ```
-#### [Windows (PowerShell)](#tab/windows)
+##### [Windows (PowerShell)](#tab/windows)
``` PowerShell ## variables for Azure location, extension and namespace
$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
-### Create the Arc data services extension
+#### Create the Arc data services extension
The following command creates the Arc data services extension.
-#### [Linux](#tab/linux)
+##### [Linux](#tab/linux)
```azurecli az k8s-extension create --cluster-name ${clusterName} --resource-group ${resourceGroup} --name ${adsExtensionName} --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace ${namespace} --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${resourceName} --name ${adsExtensionName} --cluster-type connectedclusters ```
-#### [Windows (PowerShell)](#tab/windows)
+##### [Windows (PowerShell)](#tab/windows)
```azurecli az k8s-extension create --cluster-name $ENV:clusterName --resource-group $ENV:resourceGroup --name $ENV:adsExtensionName --cluster-type connectedClusters --extension-type microsoft.arcdataservices --auto-upgrade false --scope cluster --release-namespace $ENV:namespace --config Microsoft.CustomLocation.ServiceAccount=sa-arc-bootstrapper
az k8s-extension show --resource-group $ENV:resourceGroup --cluster-name $ENV:cl
-#### Deploy Azure Arc data services extension using private container registry and credentials
+##### Deploy Azure Arc data services extension using private container registry and credentials
Use the below command if you are deploying from your private repository:
az k8s-extension create --cluster-name "my-connected-cluster" --resource-group "
> [!NOTE] > The Arc data services extension install can take a few minutes to complete.
-### Verify the Arc data services extension is created
+#### Verify the Arc data services extension is created
You can verify the status of the deployment of Azure Arc-enabled data services extension. Use the Azure portal or Cube
-#### Check status from Azure portal
+##### Check status from Azure portal
1. Log in to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located. 1. Select the Azure Arc-enabled Kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed. 1. In the navigation on the left side, under **Settings**, select **Extensions**. 1. The portal shows the extension that was created earlier in an installed state.
-#### Check status using kubectl CLI
+##### Check status using kubectl CLI
1. Connect to your Kubernetes cluster via a Terminal window. 1. Run the below command and ensure:
For example, the following command gets the pods from the `arc` namespace.
kubectl get pods --namespace arc ```
-## Retrieve the managed identity and grant roles
+### Retrieve the managed identity and grant roles
When the Arc data services extension is created, Azure creates a managed identity. You need to assign certain roles to this managed identity for usage and/or metrics to be uploaded.
-### Retrieve managed identity of the Arc data controller extension
+#### Retrieve managed identity of the Arc data controller extension
```azurecli $Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | convertFrom-json).identity.principalId
$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> -
$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | convertFrom-json).identity.principalId ```
-### Assign role to the managed identity
+#### Assign role to the managed identity
Run the below command to assign the **Contributor** and **Monitoring Metrics Publisher** roles:
az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Contributor" --s
az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Monitoring Metrics Publisher" --scope "/subscriptions/$ENV:subscription/resourceGroups/$ENV:resourceGroup" ```
-## Create a custom location using `customlocation` CLI extension
+### Step 2: Create a custom location using `customlocation` CLI extension
A custom location is an Azure resource that is equivalent to a namespace in a Kubernetes cluster. Custom locations are used as a target to deploy resources to or from Azure. Learn more about custom locations in the [Custom locations on top of Azure Arc-enabled Kubernetes documentation](../kubernetes/conceptual-custom-locations.md).
-### Set environment variables
+#### Set environment variables
-#### [Linux](#tab/linux)
+##### [Linux](#tab/linux)
```azurecli export clName=mycustomlocation
export extensionId=$(az k8s-extension show --resource-group ${resourceGroup} --c
az customlocation create --resource-group ${resourceGroup} --name ${clName} --namespace ${namespace} --host-resource-id ${hostClusterId} --cluster-extension-ids ${extensionId} --location ${location} ```
-#### [Windows (PowerShell)](#tab/windows)
+##### [Windows (PowerShell)](#tab/windows)
```azurecli $ENV:clName="mycustomlocation"
az customlocation create --resource-group $ENV:resourceGroup --name $ENV:clName
-## Validate the custom location is created
+### Validate the custom location is created
From the terminal, run the below command to list the custom locations, and validate that the **Provisioning State** shows Succeeded:
From the terminal, run the below command to list the custom locations, and valid
az customlocation list -o table ```
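To check a single custom location rather than list them all, `az customlocation show` can return the provisioning state directly. A small sketch assuming the variable names set earlier:

```azurecli
az customlocation show --resource-group ${resourceGroup} --name ${clName} --query provisioningState -o tsv
```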
-## Create certificates for logs and metrics UI dashboards
+### Create certificates for logs and metrics UI dashboards
Optionally, you can specify certificates for logs and metrics UI dashboards. See [Provide certificates for monitoring](monitor-certificates.md) for examples. The December 2021 release introduced this option.
-## Create the Azure Arc data controller
+### Step 3: Create the Azure Arc data controller
After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows.
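For reference, a data controller create command for this step-by-step path might look like the following sketch, reusing the variables set earlier and mirroring the unified command shown above; confirm the exact parameter names with `az arcdata dc create --help` for your CLI version:

```azurecli
az arcdata dc create --name <data-controller-name> --resource-group ${resourceGroup} \
    --cluster-name ${clusterName} --custom-location ${clName} \
    --connectivity-mode direct --profile <deployment-profile>
```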
azure-arc Deploy Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector.md
- Title: Tutorial ΓÇô Deploy Active Directory Connector
-description: Tutorial to deploy Active Directory Connector
------ Previously updated : 12/10/2021----
-# Tutorial ΓÇô Deploy Active Directory Connector
-
-This article explains how to deploy Active Directory Connector Custom Resource.
-
-## What is an Active Directory (AD) connector?
-
-The Active Directory (AD) connector is a Kubernetes native custom resource definition (CRD) that allows you to provide
-SQL Managed Instances running on the same Data Controller an ability to perform Active Directory Authentication.
-
-An Active Directory Connector instance deploys a DNS proxy service that proxies the DNS requests
-coming from the SQL Managed Instance to either of the two upstream DNS
-* Active Directory DNS Servers
-* Kubernetes DNS Servers
-
-## Prerequisites
-
-Before you proceed, you must have:
-
-* An instance of Data Controller deployed on a supported version of Kubernetes
-* An Active Directory domain
-
-## Input for deploying Active Directory (AD) Connector
-
-To deploy an instance of Active Directory Connector, several inputs are needed from the Active Directory domain environment.
-These inputs are provided in a YAML spec of AD Connector instance.
-
-Following metadata about the AD domain must be available before deploying an instance of AD Connector:
-* Name of the Active Directory domain
-* List of the domain controllers (fully-qualified domain names)
-* List of the DNS server IP addresses
-
-Following input fields are exposed to the users in the Active Directory Connector spec:
--- **Required**
- - **spec.activeDirectory.realm**
- Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
-
- - **spec.activeDirectory.domainControllers.primaryDomainController.hostname**
- Fully-qualified domain name of the Primary Domain Controller (PDC) in the AD domain.
-
- If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`.
-
- - **spec.activeDirectory.dns.nameserverIpAddresses**
- List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
---- **Optional**
- - **spec.activeDirectory.netbiosDomainName**
- NETBIOS name of the Active Directory domain. This is the short domain name that represents the Active Directory domain.
-
- This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name.
-
- This field is optional. When not provided, it defaults to the first label of the `spec.activeDirectory.realm` field.
-
- In most domain environments, this is set to the default value but some domain environments may have a non-default value.
-
- - **spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname**
- List of the fully-qualified domain names of the secondary domain controllers in the AD domain.
-
- If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully-qualified domain names in this list. This allows high-availability for Kerberos operations.
-
- This field is optional and not needed if your domain is served by only one domain controller.
-
- - **spec.activeDirectory.dns.domainName**
- DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
-
- A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
-
- This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
-
- - **spec.activeDirectory.dns.replicas**
- Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
-
- - **spec.activeDirectory.dns.preferK8sDnsForPtrLookups**
- Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
-
- DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
-
- This field is optional. When not provided, it defaults to true i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers.
-
- If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers.
--
-## Deploy Active Directory (AD) connector
-To deploy an AD connector, create a YAML spec file called `active-directory-connector.yaml`.
-The following example uses an AD domain of name `CONTOSO.LOCAL`. Ensure to replace the values with the ones for your AD domain.
-
-```yaml
-apiVersion: arcdata.microsoft.com/v1beta1
-kind: ActiveDirectoryConnector
-metadata:
- name: adarc
- namespace: <namespace>
-spec:
- activeDirectory:
- realm: CONTOSO.LOCAL
- domainControllers:
- primaryDomainController:
- hostname: dc1.contoso.local
- secondaryDomainControllers:
- - hostname: dc2.contoso.local
- - hostname: dc3.contoso.local
- dns:
- preferK8sDnsForPtrLookups: false
- nameserverIPAddresses:
- - <DNS Server 1 IP address>
- - <DNS Server 2 IP address>
-```
-
-The following command deploys the AD connector instance. Currently, only kube-native approach of deploying is supported.
-
-```console
-kubectl apply ΓÇôf active-directory-connector.yaml
-```
-
-After submitting the deployment of AD Connector instance, you may check the status of the deployment using the following command.
-
-```console
-kubectl get adc -n <namespace>
-```
-
-## Next steps
-
-* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
-* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
-
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
Title: Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance
-description: Tutorial to deploy AD-integrated SQL Managed Instance
+ Title: Tutorial – Deploy AD-integrated Azure Arc-enabled SQL Managed Instance
+description: Tutorial to deploy AD-integrated Azure Arc-enabled SQL Managed Instance
Previously updated : 12/10/2021 Last updated : 04/05/2022 -
-# Tutorial ΓÇô Deploy AD-integrated SQL Managed Instance
+# Tutorial – deploy AD-integrated Azure Arc-enabled SQL Managed Instance
This article explains how to deploy Azure Arc-enabled SQL Managed Instance with Active Directory (AD) authentication.
-Before you proceed, you need to complete the steps explained in [Tutorial ΓÇô Deploy Active Directory Connector](deploy-active-directory-connector.md).
+
+Before you proceed, complete the steps explained in [Deploy a bring your own keytab (BYOK) Active Directory (AD) connector](deploy-byok-active-directory-connector.md) or [Tutorial – deploy an automatic AD connector](deploy-automatic-active-directory-connector.md).
## Prerequisites Before you proceed, verify that you have: * An Active Directory (AD) Domain
-* An instance of Data Controller deployed
+* An instance of data controller deployed
* An instance of Active Directory Connector deployed
-These instructions expect that the users are able to generate the following in
-the Active Directory domain and provide to the deployment.
-
-* An Active Directory user account for the SQL Managed Instance
-* Service Principal Names (SPNs) under the user account
-* DNS record for the endpoint DNS name for SQL Managed Instance
-
-## Steps Before the Deployment of SQL Managed Instance
-
-1. Identify a DNS name for the SQL Managed Instance endpoint.
-
- The DNS name for the endpoint the SQL Managed Instance will listen on for connections coming from outside the Kubernetes cluster.
-
- This DNS name should be in the Active Directory domain or its descendant domains.
-
- The examples in these instructions use `sqlmi.contoso.local` for the DNS name .
-
-2. Identify the port number for the SQL Managed Instance endpoint.
-
- You must decide a port number for the endpoint SQL Managed Instance will listen on for connections coming from outside the Kubernetes cluster.
-
- This port number must be in the acceptable range of port numbers to Kubernetes cluster.
-
- The examples in these instructions use `31433` for the port number.
-
-3. Create an Active Directory account for the SQL Managed Instance.
-
- Choose a name for the Active Directory account that will represent your SQL Managed Instance. This name should be unique in the Active Directory domain.
-
- Use `Active Directory Users and Computers` on the Domain Controllers, create an account for the SQL Managed Instance name.
-
- Provide a complex password to this account that is acceptable to the Active Directory domain password policy. This password will be needed in some of the next steps.
-
- The account does not need any special permissions. Ensure that the account is enabled.
-
- The examples in these instructions use `sqlmi-account` for the AD account name.
-
-4. Create a DNS record for the SQL Managed Instance endpoint in the Active Directory DNS servers.
-
- In one of the Active Directory DNS servers, create an A record (forward lookup record) for the DNS name chosen in step 1. This DNS record should point to the IP address that the SQL Managed Instance endpoint will listen on for connections from outside the Kubernetes cluster.
-
- You do not need to create a PTR record (reverse lookup record) in association with the A record.
-
-5. Create Service Principal Names (SPNs)
-
- In order for SQL Managed Instance to be able to accept AD authentication against the SQL Managed Instance endpoint DNS name, we need to register two SPNs under the account generated in the previous step. These two SPNs should be of the following format:
-
- ```output
- MSSQLSvc/<DNS name>
- MSSQLSvc/<DNS name>:<port>
- ```
-
- To register the SPNs, run the following commands on one of the domain controllers.
-
- ```console
- setspn -S MSSQLSvc/<DNS name> <account>
- setspn -S MSSQLSvc/<DNS name>:<port> <account>
- ```
-
- With the chosen example DNS name, port number and the account name in this document, the commands should look like the following:
-
- ```console
- setspn -S MSSQLSvc/sqlmi.contoso.local sqlmi-account
- setspn -S MSSQLSvc/sqlmi.contoso.local:31433 sqlmi-account
- ```
-
-6. Generate a keytab file containing entries for the account and SPNs
-
- For SQL Managed Instance to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file using a Kubernetes secret.
-
- The keytab file contains encrypted entries for the Active Directory account generated for SQL Managed Instance and the SPNs.
+## Azure Arc-enabled SQL Managed Instance specification for Active Directory Authentication
- SQL Server will use this file as its credential against Active Directory.
+To deploy an Azure Arc-enabled SQL Managed Instance with Active Directory authentication, the deployment specification needs to reference the Active Directory connector instance it wants to use. Referencing the Active Directory connector in the managed instance specification automatically sets up the managed instance to perform Active Directory authentication.
- There are multiple tools available to generate a keytab file.
- - **`ktutil`**: This tool is available on Linux
- - **`ktpass`**: This tool is available on Windows
- - **`adutil`**: This tool is available for Linux. See [Introduction to `adutil` - Active Directory utility](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
-
- To generate the keytab file specifically for SQL Managed Instance, use a bash shell script we have published. It wraps `ktutil` and `adutil` together. It is for use on Linux.
-
- The script can be found here: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh).
-
- This script accepts several parameters and will output a keytab file and a yaml spec file for the Kubernetes secret containing the keytab.
-
- Use the following command to run the script after replacing the parameter values with the ones for your SQL Managed Instance deployment.
-
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <AD domain in uppercase> --account <AD account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
- ```
-
- The input parameters are expecting the following values :
- * **--realm** expects the uppercase of the AD domain, such as CONTOSO.LOCAL
- * **--account** expects the AD account under where the SPNs are registered, such sqlmi-account
- * **--port** expects the SQL endpoint port number 31433
- * **--dns-name** expects the DNS name for the SQL endpoint
- * **--keytab-file** expects the path to the keytab file
- * **--secret-name** expects the name of the keytab secret to generate a spec for
- * **--secret-namespace** expects the Kubernetes namespace containing the keytab secret
-
- Using the examples chosen in this document, the command should look like the following.
-
- Choose a name for the Kubernetes secret hosting the keytab. The namespace should be the same as what the SQL Managed Instance will be deployed in.
-
- ```console
- AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
- ```
-
- To verify that the keytab is correct, you may run the following command:
-
- ```console
- klist -kte <keytab file>
- ```
-
-## Deploy Kubernetes secret for the keytab
-
-Use the Kubernetes secret spec file generated in the previous step to deploy the secret.
-The spec file should look like the following:
-
-```yaml
-apiVersion: v1
-kind: Secret
-type: Opaque
-metadata:
- name: <secret name>
- namespace: <secret namespace>
-data:
- keytab: <keytab content in base64>
-```
-
-Deploy the Kubernetes secret with `kubectl apply -f <file>`. For example:
-
-```console
-kubectl apply ΓÇôf sqlmi-keytab-secret.yaml
-```
-
-## SQL Managed Instance Spec for Active Directory Authentication
-
-To support Active Directory authentication on SQL Managed Instance, new spec fields are introduced as follows.
+To support Active Directory authentication on managed instance, the deployment specification uses the following fields:
- **Required** (For AD authentication)
- - **spec.security.activeDirectory.connector.name**
+ - `spec.security.activeDirectory.connector.name`
Name of the pre-existing Active Directory Connector custom resource to join for AD authentication. When provided, system will assume that AD authentication is desired.
- - **spec.security.activeDirectory.accountName**
- Name of the Active Directory account pre-created for this SQL Managed Instance.
- - **spec.security.activeDirectory.keytabSecret**
- Name of the Kubernetes secret hosting the pre-created keytab file by users. This secret must be in the same namespace as the SQL Managed Instance.
- - **spec.services.primary.dnsName**
- DNS name for the primary endpoint.
- - **spec.services.primary.port**
- Port number for the primary endpoint.
+ - `spec.security.activeDirectory.accountName`
+ Name of the Active Directory (AD) account for this managed instance. In automatic mode, this account is generated for you; in bring your own keytab mode, it's the account you pre-created.
+ - `spec.security.activeDirectory.keytabSecret`
+ Name of the Kubernetes secret hosting the keytab file pre-created by users. This secret must be in the same namespace as the managed instance. This parameter is only required for deployments that use the bring your own keytab AD integration mode.
+ - `spec.services.primary.dnsName`
+ DNS name for the primary endpoint; this is the DNS name of the managed instance endpoint.
+ - `spec.services.primary.port`
+ Port number for the primary endpoint; this is the port number of the managed instance endpoint.
- **Optional**
- - **spec.security.activeDirectory.connector.namespace**
- Kubernetes namespace of the pre-existing Active Directory Connector instance to join for AD authentication. When not provided, system will assume the same namespace as the SQL Managed Instance.
+ - `spec.security.activeDirectory.connector.namespace`
+ Kubernetes namespace of the pre-existing Active Directory Connector instance to join for AD authentication. When not provided, system will assume the same namespace as the managed instance.
-### Prepare SQL Managed Instance spec
+### Prepare deployment specification for SQL Managed Instance for Azure Arc
+
+Prepare the following .yaml specification to deploy a managed instance. Set the fields described in the spec.
+
+> [!NOTE]
+> The *admin-login-secret* in the YAML example is used for basic authentication. You can use it to log in to the SQL managed instance, and then create SQL logins for AD users and groups. See [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md) for further details.
-Prepare the following yaml specification to deploy a SQL Managed Instance. The fields described above should be specified in the spec.
```yaml apiVersion: v1
data:
username: <your base64 encoded username> kind: Secret metadata:
- name: my-login-secret
+ name: admin-login-secret
type: Opaque
-apiVersion: sql.arcdata.microsoft.com/v2
+apiVersion: sql.arcdata.microsoft.com/v3
kind: SqlManagedInstance metadata: name: <name>
spec:
licenseType: LicenseIncluded replicas: 1 security:
- adminLoginSecret: my-login-secret
+ adminLoginSecret: admin-login-secret
activeDirectory: connector: name: <AD connector name>
spec:
size: 5Gi ```
-### Deploy SQL Managed Instance
+### Deploy a managed instance
+
+To deploy a managed instance using the prepared specification:
-To deploy the SQL Managed Instance using the prepared spec, save the spec file as sqlmi.yaml or any name of your choice.
-Run the following command to deploy the spec in the file:
+1. Save the specification to a file. The example uses the name `sqlmi.yaml`; you can use any name.
+1. Run the following command to deploy the instance according to the specification:
```console kubectl apply -f sqlmi.yaml
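# Optional check (an addition, not part of the original quickstart): after applying the
# spec, watch the managed instance come up. `sqlmi` is the short name for the
# SqlManagedInstance custom resource; <namespace> is the namespace from the spec.
kubectl get sqlmi -n <namespace>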
azure-arc Deploy Automatic Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-automatic-active-directory-connector.md
+
+ Title: Tutorial – Deploy an automatic Active Directory (AD) Connector
+description: Tutorial to deploy an automatic Active Directory (AD) Connector
++++++ Last updated : 04/05/2022++++
+# Tutorial – deploy an automatic Active Directory (AD) connector
+
+This article explains how to deploy an automatic Active Directory (AD) connector custom resource. The AD connector is a key component of enabling Active Directory authentication for Azure Arc-enabled SQL Managed Instance, and a connector is required in either integration mode (bring your own keytab (BYOK) or automatic); this article covers the automatic mode.
+
+## Prerequisites
+
+Before you proceed, you must have:
+
+* An instance of Data Controller deployed on a supported version of Kubernetes
+* An Active Directory (AD) domain
+* A pre-created organizational unit (OU) in the Active Directory
+* An Active Directory (AD) domain service account
+
+The AD domain service account should have sufficient permissions to create users, groups, and machine accounts automatically inside the provided organizational unit (OU) in Active Directory.
++
+Sufficient permissions include the following:
+
+- Read all properties
+- Write all properties
+- Create User objects
+- Delete User objects
+- Reset Password for Descendant User objects
++
+## Input for deploying an automatic Active Directory (AD) connector
+
+To deploy an instance of Active Directory connector, several inputs are needed from the Active Directory domain environment.
+
+These inputs are provided in a .yaml specification for an AD connector instance.
+
+The following metadata about the AD domain must be available before deploying an instance of AD connector:
+
+* Name of the Active Directory domain
+* List of the domain controllers (fully qualified domain names)
+* List of the DNS server IP addresses
+
+The following input fields are exposed to the users in the Active Directory Connector specification:
+
+- **Required**
+ - `spec.activeDirectory.realm`
+ Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
+
+ - `spec.activeDirectory.domainControllers.primaryDomainController.hostname`
+ Fully-qualified domain name of the Primary Domain Controller (PDC) in the AD domain.
+
+ If you do not know which domain controller in the domain is primary, you can find out by running this command on any Windows machine joined to the AD domain: `netdom query fsmo`.
+
+ - `spec.activeDirectory.dns.nameserverIpAddresses`
+ List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
+
+- **Optional**
+ - `spec.activeDirectory.serviceAccountProvisioning` An optional field that defines the AD connector deployment mode. It indicates whether service account provisioning, including SPN registration and keytab generation, is handled by the user (bring your own keytab (BYOK)) or by the system. Possible values are `manual` (bring your own keytab) and `automatic`; the default is bring your own keytab. When set to bring your own keytab, the system doesn't take care of AD service account generation, SPN registration, or keytab generation. When set to `automatic`, the service AD account is generated automatically, SPNs are registered on that account, and a keytab file is generated and delivered to SQL Managed Instance.
+
+ - `spec.activeDirectory.ouDistinguishedName` An optional field that becomes mandatory when `serviceAccountProvisioning` is set to `automatic`. It accepts the distinguished name (DN) of an organizational unit (OU) that users must create in the Active Directory domain before deploying the AD connector. The system-generated AD accounts are stored in this OU. An example value looks like `OU=arcou,DC=contoso,DC=local`.
+
+ - `spec.activeDirectory.domainServiceAccountSecret` An optional field that becomes mandatory when `serviceAccountProvisioning` is set to `automatic`. It accepts the name of a Kubernetes secret that contains the username and password of the domain service account created prior to the AD deployment. The security support service uses this account to generate other AD accounts in the OU and to perform actions on those accounts.
+
+ - `spec.activeDirectory.netbiosDomainName`
+ NETBIOS name of the Active Directory domain. This is the short domain name that represents the Active Directory domain.
+
+ This is often used to qualify accounts in the AD domain. e.g. if the accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name.
+
+ This field is optional. When not provided, it defaults to the first label of the `spec.activeDirectory.realm` field.
+
+ In most domain environments, this is set to the default value but some domain environments may have a non-default value.
+
+ - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname`
+ List of the fully qualified domain names of the secondary domain controllers in the AD domain.
+
+ If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations.
+
+ This field is optional and not needed if your domain is served by only one domain controller.
+
+ - `spec.activeDirectory.dns.domainName`
+ DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
+
+ A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
+
+ This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
+
+ - `spec.activeDirectory.dns.replicas`
+ Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
+
+ - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
+ Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
+
+ DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
+
+ This field is optional. When not provided, it defaults to true i.e. the DNS lookups of IP addresses will be first forwarded to Kubernetes DNS servers.
+
+ If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers.
+
+## Deploy an Automatic Active Directory (AD) connector
+
+To deploy an AD connector, create a YAML specification file called `active-directory-connector.yaml`.
+
+The following example shows an automatic AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain. The `adarc-dsa-secret` secret contains the credentials of the AD domain service account that was created prior to the AD deployment.
+
+> [!NOTE]
+> Make sure the password of the provided domain service AD account doesn't contain the `!` special character.
+>
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ name: adarc-dsa-secret
+ namespace: <namespace>
+data:
+ password: <your base64 encoded password>
+ username: <your base64 encoded username>
+---
+apiVersion: arcdata.microsoft.com/v1beta2
+kind: ActiveDirectoryConnector
+metadata:
+ name: adarc
+ namespace: <namespace>
+spec:
+ activeDirectory:
+ realm: CONTOSO.LOCAL
+ serviceAccountProvisioning: automatic
+ ouDistinguishedName: "OU=arcou,DC=contoso,DC=local"
+ domainServiceAccountSecret: adarc-dsa-secret
+ domainControllers:
+ primaryDomainController:
+ hostname: dc1.contoso.local
+ secondaryDomainControllers:
+ - hostname: dc2.contoso.local
+ - hostname: dc3.contoso.local
+ dns:
+ preferK8sDnsForPtrLookups: false
+ nameserverIPAddresses:
+ - <DNS Server 1 IP address>
+ - <DNS Server 2 IP address>
+```
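If you prefer not to base64-encode the credentials by hand, you can create the same secret with `kubectl`, which encodes the values for you. A sketch assuming the secret name and namespace used in the example above:

```console
kubectl create secret generic adarc-dsa-secret --namespace <namespace> \
  --from-literal=username='<domain service account username>' \
  --from-literal=password='<domain service account password>'
```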
++
+The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
+
+```console
+kubectl apply -f active-directory-connector.yaml
+```
+
+After submitting the deployment for the AD connector instance, you may check the status of the deployment using the following command.
+
+```console
+kubectl get adc -n <namespace>
+```
+
+## Next steps
+* [Deploy a bring your own keytab (BYOK) Active Directory (AD) connector](deploy-byok-active-directory-connector.md)
+* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
azure-arc Deploy Byok Active Directory Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-byok-active-directory-connector.md
+
+ Title: Tutorial - Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
+description: Tutorial to deploy a bring your own keytab (BYOK) Active Directory (AD) connector
++++++ Last updated : 04/05/2022+++
+# Tutorial - Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
+
+This article explains how to deploy a bring your own keytab (BYOK) Active Directory (AD) connector custom resource. The AD connector is a key component for enabling Azure Arc-enabled SQL Managed Instance with Active Directory, and it applies to both integration modes (bring your own keytab (BYOK) or automatic).
+
+## Prerequisites
+
+Before you proceed, you must have:
+
+* An instance of Data Controller deployed on a supported version of Kubernetes
+* An Active Directory (AD) domain
+
+The following instructions expect that you can bring the following items from the Active Directory domain and provide them to the AD bring your own keytab (BYOK) deployment:
+
+* An Active Directory user account for the managed instance
+* Service Principal Names (SPNs) under the user account
+* DNS record for the endpoint DNS name for managed instance
+
+## Before you deploy the managed instance
+
+1. Identify a DNS name for the managed instance endpoint.
+
+   This is the DNS name for the endpoint that the managed instance will listen on for connections coming from outside the Kubernetes cluster.
+
+ This DNS name should be in the Active Directory domain or its descendant domains.
+
+ The examples in these instructions use `sqlmi.contoso.local` for the DNS name.
+
+2. Identify the port number for the managed instance endpoint.
+
+   You must choose a port number for the endpoint that the managed instance will listen on for connections coming from outside the Kubernetes cluster.
+
+   This port number must be within the acceptable range of port numbers for the Kubernetes cluster.
+
+ The examples in these instructions use `31433` for the port number.
+
+3. Create an Active Directory account for the managed instance.
+
+ Choose a name for the Active Directory account that will represent your managed instance. This name should be unique in the Active Directory domain.
+
+   Using `Active Directory Users and Computers` on one of the domain controllers, create an account with the chosen managed instance account name.
+
+   Provide a complex password for this account that meets the Active Directory domain password policy. This password will be needed in later steps.
+
+ The account does not need any special permissions. Ensure that the account is enabled.
+
+ The examples in these instructions use `sqlmi-account` for the AD account name.
+
+4. Create a DNS record for the managed instance endpoint in the Active Directory DNS servers.
+
+ In one of the Active Directory DNS servers, create an A record (forward lookup record) for the DNS name chosen in step 1. This DNS record should point to the IP address that the managed instance endpoint will listen on for connections from outside the Kubernetes cluster.
+
+ You do not need to create a PTR record (reverse lookup record) in association with the A record.
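+
+   For example, here is a sketch using the DnsServer PowerShell module on a Windows DNS server; the record name, zone name, and IP address are placeholders for your environment:
+
+   ```powershell
+   # Create the forward lookup (A) record for the managed instance endpoint
+   Add-DnsServerResourceRecordA -Name "sqlmi" -ZoneName "contoso.local" -IPv4Address "10.10.5.100"
+   ```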
+
+5. Create Service Principal Names (SPNs)
+
+   For the managed instance to accept AD authentication against the managed instance endpoint DNS name, register two SPNs under the account created in the previous step. These SPNs should be of the following format:
+
+ ```output
+ MSSQLSvc/<DNS name>
+ MSSQLSvc/<DNS name>:<port>
+ ```
+
+ To register the SPNs, run the following commands on one of the domain controllers.
+
+ ```console
+ setspn -S MSSQLSvc/<DNS name> <account>
+ setspn -S MSSQLSvc/<DNS name>:<port> <account>
+ ```
+
+   With the example DNS name, port number, and account name chosen in this document, the commands look like the following:
+
+ ```console
+ setspn -S MSSQLSvc/sqlmi.contoso.local sqlmi-account
+ setspn -S MSSQLSvc/sqlmi.contoso.local:31433 sqlmi-account
+ ```
+
+6. Generate a keytab file containing entries for the account and SPNs
+
+ For the managed instance to be able to authenticate itself to Active Directory and accept authentication from Active Directory users, provide a keytab file using a Kubernetes secret.
+
+ The keytab file contains encrypted entries for the Active Directory account generated for the managed instance and the SPNs.
+
+ SQL Server will use this file as its credential against Active Directory.
+
+ There are multiple tools available to generate a keytab file.
+ - `ktutil`: This tool is available on Linux
+ - `ktpass`: This tool is available on Windows
+ - `adutil`: This tool is available for Linux. See [Introduction to `adutil` - Active Directory utility](/sql/linux/sql-server-linux-ad-auth-adutil-introduction).
+
+   To generate the keytab file specifically for the managed instance, use one of the convenience scripts that Microsoft has published. The bash script wraps `ktutil` and `adutil` together and is for use on Linux.
+
+   A bash script that works on a Linux-based OS can be found here: [create-sql-keytab.sh](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.sh).
+   A PowerShell script that works on a Windows Server-based OS can be found here: [create-sql-keytab.ps1](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/create-sql-keytab.ps1).
+
+ This script accepts several parameters and will output a keytab file and a yaml specification file for the Kubernetes secret containing the keytab.
+
+ Use the following command to run the script after replacing the parameter values with the ones for your managed instance deployment.
+
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm <AD domain in uppercase> --account <AD account name> --port <endpoint port> --dns-name <endpoint DNS name> --keytab-file <keytab file name/path> --secret-name <keytab secret name> --secret-namespace <keytab secret namespace>
+ ```
+
+ The input parameters are expecting the following values:
+ * `--realm` expects the uppercase of the AD domain, such as CONTOSO.LOCAL
+   * `--account` expects the AD account under which the SPNs are registered, such as sqlmi-account
+   * `--port` expects the SQL endpoint port number, such as 31433
+ * `--dns-name` expects the DNS name for the SQL endpoint
+ * `--keytab-file` expects the path to the keytab file
+ * `--secret-name` expects the name of the keytab secret to generate a specification for
+ * `--secret-namespace` expects the Kubernetes namespace containing the keytab secret
+
+ Choose a name for the Kubernetes secret hosting the keytab. The namespace should be the same as what the managed instance will be deployed in.
+
+ The following command creates a keytab. It uses values that this article describes:
+
+ ```console
+ AD_PASSWORD=<password> ./create-sql-keytab.sh --realm CONTOSO.LOCAL --account sqlmi-account --port 31433 --dns-name sqlmi.contoso.local --keytab-file sqlmi.keytab --secret-name sqlmi-keytab-secret --secret-namespace sqlmi-ns
+ ```
+
+ To verify that the keytab is correct, you may run the following command:
+
+ ```console
+ klist -kte <keytab file>
+ ```
+
+## Deploy Kubernetes secret for the keytab
+
+Use the Kubernetes secret specification file generated in the previous step to deploy the secret.
+The specification file should look like the following:
+
+```yaml
+apiVersion: v1
+kind: Secret
+type: Opaque
+metadata:
+ name: <secret name>
+ namespace: <secret namespace>
+data:
+ keytab: <keytab content in base64>
+```
+
+Deploy the Kubernetes secret with `kubectl apply -f <file>`. For example:
+
+```console
+kubectl apply -f sqlmi-keytab-secret.yaml
+```
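+
+Alternatively, if you did not keep the generated specification file, you can create an equivalent secret directly from the keytab file. This is a sketch that assumes the example names used earlier in this article:
+
+```console
+kubectl create secret generic sqlmi-keytab-secret --from-file=keytab=sqlmi.keytab --namespace sqlmi-ns
+```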
+
+## Active directory (AD) bring your own keytab (BYOK) integration mode
+
+The following are the steps that the user must complete for setup:
+1. Creating and providing an Active Directory account for each managed instance that must accept AD authentication.
+1. Providing a DNS name belonging to the Active Directory DNS domain for the managed instance endpoint.
+1. Creating a DNS record in Active Directory for the SQL endpoint.
+1. Providing a port number for the managed instance endpoint.
+1. Registering Service Principal Names (SPNs) under the AD account in Active Directory domain for the SQL endpoint.
+1. Creating and providing a keytab file for managed instance containing entries for the AD account and SPNs.
+
+An Active Directory Connector instance stores the information needed to enable connections to DNS and AD for the purpose of authenticating users and service accounts. It also deploys a DNS proxy service that proxies DNS requests coming from the managed instance to either of the two upstream DNS services:
+* Active Directory DNS Servers
+* Kubernetes DNS Servers
+
+The following diagram of Active Directory Connector and SQL Managed Instance describes how the AD bring your own keytab (BYOK) integration mode works:
+
+![Active Directory Connector](media/active-directory-deployment/active-directory-connector-byok.png)
+
+## Input for deploying Active Directory (AD) Connector
+
+To deploy an instance of Active Directory Connector, several inputs are needed from the Active Directory domain environment.
+
+These inputs are provided in a YAML specification of AD Connector instance.
+
+The following metadata about the AD domain must be available before deploying an instance of AD Connector:
+* Name of the Active Directory domain
+* List of the domain controllers (fully qualified domain names)
+* List of the DNS server IP addresses
+
+The following input fields are exposed to the users in the Active Directory Connector spec:
+
+- **Required**
+
+ - `spec.activeDirectory.realm`
+ Name of the Active Directory domain in uppercase. This is the AD domain that this instance of AD Connector will be associated with.
+
+ - `spec.activeDirectory.domainControllers.primaryDomainController.hostname`
+ Fully qualified domain name of the primary domain controller in the AD domain.
+
+ To identify the primary domain controller, run this command on any Windows machine joined to the AD domain:
+
+ ```console
+ netdom query fsmo
+ ```
+
+ - `spec.activeDirectory.dns.nameserverIpAddresses`
+ List of Active Directory DNS server IP addresses. DNS proxy service will forward DNS queries in the provided domain name to these servers.
+
+- **Optional**
+
+ - `spec.activeDirectory.netbiosDomainName`
+ NETBIOS name of the Active Directory domain. This is the short domain name that represents the Active Directory domain.
+
+      This name is often used to qualify accounts in the AD domain. For example, if accounts in the domain are referred to as CONTOSO\admin, then CONTOSO is the NETBIOS domain name.
+
+ This field is optional. When not provided, it defaults to the first label of the `spec.activeDirectory.realm` field.
+
+ In most domain environments, this is set to the default value but some domain environments may have a non-default value.
+
+ - `spec.activeDirectory.domainControllers.secondaryDomainControllers[*].hostname`
+ List of the fully qualified domain names of the secondary domain controllers in the AD domain.
+
+ If your domain is served by multiple domain controllers, it is a good practice to provide some of their fully qualified domain names in this list. This allows high-availability for Kerberos operations.
+
+ This field is optional and not needed if your domain is served by only one domain controller.
+
+ - `spec.activeDirectory.dns.domainName`
+ DNS domain name for which DNS lookups should be forwarded to the Active Directory DNS servers.
+
+ A DNS lookup for any name belonging to this domain or its descendant domains will get forwarded to Active Directory.
+
+ This field is optional. When not provided, it defaults to the value provided for `spec.activeDirectory.realm` converted to lowercase.
+
+ - `spec.activeDirectory.dns.replicas`
+ Replica count for DNS proxy service. This field is optional and defaults to 1 when not provided.
+
+ - `spec.activeDirectory.dns.preferK8sDnsForPtrLookups`
+ Flag indicating whether to prefer Kubernetes DNS server response over AD DNS server response for IP address lookups.
+
+ DNS proxy service relies on this field to determine which upstream group of DNS servers to prefer for IP address lookups.
+
+      This field is optional. When not provided, it defaults to `true`; that is, DNS lookups of IP addresses are first forwarded to Kubernetes DNS servers.
+
+ If Kubernetes DNS servers fail to answer the lookup, the query is then forwarded to AD DNS servers.
++
+## Deploy a bring your own keytab (BYOK) Active Directory (AD) connector
+
+To deploy an AD connector, create a .yaml specification file called `active-directory-connector.yaml`.
+
+The following example shows a bring your own keytab (BYOK) AD connector that uses an AD domain named `CONTOSO.LOCAL`. Be sure to replace the values with the ones for your AD domain.
+
+```yaml
+apiVersion: arcdata.microsoft.com/v1beta1
+kind: ActiveDirectoryConnector
+metadata:
+ name: adarc
+ namespace: <namespace>
+spec:
+ activeDirectory:
+ realm: CONTOSO.LOCAL
+ domainControllers:
+ primaryDomainController:
+ hostname: dc1.contoso.local
+ secondaryDomainControllers:
+ - hostname: dc2.contoso.local
+ - hostname: dc3.contoso.local
+ dns:
+ preferK8sDnsForPtrLookups: false
+ nameserverIPAddresses:
+ - <DNS Server 1 IP address>
+ - <DNS Server 2 IP address>
+```
+
+The following command deploys the AD connector instance. Currently, only the kube-native approach to deployment is supported.
+
+```console
+kubectl apply -f active-directory-connector.yaml
+```
+
+After submitting the deployment of the AD Connector instance, you can check the status of the deployment using the following command.
+
+```console
+kubectl get adc -n <namespace>
+```
+
+## Next steps
+* [Deploy an Automatic Active Directory (AD) connector](deploy-automatic-active-directory-connector.md)
+* [Deploy SQL Managed Instance with Active Directory Authentication](deploy-active-directory-sql-managed-instance.md).
+* [Connect to AD-integrated Azure Arc-enabled SQL Managed Instance](connect-active-directory-sql-managed-instance.md).
+
azure-arc Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md
+
+ Title: Maintenance window - Azure Arc-enabled data services
+description: Article describes how to set a maintenance window
++++++ Last updated : 03/31/2022+++
+# Maintenance window - Azure Arc-enabled data services
+
+Configure a maintenance window on a data controller to define a time period for upgrades. In this time period, the Arc-enabled SQL Managed Instances on that data controller which have the `desiredVersion` property set to `auto` will be upgraded.
+
+During setup, specify a duration, recurrence, and start date and time. After the maintenance window starts, it will run for the period of time set in the duration. The instances attached to the data controller will begin upgrades (in parallel). At the end of the set duration, any upgrades that are in progress will continue to completion. Any instances that did not begin upgrading in the window will begin upgrading in the following recurrence.
+
+## Prerequisites
+
+An Azure Arc-enabled SQL Managed Instance with the [`desiredVersion` property set to `auto`](upgrade-sql-managed-instance-auto.md).
+
+## Limitations
+
+The maintenance window duration can be from 2 hours to 8 hours.
+
+Only one maintenance window can be set per data controller.
+
+## Configure a maintenance window
+
+The maintenance window has these settings:
+
+- Duration - The length of time the window will run, expressed in hours and minutes (HH:mm).
+- Recurrence - how often the window will occur. All words are case sensitive and must be capitalized. You can set weekly or monthly windows.
+ - Weekly
+ - [Week | Weekly][day of week]
+ - Examples:
+ - `--recurrence "Week Thursday"`
+ - `--recurrence "Weekly Saturday"`
+ - Monthly
+ - [Month | Monthly] [First | Second | Third | Fourth | Last] [day of week]
+ - Examples:
+ - `--recurrence "Month Fourth Saturday"`
+ - `--recurrence "Monthly Last Monday"`
+ - If recurrence isn't specified, it will be a one-time maintenance window.
+- Start - the date and time the first window will occur, in the format `YYYY-MM-DDThh:mm` (24-hour format).
+ - Example:
+ - `--start "2022-02-01T23:00"`
+- Time Zone - the [time zone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) associated with the maintenance window.
+
+#### CLI
+
+To create a maintenance window, use the following command:
+
+```cli
+az arcdata dc update --maintenance-start <date and time> --maintenance-duration <time> --maintenance-recurrence <interval> --maintenance-time-zone <time zone> --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
+
+```cli
+az arcdata dc update --maintenance-start "2022-01-01T23:00" --maintenance-duration 3:00 --maintenance-recurrence "Monthly First Saturday" --maintenance-time-zone US/Pacific --k8s-namespace arc --use-k8s
+```
+
+## Monitor the upgrades
+
+During the maintenance window, you can view the status of upgrades.
+
+```kubectl
+kubectl -n <namespace> get sqlmi -o yaml
+```
+
+The `status.runningVersion` and `status.lastUpdateTime` fields will show the latest version and when the status changed.
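+
+For a more compact view of just those fields, you can use a jsonpath query such as the following sketch (adjust the namespace for your environment):
+
+```kubectl
+kubectl -n <namespace> get sqlmi -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.runningVersion}{"\t"}{.status.lastUpdateTime}{"\n"}{end}'
+```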
+
+## View existing maintenance window
+
+You can view the maintenance window in the `datacontroller` spec.
+
+```kubectl
+kubectl describe datacontroller -n <namespace>
+```
+
+Output:
+
+```text
+Spec:
+ Settings:
+ Maintenance:
+ Duration: 3:00
+ Recurrence: Monthly First Saturday
+ Start: 2022-01-01T23:00
+ Time Zone: US/Pacific
+```
+
+## Failed upgrades
+
+There is no automatic rollback for failed upgrades. If an instance fails to upgrade automatically, manual intervention is needed to pin the instance to its current running version, using `az sql mi-arc upgrade`. After the issue is resolved, the version can be set back to "auto".
+
+```cli
+az sql mi-arc upgrade --name <instance name> --desired-version <version>
+```
+
+Example:
+```cli
+az sql mi-arc upgrade --name sql01 --desired-version v1.2.0_2021-12-15
+```
+
+## Disable maintenance window
+
+When the maintenance window is disabled, automatic upgrades will not run.
+
+```cli
+az arcdata dc update --maintenance-enabled false --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
+
+```cli
+az arcdata dc update --maintenance-enabled false --k8s-namespace arc --use-k8s
+```
+
+## Enable maintenance window
+
+When the maintenance window is enabled, automatic upgrades will resume.
+
+```cli
+az arcdata dc update --maintenance-enabled true --k8s-namespace <namespace> --use-k8s
+```
+
+Example:
+
+```cli
+az arcdata dc update --maintenance-enabled true --k8s-namespace arc --use-k8s
+```
+
+## Change maintenance window start time
+
+The update command can be used to change the maintenance start time.
+
+```cli
+az arcdata dc update --maintenance-start <date and time> --k8s-namespace arc --use-k8s
+```
+
+Example:
+
+```cli
+az arcdata dc update --maintenance-start "2022-04-15T23:00" --k8s-namespace arc --use-k8s
+```
+
+## Next steps
+
+[Enable automatic upgrades of a SQL Managed Instance](upgrade-sql-managed-instance-auto.md)
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Previously updated : 03/16/2022 Last updated : 04/06/2022 # Azure Arc-enabled SQL Managed Instance - disaster recovery (preview)
-Disaster recovery in Azure Arc-enabled SQL Managed Instance is achieved using distributed availability groups.
-
-Disaster recovery features for Azure Arc-enabled SQL Managed Instance are available as preview.
+To configure disaster recovery in Azure Arc-enabled SQL Managed Instance, set up failover groups.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)]
The following image shows a properly configured distributed availability group:
1. Provision the managed instance in the primary site. ```azurecli
- az sql mi-arc create --name sqlprimary --tier bc --replicas 3 --k8s-namespace my-namespace --use-k8s
+ az sql mi-arc create --name <primaryinstance> --tier bc --replicas 3 --k8s-namespace <namespace> --use-k8s
``` 2. Provision the managed instance in the secondary site and configure as a disaster recovery instance. At this point, the system databases are not part of the contained availability group. ```azurecli
- az sql mi-arc create --name sqlsecondary --tier bc --replicas 3 --disaster-recovery-site true --k8s-namespace my-namespace --use-k8s
+ az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --disaster-recovery-site true --k8s-namespace <namespace> --use-k8s
``` 3. Copy the mirroring certificates from each site to a location that's accessible to both the geo-primary and geo-secondary instances.
The following image shows a properly configured distributed availability group:
az sql mi-arc get-mirroring-cert --name sqlsecondary --cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s ```
-4. Create the distributed availability group resource on both sites.
+4. Create the failover group resource on both sites.
+
+ If the managed instance names are identical between the two sites, you do not need to use the `--shared-name <name of failover group>` parameter.
+
+ If the managed instance names are different between the two sites, use the `--shared-name <name of failover group>` parameter.
- Use `az sql mi-arc dag...` to complete the task. The command seeds system databases in the disaster recovery instance, from the primary instance.
+   The following examples use `az sql instance-failover-group-arc create` with the `--shared-name <name of failover group>` parameter to complete the task. The command seeds system databases in the disaster recovery instance from the primary instance.
> [!NOTE]
- > The distributed availability group name should be identical on both sites.
+ > The `shared-name` value should be identical on both sites.
+
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary DAG resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
- ```azurecli
- az sql mi-arc dag create --dag-name <name of DAG> --name <name for primary DAG resource> --local-instance-name <primary instance name> --role primary --remote-instance-name <secondary instance name> --remote-mirroring-url tcp://<secondary IP> --remote-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
- az sql mi-arc dag create --dag-name <name of DAG> --name <name for secondary DAG resource> --local-instance-name <secondary instance name> --role secondary --remote-instance-name <primary instance name> --remote-mirroring-url tcp://<primary IP> --remote-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
- ```
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary DAG resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
+ ```
+ Example:
- Example:
- ```azurecli
- az sql mi-arc dag create --dag-name dagtest --name dagPrimary --local-instance-name sqlPrimary --role primary --remote-instance-name sqlSecondary --remote-mirroring-url tcp://10.20.5.20:970 --remote-mirroring-cert-file $HOME/sqlcerts/sqlsecondary.pem --k8s-namespace my-namespace --use-k8s
+ ```azurecli
+ az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
- az sql mi-arc dag create --dag-name dagtest --name dagSecondary --local-instance-name sqlSecondary --role secondary --remote-instance-name sqlPrimary --remote-mirroring-url tcp://10.20.5.50:970 --remote-mirroring-cert-file $HOME/sqlcerts/sqlprimary.pem --k8s-namespace my-namespace --use-k8s
- ```
+   az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
+ ```
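+
+   After both resources are created, you can optionally verify them with kubectl. This sketch assumes the `failovergroups.sql.arcdata.microsoft.com` custom resource definition installed by the data services extension:
+
+   ```console
+   kubectl get failovergroups.sql.arcdata.microsoft.com -n my-namespace
+   ```
+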
## Manual failover from primary to secondary instance
-Use `az sql mi-arc dag...` to initiate a failover from primary to secondary. The following command initiates a failover from the primary instance to the secondary instance. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
+Use `az sql instance-failover-group-arc ...` to initiate a failover from primary to secondary. The following command initiates a failover from the primary instance to the secondary instance. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
```azurecli
-az sql mi-arc dag update --name <name of DAG resource> --role secondary --k8s-namespace <namespace> --use-k8s
+az sql instance-failover-group-arc update --name <name of DAG resource> --role secondary --k8s-namespace <namespace> --use-k8s
``` Example: ```azurecli
-az sql mi-arc dag update --name dagtest --role secondary --k8s-namespace <namespace> --use-k8s
+az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s
``` - ## Forced failover In the circumstance when the geo-primary instance becomes unavailable, the following commands can be run on the geo-secondary DR instance to promote to primary with a forced failover incurring potential data loss.
In the circumstance when the geo-primary instance becomes unavailable, the follo
Run the below command on geo-primary, if available: ```azurecli
-az sql mi-arc dag update -k test --name dagtestp --use-k8s --role force-secondary
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name primarycr --use-k8s --role force-secondary
``` On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss. ```azurecli
-az sql mi-arc dag update -k test --name dagtests --use-k8s --role force-primary-allow-data-loss
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss
``` ## Limitation
azure-arc Monitor Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-certificates.md
The following table describes the requirements for each certificate and key.
|Requirement|Logs certificate|Metrics certificate| |--|--|--| |CN|`logsui-svc`|`metricsui-svc`|
-|SANs| `logsui-external-svc.${NAMESPACE}.svc.cluster.local`<br/><br>`logsui-svc` | `metricsui-external-svc.${NAMESPACE}.svc.cluster.local`<br/><br>`metricsui-svc`|
+|SANs| None required | `metricsui-svc.${NAMESPACE}.${K8S_DNS_DOMAIN_NAME}`|
|keyUsage|`digitalsignature`<br/><br>`keyEncipherment`|`digitalsignature`<br/><br>`keyEncipherment`| |extendedKeyUsage|`serverAuth`|`serverAuth`|
+> [!NOTE]
+> The default `K8S_DNS_DOMAIN_NAME` is `svc.cluster.local`, though it may differ depending on environment and configuration.
+ The GitHub repository directory includes example template files that identify the certificate specifications. - [/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/scripts/monitoring/logsui-ssl.conf.tmpl)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
Previously updated : 03/09/2022 Last updated : 04/06/2022 # Customer intent: As a data professional, I want to understand why my solutions would benefit from running with Azure Arc-enabled data services so that I can leverage the capability of the feature.
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## April 2022
+
+This release is published April 6, 2022.
+
+### Image tag
+
+`v1.5.0_2022-04-05`
+
+For complete release version information, see [Version log](version-log.md).
+
+### Data controller
+
+- Logs are retained in ElasticSearch for 2 weeks by default now.
+- Upgrades are now limited to only upgrading to the next incremental minor or major version. For example:
+ - Supported version upgrades:
+ - 1.1 -> 1.2
+ - 1.3 -> 2.0
+  - Not supported version upgrades:
+    - 1.1 -> 1.4
+ Not supported because one or more minor versions are skipped.
+- Updates to open source projects included in Azure Arc-enabled data services to patch vulnerabilities.
+
+### Azure Arc-enabled SQL Managed Instance
+
+You can create a maintenance window on the data controller, and if you have SQL managed instances with a desired version set to `auto`, they will be upgraded in the next maintenance windows after a data controller upgrade.
+
+Metrics for each replica in a business critical instance are now sent to the Azure portal so you can view them in the monitoring charts.
+
+AD authentication connectors can now be set up in an `automatic mode`, which uses a service account to automatically create SQL service accounts, SPNs, and DNS entries, as an alternative to the AD authentication connectors that use the `Bring Your Own Keytab` mode.
+
+Backup and point-in-time-restore when a database has Transparent Data Encryption (TDE) enabled is now supported.
+
+Change Data Capture (CDC) is now enabled in Azure Arc-enabled SQL Managed Instance.
+
+Bug fixes for replica scaling in Arc SQL MI Business Critical and database restore when there is insufficient disk space.
+
+Distributed availability groups have been renamed to failover groups. The `az sql mi-arc dag` command group has been moved to `az sql instance-failover-group-arc`. Before upgrade, delete all resources of the `dag` resource type.
+
+### User experience improvements
+
+You can now use the Azure CLI `az arcdata dc create` command to create the following resources in one command:
+- A custom location
+- A data services extension
+- A data controller
+
+New enforcements of constraints:
+
+- The data controller and managed instance resources it manages must be in the same resource group.
+- There can only be one data controller in a given custom location.
+
+#### Azure Data Studio
+
+During direct connected mode data controller creation, you can now specify the log analytics workspace information for auto sync upload of the logs.
+ ## March 2022 This release is published March 8, 2022.
For complete release version information, see [Version log](version-log.md).
### Data controller - Initiate an upgrade of the data controller from the portal in the direct connected mode-- Removed block on data controller upgrade if there are Azure Arc-enabled SQL Managed Instance business critical instances that exist
+- Removed block on data controller upgrade if there are business critical instances that exist
- Better handling of delete user experiences in Azure portal ### SQL Managed Instance
Use the following tools:
- Currently, modifying the configuration of ElasticSearch and Kibana is not supported beyond what is available through the Kibana administrative experience. Only basic authentication with a single user is supported. -- Custom metrics in Azure portal is in preview.
+- Custom metrics in Azure portal - preview.
- Exporting usage/billing information, metrics, and logs using the command `az arcdata dc export` requires bypassing SSL verification for now. You will be prompted to bypass SSL verification or you can set the `AZDATA_VERIFY_SSL=no` environment variable to avoid prompting. There is no way to configure an SSL certificate for the data controller export API currently.
Use the following tools:
#### Azure Arc-enabled PostgreSQL Hyperscale - At this time, PosgreSQL Hyperscale can't be used on Kubernetes version 1.22 and higher. -- Backup and restore operations no longer work in the July 30 release. This is a temporary limitation. Use the June 2021 release for now if you need to do to back up or restore. This will be fixed in a future release.
+- Backup and restore no longer work in the July 30 release. This is a temporary limitation. Use the June 2021 release for now if you need to back up or restore. This will be fixed in a future release.
- It is not possible to enable and configure the `pg_cron` extension at the same time. You need to use two commands for this. One command to enable it and one command to configure it. For example:
Use the following tools:
##### Point-in-time restore(PITR) supportability and limitations: - Doesn't support restore from one Azure Arc-enabled SQL Managed Instance to another Azure Arc-enabled SQL Managed Instance. The database can only be restored to the same Azure Arc-enabled SQL Managed Instance where the backups were created.-- Renaming of a databases is currently not supported, for point in time restore purposes.
+- Renaming a database is currently not supported for point-in-time restore purposes.
- Currently there is no CLI command or an API to provide the allowed time window information for point-in-time restore. You can provide a time within a reasonable window, since the time the database was created, and if the timestamp is valid the restore would work. If the timestamp is not valid, the allowed time window will be provided via an error message. - No support for restoring a TDE enabled database. - A deleted database cannot be restored currently.
Use the following tools:
- System database `model` is not backed up in order to prevent interference with creation/deletion of database. The DB gets locked when admin operations are performed. - Currently only `master` and `msdb` system databases are backed up. Only full backups are performed every 12 hours. - Only `ONLINE` user databases are backup up.-- Default recovery point objective (RPO): 5 minutes. Can not be modified in current release.
+- Default recovery point objective (RPO): 5 minutes. Can't be modified in current release.
- Backups are retained indefinitely. To recover space, manually delete backups. ##### Other limitations
This preview release is published July 13, 2021.
#### New deployment templates -- Kubernetes native deployment templates have been modified for for data controller, bootstrapper, & SQL Managed Instance. Update your .yaml templates. [Sample yaml files](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml)
+- Kubernetes native deployment templates have been modified for data controller, bootstrapper, & SQL Managed Instance. Update your .yaml templates. [Sample yaml files](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml)
#### New Azure CLI extension for data controller and Azure Arc-enabled SQL Managed Instance
Additional updates include:
- Issues with Python environments when using azdata in notebooks in Azure Data Studio resolved - The pg_audit extension is now available for PostgreSQL Hyperscale - A backup ID is no longer required when doing a full restore of a PostgreSQL Hyperscale database-- The status (health state) is reported for each of the PostgreSQL instances that constitute a sever group
+- The status (health state) is reported for each of the PostgreSQL instances in a server group
In earlier releases, the status was aggregated at the server group level and not itemized at the PostgreSQL node level.
This release introduces the following breaking changes:
## September 2020
-Azure Arc-enabled data services is released for public preview. Azure Arc-enabled data services allow you to manage data services anywhere.
+Azure Arc-enabled data services allow you to manage data services anywhere. This is a preview release.
- SQL Managed Instance - PostgreSQL Hyperscale
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## April 6, 2022
+
+|Component |Value |
+|--||
+|Container images tag |`v1.5.0_2022-04-05`|
+|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`kafkas.arcdata.microsoft.com`: v1beta1</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3, v4</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2|
+|ARM API version|2021-11-01|
+|`arcdata` Azure CLI extension version| 1.3.0|
+|Arc enabled Kubernetes helm chart extension version|1.1.19091004|
+|Arc Data extension for Azure Data Studio|1.0|
+ ## March 08, 2022 |Component |Value |
This article identifies the component versions with each release of Azure Arc-en
|Component |Value | |--||
-|Container images tag |v1.3.0_2022-01-27
+|Container images tag |`v1.3.0_2022-01-27`
|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2</br>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2</br>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2</br>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2, v3</br>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2</br>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1</br>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2</br>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1| |ARM API version|2021-11-01| |`arcdata` Azure CLI extension version| 1.2.0|
The following table describes the components in this release.
|Component |Value | |--||
-|Container images tag | v1.2.0_2021-12-15 |
+|Container images tag | `v1.2.0_2021-12-15` |
|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1 | |ARM API version | 2021-11-01 | |`arcdata` Azure CLI extension version | 1.1.2 |
The following table describes the components in this release.
|Component |Value | |--||
-|Container images tag | v1.1.0_2021-11-02 |
+|Container images tag | `v1.1.0_2021-11-02` |
|CRD names and versions | `datacontrollers.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1, v2 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1, v2beta2 | |ARM API version | 2021-11-01 | |`arcdata` Azure CLI extension version | 1.1.0, (Nov 3),</br>1.1.1 (Nov4) |
This release introduces general availability for Azure Arc-enabled SQL Managed I
|Component |Value | |--||
-|Container images tag | v1.0.0_2021-07-30 |
+|Container images tag | `v1.0.0_2021-07-30` |
|CRD names and versions |`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 <br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1 <br/>`monitors.arcdata.microsoft.com`: v1beta1, v1 <br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 <br/>`postgresqls.arcdata.microsoft.com`: v1beta1 <br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1 <br/>`dags.sql.arcdata.microsoft.com`: v1beta1 <br/> | |ARM API version | 2021-08-01 (stable) | |`arcdata` Azure CLI extension version | 1.0 |
azure-arc Conceptual Connectivity Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-connectivity-modes.md
The connectivity status of a cluster is determined by the time of the latest hea
| Connecting | Azure Arc-enabled Kubernetes resource is created in Azure Resource Manager, but service hasn't received the agent heartbeat yet. | | Connected | Azure Arc-enabled Kubernetes service received an agent heartbeat sometime within the previous 15 minutes. | | Offline | Azure Arc-enabled Kubernetes resource was previously connected, but the service hasn't received any agent heartbeat for 15 minutes. |
-| Expired | Managed identity certificate of the cluster has an expiration window of 90 days after it is issued. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring, and policy stop working on this cluster. More information on how to address expired Azure Arc-enabled Kubernetes resources can be found [in the FAQ article](./faq.md#how-to-address-expired-azure-arc-enabled-kubernetes-resources). |
+| Expired | Managed identity certificate of the cluster has an expiration window of 90 days after it is issued. Once this certificate expires, the resource is considered `Expired` and all features such as configuration, monitoring, and policy stop working on this cluster. More information on how to address expired Azure Arc-enabled Kubernetes resources can be found [in the FAQ article](./faq.md#how-do-i-address-expired-azure-arc-enabled-kubernetes-resources). |
## Next steps
azure-arc Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/faq.md
Title: "Azure Arc-enabled Kubernetes and GitOps frequently asked questions" Previously updated : 02/15/2022 Last updated : 04/06/2022 description: "This article contains a list of frequently asked questions related to Azure Arc-enabled Kubernetes and Azure GitOps" keywords: "Kubernetes, Arc, Azure, containers, configuration, GitOps, faq"+ # Frequently Asked Questions - Azure Arc-enabled Kubernetes and GitOps
Azure Arc-enabled Kubernetes allows you to extend AzureΓÇÖs management capabilit
Connecting an Azure Kubernetes Service (AKS) cluster to Azure Arc is only required for running Azure Arc-enabled services like App Services and Data Services on top of the cluster. This can be done using the [custom locations](custom-locations.md) feature of Azure Arc-enabled Kubernetes. This is a point in time limitation for now till cluster extensions and custom locations are introduced natively on top of AKS clusters. If you don't want to use custom locations and just want to use management features like Azure Monitor and Azure Policy (Gatekeeper), they are available natively on AKS and connection to Azure Arc is not required in such cases.
-
+ ## Should I connect my AKS-HCI cluster and Kubernetes clusters on Azure Stack Edge to Azure Arc? Yes, connecting your AKS-HCI cluster or Kubernetes clusters on Azure Stack Edge to Azure Arc provides clusters with resource representation in Azure Resource Manager. This resource representation extends capabilities like Cluster Configuration, Azure Monitor, and Azure Policy (Gatekeeper) to connected Kubernetes clusters. If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, AKS on Azure Stack HCI (>= April 2021 update), or AKS on Windows Server 2019 Datacenter (>= April 2021 update), then the Kubernetes configuration is included at no charge.
-## How to address expired Azure Arc-enabled Kubernetes resources?
+## How do I address expired Azure Arc-enabled Kubernetes resources?
-The system assigned managed identity associated with your Azure Arc-enabled Kubernetes cluster is only used by the Azure Arc agents to communicate with the Azure Arc services. The certificate associated with this system assigned managed identity has an expiration window of 90 days and the agents keep attempting to renew this certificate between Day 46 to Day 90. Once this certificate expires, the resource is considered `Expired` and all features (such as configuration, monitoring, and policy) stop working on this cluster and you'll then need to delete and connect the cluster to Azure Arc once again. It is thus advisable to have the cluster come online at least once between Day 46 to Day 90 time window to ensure renewal of the managed identity certificate.
+The system assigned managed identity associated with your Azure Arc-enabled Kubernetes cluster is only used by the Azure Arc agents to communicate with the Azure Arc services. The certificate associated with this system assigned managed identity has an expiration window of 90 days, and the agents will attempt to renew this certificate between Day 46 and Day 90. Once this certificate expires, the resource is considered `Expired`, all features (such as configuration, monitoring, and policy) stop working on this cluster, and you'll then need to delete and connect the cluster to Azure Arc once again. It is thus advisable to have the cluster come online at least once in the Day 46 to Day 90 window to ensure renewal of the managed identity certificate.
To check when the certificate is about to expire for any given cluster, run the following command:
If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp
``` 1. Recreate the Azure Arc-enabled Kubernetes resource by deploying agents on the cluster.
-
+ ```azurecli az connectedk8s connect -n <name> -g <resource-group> ```
If the value of `managedIdentityCertificateExpirationTime` indicates a timestamp
Yes, you can still use configurations on a cluster receiving deployments via a CI/CD pipeline. Compared to traditional CI/CD pipelines, GitOps configurations feature some extra benefits:
-**Drift reconciliation**
+### Drift reconciliation
The CI/CD pipeline applies changes only once during pipeline run. However, the GitOps operator on the cluster continuously polls the Git repository to fetch the desired state of Kubernetes resources on the cluster. If the GitOps operator finds the desired state of resources to be different from the actual state of resources on the cluster, this drift is reconciled.
-**Apply GitOps at scale**
+### Apply GitOps at scale
-CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, if you want to deploy the same configuration to all of your Kubernetes clusters, you would need to manually configure each Kubernetes cluster's credentials to the CI/CD pipeline.
+CI/CD pipelines are useful for event-driven deployments to your Kubernetes cluster (for example, a push to a Git repository). However, if you want to deploy the same configuration to all of your Kubernetes clusters, you would need to manually configure each Kubernetes cluster's credentials to the CI/CD pipeline.
For Azure Arc-enabled Kubernetes, since Azure Resource Manager manages your GitOps configurations, you can automate creating the same configuration across all Azure Arc-enabled Kubernetes and AKS resources using Azure Policy, within scope of a subscription or a resource group. This capability is even applicable to Azure Arc-enabled Kubernetes and AKS resources created after the policy assignment. This feature applies baseline configurations (like network policies, role bindings, and pod security policies) across the entire Kubernetes cluster inventory to meet compliance and governance requirements.
-**Cluster compliance**
+### Cluster compliance
The compliance state of each GitOps configuration is reported back to Azure. This lets you keep track of any failed deployments. ## Does Azure Arc-enabled Kubernetes store any customer data outside of the cluster's region?
-The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
+The feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. This is applicable for Azure Arc-enabled Open Service Mesh and Azure Key Vault Secrets Provider extensions supported in Azure Arc-enabled Kubernetes. For other cluster extensions, please see their documentation to learn how they store customer data. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
## Next steps
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/platform/conceptual-custom-locations.md
Title: "Overview of custom locations with Azure Arc" Previously updated : 02/17/2022 Last updated : 02/24/2022 description: "This article provides a conceptual overview of the custom locations capability of Azure Arc."
azure-cache-for-redis Cache Best Practices Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-scale.md
description: Learn how to scale your Azure Cache for Redis.
Previously updated : 08/25/2021 Last updated : 04/06/2022
If you're using TLS and you have a high number of connections, consider scaling
## Scaling and memory
-You can scale your cache instances in the Azure portal or programatically using PowerShell cmdlets, Azure CLI, and by using the Microsoft Azure Management Libraries (MAML).
+You can scale your cache instances in the Azure portal. Also, you can programmatically scale your cache using PowerShell cmdlets, the Azure CLI, or the Microsoft Azure Management Libraries (MAML).
Either way, when you scale a cache up or down, both `maxmemory-reserved` and `maxfragmentationmemory-reserved` settings automatically scale in proportion to the cache size. For example, if `maxmemory-reserved` is set to 3 GB on a 6-GB cache, and you scale to a 12-GB cache, the setting is automatically updated to 6 GB during scaling. When you scale down, the reverse happens.
For more information on scaling and memory, see [How to automate a scaling opera
> [!NOTE] > When you scale a cache up or down programmatically, any `maxmemory-reserved` or `maxfragmentationmemory-reserved` are ignored as part of the update request. Only your scaling change is honored. You can update these memory settings after the scaling operation has completed.
+## Minimizing your data helps scaling complete faster
+
+If preserving the data in the cache isn't a requirement, consider flushing the data prior to scaling. Flushing the cache helps the scaling operation complete more quickly so the new capacity is available sooner.
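+
+For example, one way to flush a cache is to run the `FLUSHALL` command through `redis-cli`. This is a sketch only; it assumes a redis-cli build with TLS support (version 6.0 or later), the default TLS port, and placeholder values for the cache host name and access key:
+
+```console
+redis-cli -h mycache.redis.cache.windows.net -p 6380 -a <access-key> --tls FLUSHALL
+```
+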
## Next steps
azure-fluid-relay Azure Function Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md
const key = "myTenantKey";
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> { // tenantId, documentId, userId and userName are required parameters const tenantId = (req.query.tenantId || (req.body && req.body.tenantId)) as string;
- const documentId = (req.query.documentId || (req.body && req.body.documentId)) as string;
+ const documentId = (req.query.documentId || (req.body && req.body.documentId)) as string | undefined;
const userId = (req.query.userId || (req.body && req.body.userId)) as string; const userName = (req.query.userName || (req.body && req.body.userName)) as string; const scopes = (req.query.scopes || (req.body && req.body.scopes)) as ScopeType[];
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRe
return; }
- if (!documentId) {
- context.res = {
- status: 400,
- body: "No documentId provided in query params"
- };
- return;
- }
- let user = { name: userName, id: userId }; // Will generate the token and returned by an ITokenProvider implementation to use with the AzureClient.
export class AzureFunctionTokenProvider implements ITokenProvider {
- [Add custom data to an auth token](connect-fluid-azure-service.md#adding-custom-data-to-tokens) - [How to: Deploy Fluid applications using Azure Static Web Apps](deploy-fluid-static-web-apps.md)
+- [How to: Validate a User Created a Document](validate-document-creator.md)
azure-fluid-relay Validate Document Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/validate-document-creator.md
+
+ Title: "How to: Validate a User Created a Document"
+description: How to validate that the user who created a document is the same user who is claiming to be creating the document.
+++ Last updated : 04/05/2022++
+fluid.url: https://fluidframework.com/docs/apis/azure-client/itokenprovider/
++
+# How to: Validate a User Created a Document
+
+> [!NOTE]
+> This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+
+When creating a document in Azure Fluid Relay, the JWT provided by the `ITokenProvider` for the creation request can only be used once. After creating a document, the client must generate a new JWT that contains the document ID provided by the service at creation time. If an application has an authorization service that manages document access control, it will need to know who created a document with a given ID in order to authorize the generation of a new JWT for access to that document.
+
+## Inform an Authorization Service when a document is Created
+
+An application can tie into the document creation lifecycle by implementing a public `documentPostCreateCallback()` property in its `TokenProvider`. This callback will be triggered directly after creating the document, before a client requests the new JWT it needs to gain read/write permissions to the document that was created.
+
+The `documentPostCreateCallback()` receives 2 parameters: 1) the ID of the document that was created and 2) a JWT signed by the service with no permission scopes. The authorization service can verify the given JWT and use the information in the JWT to grant the correct user permissions for the newly created document.
+
+### Create an endpoint for your document creation callback
+
+The example below is an [Azure Function](../../azure-functions/functions-overview.md) based on the example in [How to: Write a TokenProvider with an Azure Function](azure-function-token-provider.md#create-an-endpoint-for-your-tokenprovider-using-azure-functions).
+
+```typescript
+import { AzureFunction, Context, HttpRequest } from "@azure/functions";
+import { ITokenClaims, IUser } from "@fluidframework/protocol-definitions";
+import * as jwt from "jsonwebtoken";
+
+// NOTE: retrieve the key from a secure location.
+const key = "myTenantKey";
+
+const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
+ const token = (req.query.token || (req.body && req.body.token)) as string;
+ const documentId = (req.query.documentId || (req.body && req.body.documentId)) as string;
+
+ if (!token) {
+ context.res = {
+ status: 400,
+ body: "No token provided in request",
+ };
+ return;
+ }
+ if (!documentId) {
+ context.res = {
+ status: 400,
+ body: "No documentId provided in request",
+ };
+ return;
+ }
+
+ const claims = jwt.decode(token) as ITokenClaims;
+ if (!claims) {
+ context.res = {
+ status: 403,
+ body: "Missing token claims",
+ };
+ return;
+ }
+
+ const tenantId = claims.tenantId;
+    if (!tenantId) {
+ context.res = {
+ status: 400,
+ body: "No tenantId provided in token claims",
+ };
+ return;
+ }
+ if (!key) {
+ context.res = {
+ status: 404,
+ body: `No key found for the provided tenantId: ${tenantId}`,
+ };
+ return;
+ }
+ try {
+ jwt.verify(token, key);
+ } catch (e) {
+ if (e instanceof jwt.TokenExpiredError) {
+ context.res = {
+ status: 401,
+ body: `Token is expired`,
+ };
+ return;
+ }
+ context.res = {
+ status: 403,
+ body: `Token signed with invalid key`,
+ };
+ return;
+ }
+
+ const user: IUser = claims.user;
+ // Pseudo-function: implement according to your needs
+ giveUserPermissionsForDocument(documentId, user);
+
+ context.res = {
+ status: 200,
+ body: "OK",
+ };
+};
+
+export default httpTrigger;
+```
+
+### Implement the `documentPostCreateCallback`
+
+This example implementation below extends the [AzureFunctionTokenProvider](https://fluidframework.com/docs/apis/azure-client/azurefunctiontokenprovider/) and uses the [axios](https://www.npmjs.com/package/axios) library to make a simple HTTP request to the Azure Function used for generating tokens.
+
+```typescript
+import { AzureFunctionTokenProvider, AzureMember } from "@fluidframework/azure-client";
+import axios from "axios";
+
+/**
+ * Token Provider implementation for connecting to an Azure Function endpoint for
+ * Azure Fluid Relay token resolution.
+ */
+export class AzureFunctionTokenProviderWithDocumentCreateCallback extends AzureFunctionTokenProvider {
+ /**
+ * Creates a new instance using configuration parameters.
+ * @param azFunctionUrl - URL to Azure Function endpoint
+ * @param user - User object
+ */
+ constructor(
+ private readonly authAzFunctionUrl: string,
+ azFunctionUrl: string,
+ user?: Pick<AzureMember, "userId" | "userName" | "additionalDetails">,
+ ) {
+ super(azFunctionUrl, user);
+ }
+
+ public async documentPostCreateCallback?(documentId: string, creationToken: string): Promise<void> {
+ // Send the document ID and creation token in the request body so the
+ // authorization endpoint can read them from `req.body`.
+ await axios.post(this.authAzFunctionUrl, {
+ documentId,
+ token: creationToken,
+ });
+ }
+}
+```
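+
+For illustration, a hypothetical way to construct this token provider is sketched below. The endpoint URLs and user details are placeholder assumptions; how the provider is then wired into your Fluid client depends on your `@fluidframework/azure-client` configuration.
+
+```typescript
+// Hypothetical endpoint URLs; replace them with your deployed Azure Functions.
+const tokenProvider = new AzureFunctionTokenProviderWithDocumentCreateCallback(
+    "https://myauthfn.azurewebsites.net/api/documentCreated", // authorization callback endpoint
+    "https://mytokenfn.azurewebsites.net/api/getToken",       // token-generating endpoint
+    { userId: "user-1234", userName: "Test User" },
+);
+
+// Supply this provider as the `tokenProvider` in your AzureClient connection configuration
+// so that documentPostCreateCallback runs automatically after a document is created.
+```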
+
+## See also
+
+- [Add custom data to an auth token](connect-fluid-azure-service.md#adding-custom-data-to-tokens)
+- [Azure Fluid Relay token contract](fluid-json-web-token.md)
azure-functions Functions Bindings Rabbitmq Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-output.md
The attribute's constructor takes the following parameters:
# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQAttribute](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/RabbitMQAttribute.cs).
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQAttribute](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/RabbitMQAttribute.cs).
Here's a `RabbitMQTrigger` attribute in a method signature for an in-process library:
ILogger log)
# [Isolated process](#tab/isolated-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/Trigger/RabbitMQTriggerAttribute.cs) attribute.
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
azure-functions Functions Bindings Rabbitmq Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-rabbitmq-trigger.md
The attribute's constructor takes the following parameters:
# [In-process](#tab/in-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/Trigger/RabbitMQTriggerAttribute.cs) attribute.
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
Here's a `RabbitMQTrigger` attribute in a method signature for an in-process library:
public static void RabbitMQTest([RabbitMQTrigger("queue")] string message, ILogg
# [Isolated process](#tab/isolated-process)
-In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/src/Trigger/RabbitMQTriggerAttribute.cs) attribute.
+In [C# class libraries](functions-dotnet-class-library.md), use the [RabbitMQTrigger](https://github.com/Azure/azure-functions-rabbitmq-extension/blob/dev/extension/WebJobs.Extensions.RabbitMQ/Trigger/RabbitMQTriggerAttribute.cs) attribute.
Here's a `RabbitMQTrigger` attribute in a method signature for an isolated process library:
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
See the complete regional availability of Functions on the [Azure web site](http
|China East 2| 100 | 20 | |China North 2| 100 | 20 | |East Asia| 100 | 20 |
-|East US | 100 | 40 |
+|East US | 100 | 60 |
|East US 2| 100 | 20 | |France Central| 100 | 20 | |Germany West Central| 100 | 20 |
azure-functions Functions Cli Mount Files Storage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
# Mount a file share to a Python function app using Azure CLI
-This Azure Functions sample script creates a function app using the [Consumption plan](../consumption-plan.md)and creates a share in Azure Files. It then mounts the share so that the data can be accessed by your functions.
+This Azure Functions sample script creates a function app using the [Consumption plan](../consumption-plan.md) and creates a share in Azure Files. It then mounts the share so that the data can be accessed by your functions.
>[!NOTE] >The function app created runs on Python version 3.9. Azure Functions also [supports Python versions 3.7 and 3.8](../functions-reference-python.md#python-version).
azure-government Documentation Government Overview Wwps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md
recommendations: false Previously updated : 03/07/2022 Last updated : 04/06/2022 # Azure for secure worldwide public sector cloud adoption
Listed below are some of the options available to you to safeguard your data in
- While you can't control the precise network path for data in transit, data encryption in transit helps protect data from interception. - Azure is a 24x7 globally operated service; however, support and troubleshooting rarely require access to your data. - If you want extra control for support and troubleshooting scenarios, you can use Customer Lockbox for Azure to approve or deny access to your data.-- Microsoft will notify you of any breach of your data (customer or personal) within 72 hours of incident declaration.
+- Microsoft will notify you of any breach of your data – customer or personal – within 72 hours of incident declaration.
- You can monitor potential threats and respond to incidents on your own using Microsoft Defender for Cloud. Using Azure data protection technologies and intelligent edge capabilities from the Azure Stack portfolio of products, you can process confidential and secret data in secure isolated infrastructure within the public multi-tenant cloud or top secret data on premises and at the edge under your full operational control.
With innovative solutions such as [IoT Hub](https://azure.microsoft.com/services
### Precision Agriculture with Farm Beats
-Agriculture plays a vital role in most economies worldwide. In the US, over 70% of the rural households depend on agriculture as it contributes about 17% to the total GDP and provides employment to over 60% of the population. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather numerous data from farms that we couldnΓÇÖt get before, and then by applying AI and ML algorithms we're able to turn this data into actionable insights for farmers. We call this technique data-driven farming. What we mean by data-driven farming is the ability to map every farm and overlay it with data. For example, what is the soil moisture level 15 cm below surface, what is the soil temperature 15 cm below surface, etc. These maps can then enable techniques, such as Precision Agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Despite the fact the Precision Agriculture as a technique was proposed more than 30 years ago, it hasnΓÇÖt taken off. The biggest reason is the inability to capture numerous data from farms to accurately represent the conditions in the farm. Our goal as part of the Farm Beats project is to be able to accurately construct precision maps at a fraction of the cost.
+Agriculture plays a vital role in most economies worldwide. In the US, over 70% of the rural households depend on agriculture as it contributes about 17% to the total GDP and provides employment to over 60% of the population. In project [Farm Beats](https://www.microsoft.com/research/project/farmbeats-iot-agriculture/), we gather numerous data from farms that we couldn't get before, and then by applying AI and ML algorithms we're able to turn this data into actionable insights for farmers. We call this technique data-driven farming. What we mean by data-driven farming is the ability to map every farm and overlay it with data. For example, what is the soil moisture level 15 cm below surface, what is the soil temperature 15 cm below surface, and so on. These maps can then enable techniques, such as Precision Agriculture, which has been shown to improve yield, reduce costs, and benefit the environment. Despite the fact that Precision Agriculture as a technique was proposed more than 30 years ago, it hasn't taken off. The biggest reason is the inability to capture numerous data from farms to accurately represent the conditions in the farm. Our goal as part of the Farm Beats project is to be able to accurately construct precision maps at a fraction of the cost.
### Unleashing the power of analytics with synthetic data
Synthetic data can exist in several forms, including text, audio, video, and hyb
### Knowledge mining
-The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, etc. [Knowledge mining](/learn/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies.
+The exponential growth of unstructured data gathering in recent years has created many analytical problems for government agencies. This problem intensifies when data sets come from diverse sources such as text, audio, video, imaging, and so on. [Knowledge mining](/learn/modules/azure-artificial-intelligence/2-knowledge-mining) is the process of discovering useful knowledge from a collection of diverse data sources. This widely used data mining technique is a process that includes data preparation and selection, data cleansing, incorporation of prior knowledge on data sets, and interpretation of accurate solutions from the observed results. This process has proven to be useful for large volumes of data in different government agencies.
For instance, captured data from the field often includes documents, pamphlets, letters, spreadsheets, propaganda, videos, and audio files across many disparate structured and unstructured formats. Buried within the data are [actionable insights](https://www.youtube.com/watch?v=JFdF-Z7ypQo) that can enhance effective and timely response to crisis and drive decisions. The objective of knowledge mining is to enable decisions that are better, faster, and more humane by implementing proven commercial algorithm-based technologies.
When deploying applications that are subject to regulatory compliance obligation
- Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) assessment report, including Azure Stack Hub control mapping to CCM domains and controls. - FedRAMP High System Security Plan (SSP) precompiled template to demonstrate how Azure Stack Hub addresses applicable controls, Customer Responsibility Matrix for the FedRAMP High baseline, and FedRAMP assessment report produced by an accredited third-party assessment organization (3PAO).
-**[Azure Blueprints](https://azure.microsoft.com/services/blueprints/)** is a service that helps automate compliance and cybersecurity risk management in cloud environments. For more information on Azure Blueprints, including production-ready blueprint solutions for ISO 27001, NIST SP 800-53, PCI DSS, HITRUST, and other standards, see the [Azure Blueprints guidance](../governance/blueprints/overview.md).
+**Azure Policy regulatory compliance built-in initiatives** map to compliance domains and controls in key standards, including:
+
+- [Australian Government ISM PROTECTED](../governance/policy/samples/australia-ism.md)
+- [Canada Federal PBMM](../governance/policy/samples/canada-federal-pbmm.md)
+- [ISO/IEC 27001](../governance/policy/samples/iso-27001.md)
+- [US Government FedRAMP High](../governance/policy/samples/fedramp-high.md)
+- And others
+
+For more regulatory compliance built-in initiatives, see [Azure Policy samples](../governance/policy/samples/index.md#regulatory-compliance).
+
+Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility – customer, Microsoft, or shared. For Microsoft-responsible controls, we provide extra audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](../governance/policy/how-to/get-compliance-data.md) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
 Azure compliance and certification resources are intended to help you address your own compliance obligations with various standards and regulations. You may have an established cloud adoption mandate in your country and the corresponding regulation to facilitate cloud onboarding. Or you may still operate traditional on-premises datacenters and are in the process of formulating your cloud adoption strategy. Azure's extensive compliance portfolio can help you irrespective of your cloud adoption maturity level.
This section addresses common customer questions related to Azure public, privat
### Transparency and audit -- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You'll need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Microsoft Defender for Cloud [audit reports blade](https://portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Additional compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
+- **Audit documentation:** Does Microsoft make all audit documentation readily available to customers to download and examine? **Answer:** Yes, Microsoft makes independent third-party audit reports and other related documentation available for download under a non-disclosure agreement from the Azure portal. You'll need an existing Azure subscription or [free trial subscription](https://azure.microsoft.com/free/) to access the Microsoft Defender for Cloud [audit reports blade](https://portal.azure.com/#blade/Microsoft_Azure_Security/AuditReportsBlade). Extra compliance documentation is available from the Service Trust Portal (STP) [Audit Reports](https://servicetrust.microsoft.com/ViewPage/MSComplianceGuideV3) section. You must log in to access audit reports on the STP. For more information, see [Get started with the Microsoft Service Trust Portal](/microsoft-365/compliance/get-started-with-service-trust-portal).
- **Process auditability:** Does Microsoft make its processes, data flow, and documentation available to customers or regulators for audit? **Answer:** Microsoft offers a Regulator Right to Examine, which is a program Microsoft implemented to provide regulators with direct right to examine Azure, including the ability to conduct an on-site examination, to meet with Microsoft personnel and Microsoft external auditors, and to access any related information, records, reports, and documents. - **Service documentation:** Can Microsoft provide in-depth documentation covering service architecture, software and hardware components, and data protocols? **Answer:** Yes, Microsoft provides extensive and in-depth Azure online documentation covering all these topics. For example, you can review documentation on Azure [products](../index.yml), [global infrastructure](https://azure.microsoft.com/global-infrastructure/), and [API reference](/rest/api/azure/).
azure-government Documentation Government Plan Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-plan-compliance.md
description: Provides an overview of the available compliance assurances for Azu
- + recommendations: false Previously updated : 03/07/2022 Last updated : 04/06/2022 # Azure Government compliance
Azure Government maintains the following authorizations that pertain to Azure Go
- [DoD IL4](/azure/compliance/offerings/offering-dod-il4) PA issued by DISA - [DoD IL5](/azure/compliance/offerings/offering-dod-il5) PA issued by DISA
-For links to additional Azure Government compliance assurances, see [Azure compliance](../compliance/index.yml). For example, Azure Government can help you meet your compliance obligations with many US government requirements, including:
+For links to extra Azure Government compliance assurances, see [Azure compliance](../compliance/index.yml). For example, Azure Government can help you meet your compliance obligations with many US government requirements, including:
- [Criminal Justice Information Services (CJIS)](/azure/compliance/offerings/offering-cjis) - [Internal Revenue Service (IRS) Publication 1075](/azure/compliance/offerings/offering-irs-1075)
You must have an existing subscription or free trial account in [Azure](https://
## Azure Policy regulatory compliance built-in initiatives
-For additional customer assistance, Microsoft provides **Azure Policy regulatory compliance built-in initiatives**, which map to **compliance domains** and **controls** in key US government standards, including:
+For extra customer assistance, Microsoft provides Azure Policy regulatory compliance built-in initiatives, which map to **compliance domains** and **controls** in key US government standards, including:
- [FedRAMP High](../governance/policy/samples/gov-fedramp-high.md) - [DoD IL4](../governance/policy/samples/gov-dod-impact-level-4.md) - [DoD IL5](../governance/policy/samples/gov-dod-impact-level-5.md)
+- And others
-For additional regulatory compliance built-in initiatives that pertain to Azure Government, see [Azure Policy samples](../governance/policy/samples/index.md#regulatory-compliance).
+For more regulatory compliance built-in initiatives that pertain to Azure Government, see [Azure Policy samples](../governance/policy/samples/index.md#regulatory-compliance).
-Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility - customer, Microsoft, or shared. For Microsoft-responsible controls, we provide additional audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](../governance/policy/how-to/get-compliance-data.md) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
+Regulatory compliance in Azure Policy provides built-in initiative definitions to view a list of the controls and compliance domains based on responsibility – customer, Microsoft, or shared. For Microsoft-responsible controls, we provide extra audit result details based on third-party attestations and our control implementation details to achieve that compliance. Each control is associated with one or more Azure Policy definitions. These policies may help you [assess compliance](../governance/policy/how-to/get-compliance-data.md) with the control; however, compliance in Azure Policy is only a partial view of your overall compliance status. Azure Policy helps to enforce organizational standards and assess compliance at scale. Through its compliance dashboard, it provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to more granular status.
## Next steps
azure-monitor Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux.md
Title: Install Log Analytics agent on Linux computers description: This article describes how to connect Linux computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Linux. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
Title: Troubleshoot issues with Log Analytics agent for Windows description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics agent for Windows in Azure Monitor. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
Title: Install Log Analytics agent on Windows computers description: This article describes how to connect Windows computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Windows. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Data Sources Collectd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-collectd.md
Title: Collect data from CollectD in Azure Monitor | Microsoft Docs description: CollectD is an open source Linux daemon that periodically collects data from applications and system level information. This article provides information on collecting data from CollectD in Azure Monitor. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Data Sources Iis Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-iis-logs.md
Title: Collect IIS logs with Log Analytics agent in Azure Monitor description: Internet Information Services (IIS) stores user activity in log files that can be collected by Azure Monitor. This article describes how to configure collection of IIS logs and details of the records they create in Azure Monitor. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Data Sources Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-json.md
Title: Collecting custom JSON data sources with the Log Analytics agent for Linux in Azure Monitor description: Custom JSON data sources can be collected into Azure Monitor using the Log Analytics Agent for Linux. These custom data sources can be simple scripts returning JSON such as curl or one of FluentD's 300+ plugins. This article describes the configuration required for this data collection. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Data Sources Linux Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-linux-applications.md
Title: Collect Linux application performance in Azure Monitor | Microsoft Docs description: This article provides details for configuring the Log Analytics agent for Linux to collect performance counters for MySQL and Apache HTTP Server. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Diagnostics Extension To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-to-application-insights.md
Title: Send Azure Diagnostics data to Application Insights description: Update the Azure Diagnostics public configuration to send data to Application Insights. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Diagnostics Extension Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-troubleshooting.md
Title: Troubleshooting Azure Diagnostics extension description: Troubleshoot problems when using Azure diagnostics in Azure Virtual Machines, Service Fabric, or Cloud Services. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
Title: Connect Operations Manager to Azure Monitor | Microsoft Docs description: To maintain your existing investment in System Center Operations Manager and use extended capabilities with Log Analytics, you can integrate Operations Manager with your workspace. Previously updated : 03/31/2021 Last updated : 03/31/2022
azure-monitor Activity Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/activity-log-alerts.md
Title: Activity log alerts in Azure Monitor description: Be notified via SMS, webhook, SMS, email and more, when certain events occur in the activity log. Previously updated : 09/17/2018 Last updated : 04/04/2022
Last updated 09/17/2018
## Overview
-Activity log alerts are alerts that activate when a new [activity log event](../essentials/activity-log-schema.md) occurs that matches the conditions specified in the alert. Based on the order and volume of the events recorded in [Azure activity log](../essentials/platform-logs-overview.md), the alert rule will fire. Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They also can be created, updated, or deleted in the Azure portal. This article introduces the concepts behind activity log alerts. For more information on creating or usage of activity log alert rules, see [Create and manage activity log alerts](alerts-activity-log.md).
+Activity log alerts allow you to be notified about events and operations that are logged in [Azure Activity Log](../essentials/activity-log.md). An alert is fired when a new [activity log event](../essentials/activity-log-schema.md) occurs that matches the conditions specified in the alert rule. Activity log alert rules are Azure resources, so they can be created by using an Azure Resource Manager template. They can also be created, updated, or deleted in the Azure portal. This article introduces the concepts behind activity log alerts. For more information on creating or using activity log alert rules, see [Create and manage activity log alerts](./alerts-activity-log.md).
+
+## Alerting on activity log event categories
+
+You can create activity log alert rules to receive notifications on one of the following activity log event categories:
+
+* **Administrative events** - get notified when a create, update, delete, or action operation occurs on resources in your Azure subscription, resource group, or on a specific resource. For example, you might want to be notified when any virtual machine in myProductionResourceGroup is deleted. Or, you might want to be notified if any new roles are assigned to a user in your subscription.
+* **Service Health events** - get notified on Azure incidents, such as an outage or a maintenance event, that occur in a specific Azure region and may impact services in your subscription.
+* **Resource health events** - get notified when the health of a specific Azure resource you are using is degraded, or if the resource becomes unavailable.
+* **Autoscale events** - get notified on events related to the configured [autoscale operations](../autoscale/autoscale-overview.md) in your subscription. An example of an Autoscale event is Autoscale scale up action failed.
+* **Recommendation** - get notified when a new [Azure Advisor recommendation](../../advisor/advisor-overview.md) is available for your subscription.
+* **Security** - get notified on events generated by Microsoft Defender for Cloud. An example of a Security event is Suspicious double extension file executed.
+* **Policy** - get notified on effect action operations performed by Azure Policy. Examples of Policy events include Audit and Deny.
> [!NOTE]
-> * Alerts **cannot** be created for events in Alert category of activity log.
-> * Activity Log Alerts with the category of Security can be defined also in a [new upgraded flow](../../security-center/continuous-export.md?tabs=azure-portal) to [ServiceNow](../../security-center/export-to-siem.md)
+> Alerts **cannot** be created for events in the Alert category of the activity log.
-Typically, you create activity log alerts to receive notifications when:
-* Specific operations occur on resources in your Azure subscription, often scoped to particular resource groups or resources. For example, you might want to be notified when any virtual machine in myProductionResourceGroup is deleted. Or, you might want to be notified if any new roles are assigned to a user in your subscription.
-* A service health event occurs. Service health events include notification of incidents and maintenance events that apply to resources in your subscription.
+## Configuring activity log alert rules
-A simple analogy for understanding conditions on which alert rules can be created on activity log, is to explore or filter events via [Activity log in Azure portal](../essentials/activity-log.md#view-the-activity-log). In Azure Monitor - Activity log, one can filter or find necessary event and then create an alert by using the **Add activity log alert** button.
+You can configure an activity log alert based on any top-level property in the JSON object for an activity log event. For more information, see [Categories in the Activity Log](../essentials/activity-log.md#view-the-activity-log).
-In either case, an activity log alert monitors only for events in the subscription in which the alert is created.
+A simpler alternative way to create conditions for activity log alert rules is to explore or filter events via [Activity log in Azure portal](../essentials/activity-log.md#view-the-activity-log). In Azure Monitor - Activity log, you can filter and locate the required event and then create an alert rule that notifies on similar events by using the **New alert rule** button.
-You can configure an activity log alert based on any top-level property in the JSON object for an activity log event. For more information, see [Categories in the Activity Log](../essentials/activity-log.md#view-the-activity-log). To learn more about service health events, see [Receive activity log alerts on service notifications](../../service-health/alerts-activity-log-service-notifications-portal.md).
+> [!NOTE]
+> An activity log alert rule monitors only for events in the subscription in which the alert rule is created.
-Activity log alerts have a few common options:
+Activity log events have a few common properties that can be used to define the activity log alert rule condition:
-- **Category**: Administrative, Service Health, Autoscale, Security, Policy, and Recommendation.
+- **Category**: Administrative, Service Health, Resource Health, Autoscale, Security, Policy, or Recommendation.
- **Scope**: The individual resource or set of resource(s) for which the alert on activity log is defined. Scope for an activity log alert can be defined at various levels: - Resource Level: For example, for a specific virtual machine - Resource Group Level: For example, all virtual machines in a specific resource group
Activity log alerts have a few common options:
- **Operation name**: The [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md) name utilized for Azure role-based access control . Operations not registered with Azure Resource Manager can not be used in an activity log alert rule. - **Level**: The severity level of the event (Informational, Warning, Error, or Critical). - **Status**: The status of the event, typically Started, Failed, or Succeeded.-- **Event initiated by**: Also known as the "caller." The email address or Azure Active Directory identifier of the user who performed the operation.
+- **Event initiated by**: Also known as the "caller." The email address or Azure Active Directory identifier of the user (or application) who performed the operation.
-> [!NOTE]
-> In a subscription up to 100 alert rules can be created for an activity of scope at either: a single resource, all resources in resource group (or) entire subscription level.
+In addition to these common properties, different activity log event categories have category-specific properties that can be used to define an alert rule for events of that category. For example, when creating a service health alert rule, you can configure a condition on the impacted region name or service name that appears in the event.
+
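+The common properties listed above combine into the alert rule condition. The following is an illustrative sketch only (a TypeScript shape, not an Azure resource schema) of the fields such a condition typically matches on; the property names and allowed values are assumptions drawn from the list above.
+
+```typescript
+// Illustrative sketch: common activity log event properties an alert rule condition can match on.
+interface ActivityLogAlertCondition {
+    category: "Administrative" | "ServiceHealth" | "ResourceHealth" | "Autoscale" | "Security" | "Policy" | "Recommendation";
+    scope: string;          // resource ID, resource group ID, or subscription ID
+    operationName?: string; // a resource provider operation, for example a VM restart action
+    level?: "Informational" | "Warning" | "Error" | "Critical";
+    status?: "Started" | "Succeeded" | "Failed";
+    caller?: string;        // email address or Azure AD identifier of the user or application
+}
+```
+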
+## Using action groups
-When an activity log alert is activated, it uses an action group to generate actions or notifications. An action group is a reusable set of notification receivers, such as email addresses, webhook URLs, or SMS phone numbers. The receivers can be referenced from multiple alerts to centralize and group your notification channels. When you define your activity log alert, you have two options. You can:
+When an activity log alert is fired, it uses an action group to generate actions or notifications. An action group is a reusable set of notification receivers, such as email addresses, webhook URLs, or SMS phone numbers. The receivers can be referenced from multiple alerts to centralize and group your notification channels. When you define your activity log alert rule, you have two options. You can:
-* Use an existing action group in your activity log alert.
+* Use an existing action group in your activity log alert rule.
* Create a new action group. To learn more about action groups, see [Create and manage action groups in the Azure portal](./action-groups.md).
+## Activity log alert rules limit
+You can create up to 100 active activity log alert rules per subscription (including alert rules for all activity log categories, such as resource health or service health). This limit can't be increased.
+If you're approaching this limit, there are several guidelines you can follow to optimize the use of activity log alert rules so that you can cover more resources and events with the same number of rules:
+* A single activity log alert rule can be configured to cover the scope of a single resource, a resource group, or an entire subscription. To reduce the number of rules you're using, consider replacing multiple rules covering a narrow scope with a single rule covering a broad scope. For example, if you have multiple VMs in a subscription, and you want an alert to be triggered whenever one of them is restarted, you can use a single activity log alert rule to cover all the VMs in your subscription. The alert will be triggered whenever any VM in the subscription is restarted.
+* A single service health alert rule can cover all the services and Azure regions used by your subscription. If you're using multiple service health alert rules per subscription, you can replace them with a single rule (or with a small number of rules, if you prefer).
+* A single resource health alert rule can cover multiple resource types and resources in your subscription. If you're using multiple resource health alert rules per subscription, you can replace them with a smaller number of rules (or even a single rule) that covers multiple resource types.
+ ## Next steps - Get an [overview of alerts](./alerts-overview.md). - Learn about [create and modify activity log alerts](alerts-activity-log.md). - Review the [activity log alert webhook schema](../alerts/activity-log-alerts-webhook.md).-- Learn about [service health notifications](../../service-health/service-notifications.md).
+- Learn more about [service health alerts](../../service-health/service-notifications.md).
+- Learn more about [Resource health alerts](../../service-health/resource-health-alert-monitor-guide.md).
+- Learn more about [Recommendation alerts](../../advisor/advisor-alerts-portal.md).
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
3. Set up the connection string.
- Although you can provide the connection string as an argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
+ Although you can provide a connection string as part of the `ApplicationInsightsServiceOptions` argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
```json {
azure-monitor Snapshot Collector Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-collector-release-notes.md
For bug reports and feedback, open an issue on GitHub at https://github.com/micr
## Release notes
+## [1.4.3](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.3)
+A point release to address user-reported bugs.
+### Bug fixes
+- Fix [Hide the IDMS dependency from dependency tracker.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/17)
+- Fix [ArgumentException: telemetryProcessorType does not implement ITelemetryProcessor.](https://github.com/microsoft/ApplicationInsights-SnapshotCollector/issues/19)
+<br>Snapshot Collector used via the SDK is not supported when the Interop feature is enabled. [See more unsupported scenarios.](https://docs.microsoft.com/azure/azure-monitor/app/snapshot-debugger-troubleshoot#not-supported-scenarios)
+ ## [1.4.2](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.2) A point release to address a user-reported bug. ### Bug fixes
azure-monitor Azure Cli Metrics Alert Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-cli-metrics-alert-sample.md
description: Learn how to create metric alerts in Azure Monitor with Azure CLI c
Previously updated : 08/06/2021 Last updated : 04/05/2022
azure-monitor Azure Monitor Operations Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-operations-manager.md
description: Guidance for existing users of Operations Manager to transition mon
Previously updated : 01/11/2021 Last updated : 04/05/2022
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
na Previously updated : 03/26/2019 Last updated : 04/05/2022
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
description: Reference of all services and other resources monitored by Azure Mo
Previously updated : 02/10/2021 Last updated : 04/05/2022
azure-monitor Resource Manager Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-manager-samples.md
Previously updated : 05/18/2020 Last updated : 04/05/2022 # Resource Manager template samples for Azure Monitor
azure-monitor Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/service-limits.md
description: Lists limits in different areas of Azure Monitor.
Previously updated : 06/10/2019 Last updated : 04/05/2022
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 03/31/2022 Last updated : 04/06/2022 # SMB FAQs for Azure NetApp Files
No. However, you can create a new SMB volume with the new share name from a snap
Alternatively, you can use [Windows Server DFS Namespace](/windows-server/storage/dfs-namespaces/dfs-overview) where a DFS Namespace with the new share name can point to the Azure NetApp Files SMB volume with the old share name.
+## Does Azure NetApp Files support SMB change notification and file locking?
+
+Yes.
+
+Azure NetApp Files supports the [`CHANGE_NOTIFY` response](/openspecs/windows_protocols/ms-smb2/14f9d050-27b2-49df-b009-54e08e8bf7b5). This response is sent in reply to the client's [`CHANGE_NOTIFY` request](/openspecs/windows_protocols/ms-smb2/598f395a-e7a2-4cc8-afb3-ccb30dd2df7c).
+
+Azure NetApp Files also supports the [`LOCK` response](/openspecs/windows_protocols/ms-smb2/e215700a-102c-450a-a598-7ec2a99cd82c). This response is sent in reply to the client's [`LOCK` request](/openspecs/windows_protocols/ms-smb2/6178b960-48b6-4999-b589-669f88e9017d).
+ ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
description: Describes the functions to use in a Bicep file for working with arr
Previously updated : 12/08/2021 Last updated : 04/06/2022 # Array functions for Bicep
The output from the preceding example is:
| commonUp | array | [1, 2, 3] | | commonDown | array | [3, 2, 1] |
-## items
-
-`items(object)`
-
-Converts a dictionary object to an array.
-
-Namespace: [sys](bicep-functions.md#namespaces-for-functions).
-
-### Parameters
-
-| Parameter | Required | Type | Description |
-|: |: |: |: |
-| object |Yes |object |The dictionary object to convert to an array. |
-
-### Return value
-
-An array of objects for the converted dictionary. Each object in the array has a `key` property that contains the key value for the dictionary. Each object also has a `value` property that contains the properties for the object.
-
-### Example
-
-The following example converts a dictionary object to an array. For each object in the array, it creates a new object with modified values.
-
-```bicep
-var entities = {
- item002: {
- enabled: false
- displayName: 'Example item 2'
- number: 200
- }
- item001: {
- enabled: true
- displayName: 'Example item 1'
- number: 300
- }
-}
-
-var modifiedListOfEntities = [for entity in items(entities): {
- key: entity.key
- fullName: entity.value.displayName
- itemEnabled: entity.value.enabled
-}]
-
-output modifiedResult array = modifiedListOfEntities
-```
-
-The preceding example returns:
-
-```json
-"modifiedResult": {
- "type": "Array",
- "value": [
- {
- "fullName": "Example item 1",
- "itemEnabled": true,
- "key": "item001"
- },
- {
- "fullName": "Example item 2",
- "itemEnabled": false,
- "key": "item002"
- }
- ]
-}
-```
-
-The following example shows the array that is returned from the items function.
-
-```bicep
-var entities = {
- item002: {
- enabled: false
- displayName: 'Example item 2'
- number: 200
- }
- item001: {
- enabled: true
- displayName: 'Example item 1'
- number: 300
- }
-}
-
-var entitiesArray = items(entities)
-
-output itemsResult array = entitiesArray
-```
-
-The example returns:
-
-```json
-"itemsResult": {
- "type": "Array",
- "value": [
- {
- "key": "item001",
- "value": {
- "displayName": "Example item 1",
- "enabled": true,
- "number": 300
- }
- },
- {
- "key": "item002",
- "value": {
- "displayName": "Example item 2",
- "enabled": false,
- "number": 200
- }
- }
- ]
-}
-```
-
-The items() function sorts the objects in the alphabetical order. For example, **item001** appears before **item002** in the outputs of the two preceding samples.
- ## last `last(arg1)`
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-object.md
description: Describes the functions to use in a Bicep file for working with obj
Previously updated : 09/30/2021 Last updated : 04/06/2022 # Object functions for Bicep
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "three": "c"} | | arrayOutput | Array | ["two", "three"] |
+## items
+
+`items(object)`
+
+Converts a dictionary object to an array.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| object |Yes |object |The dictionary object to convert to an array. |
+
+### Return value
+
+An array of objects for the converted dictionary. Each object in the array has a `key` property that contains the key value for the dictionary. Each object also has a `value` property that contains the properties for the object.
+
+### Example
+
+The following example converts a dictionary object to an array. For each object in the array, it creates a new object with modified values.
+
+```bicep
+var entities = {
+ item002: {
+ enabled: false
+ displayName: 'Example item 2'
+ number: 200
+ }
+ item001: {
+ enabled: true
+ displayName: 'Example item 1'
+ number: 300
+ }
+}
+
+var modifiedListOfEntities = [for entity in items(entities): {
+ key: entity.key
+ fullName: entity.value.displayName
+ itemEnabled: entity.value.enabled
+}]
+
+output modifiedResult array = modifiedListOfEntities
+```
+
+The preceding example returns:
+
+```json
+"modifiedResult": {
+ "type": "Array",
+ "value": [
+ {
+ "fullName": "Example item 1",
+ "itemEnabled": true,
+ "key": "item001"
+ },
+ {
+ "fullName": "Example item 2",
+ "itemEnabled": false,
+ "key": "item002"
+ }
+ ]
+}
+```
+
+The following example shows the array that is returned from the items function.
+
+```bicep
+var entities = {
+ item002: {
+ enabled: false
+ displayName: 'Example item 2'
+ number: 200
+ }
+ item001: {
+ enabled: true
+ displayName: 'Example item 1'
+ number: 300
+ }
+}
+
+var entitiesArray = items(entities)
+
+output itemsResult array = entitiesArray
+```
+
+The example returns:
+
+```json
+"itemsResult": {
+ "type": "Array",
+ "value": [
+ {
+ "key": "item001",
+ "value": {
+ "displayName": "Example item 1",
+ "enabled": true,
+ "number": 300
+ }
+ },
+ {
+ "key": "item002",
+ "value": {
+ "displayName": "Example item 2",
+ "enabled": false,
+ "number": 200
+ }
+ }
+ ]
+}
+```
+
+The items() function sorts the objects in alphabetical order. For example, **item001** appears before **item002** in the outputs of the two preceding samples.
+ <a id="json"></a> ## json
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 10/15/2021 Last updated : 04/06/2022 # Bicep functions
All Bicep functions are contained within two namespaces - `az` and `sys`. Typica
// Parameter contains the same name as a function param range int
-// Must use sys namespace to call the function.
+// Must use sys namespace to call the function.
// The second use of range refers to the parameter. output result array = sys.range(1, range) ```
The following functions are available for working with arrays. All of these func
* [empty](./bicep-functions-array.md#empty) * [first](./bicep-functions-array.md#first) * [intersection](./bicep-functions-array.md#intersection)
-* [items](./bicep-functions-array.md#items)
* [last](./bicep-functions-array.md#last) * [length](./bicep-functions-array.md#length) * [min](./bicep-functions-array.md#min)
The following functions are available for working with objects. All of these fun
* [contains](./bicep-functions-object.md#contains) * [empty](./bicep-functions-object.md#empty) * [intersection](./bicep-functions-object.md#intersection)
+* [items](./bicep-functions-object.md#items)
* [json](./bicep-functions-object.md#json) * [length](./bicep-functions-object.md#length) * [union](./bicep-functions-object.md#union)
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Loops can be declared by:
}] ``` -- Using **items in a dictionary object**. This option works when your scenario is: "I want to create an instance for each item in an object." The [items function](bicep-functions-array.md#items) converts the object to an array. Within the loop, you can use properties from the object to create values. For more information, see [Dictionary object](#dictionary-object).
+- Using **items in a dictionary object**. This option works when your scenario is: "I want to create an instance for each item in an object." The [items function](bicep-functions-object.md#items) converts the object to an array. Within the loop, you can use properties from the object to create values. For more information, see [Dictionary object](#dictionary-object).
```bicep [for <item> in items(<object>): {
output deployedNSGs array = [for (name, i) in orgNames: {
## Dictionary object
-To iterate over elements in a dictionary object, use the [items function](bicep-functions-array.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
+To iterate over elements in a dictionary object, use the [items function](bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
```bicep param nsgValues object = {
azure-resource-manager Quickstart Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-loops.md
After deploying the preceding sample, you create two storage accounts that are s
## Use dictionary object
-To iterate over elements in a dictionary object, use the [items function](./bicep-functions-array.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
+To iterate over elements in a dictionary object, use the [items function](./bicep-functions-object.md#items), which converts the object to an array. Use the `value` property to get properties on the objects.
:::code language="bicep" source="~/azure-docs-bicep-samples/samples/loops-quickstart/loopObject.bicep" highlight="3-12,14,15,18":::
azure-resource-manager Microsoft Common Textbox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-textbox.md
The following example uses a text box with the [Microsoft.Solutions.ArmApiContro
```json "basics": [
- {
- "name": "nameApi",
- "type": "Microsoft.Solutions.ArmApiControl",
- "request": {
- "method": "POST",
- "path": "[concat(subscription().id,ΓÇ»'/providers/Microsoft.Storage/checkNameAvailability?api-version=2019-06-01')]",
- "body": "[parse(concat('{\"name\": \"', basics('txtStorageName'), '\", \"type\": \"Microsoft.Storage/storageAccounts\"}'))]"
- }
- },
- {
- "name": "txtStorageName",
- "type": "Microsoft.Common.TextBox",
- "label": "Storage account name",
- "constraints": {
- "validations": [
- {
- "isValid": "[not(equals(basics('nameApi').nameAvailable, false))]",
- "message": "[concat('Name unavailable: ', basics('txtStorageName'))]"
+ {
+ "name": "nameApi",
+ "type": "Microsoft.Solutions.ArmApiControl",
+ "request": {
+ "method": "POST",
+ "path": "[concat(subscription().id, '/providers/Microsoft.Storage/checkNameAvailability?api-version=2021-04-01')]",
+ "body": {
+ "name": "[basics('txtStorageName')]",
+ "type": "Microsoft.Storage/storageAccounts"
+ }
+ }
+ },
+ {
+ "name": "txtStorageName",
+ "type": "Microsoft.Common.TextBox",
+ "label": "Storage account name",
+ "constraints": {
+ "validations": [
+ {
+ "isValid": "[basics('nameApi').nameAvailable]",
+ "message": "[basics('nameApi').message]"
+ }
+ ]
}
- ]
}
- }
] ```
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | privatednszones / virtualnetworklinks | Yes | Yes | No | > | privatednszonesinternal | No | No | No | > | privateendpointredirectmaps | No | No | No |
-> | privateendpoints | No | No | No |
+> | privateendpoints | Yes | Yes | Yes |
> | privatelinkservices | No | No | No | > | publicipaddresses | Yes - Basic SKU<br>Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). | > | publicipprefixes | Yes | Yes | No |
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auditing-overview.md
You can use SQL Database auditing to:
- **Hierarchical namespace** for **Azure Data Lake Storage Gen2 storage account** is currently **not supported**. - Enabling auditing on a paused **Azure Synapse** is not supported. To enable auditing, resume Azure Synapse. - Auditing for **Azure Synapse SQL pools** supports default audit action groups **only**.
+- When you configure auditing for Azure SQL Server or Azure SQL Database with a storage account as the log destination, the target storage account must allow access by using storage account keys. If the storage account is configured to use only Azure AD authentication and access keys aren't enabled, auditing can't be configured. <!-- REST API reference: - https://docs.microsoft.com/rest/api/sql/2021-08-01-preview/server-blob-auditing-policies/create-or-update -->
#### <a id="server-vs-database-level"></a>Define server-level vs. database-level auditing policy
azure-sql Authentication Aad Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/authentication-aad-configure.md
Your SQL Managed Instance needs permissions to read Azure AD to successfully acc
To grant your SQL Managed Instance Azure AD read permission using the Azure portal, log in as Global Administrator in Azure AD and follow these steps:
-1. In the [Azure portal](https://portal.azure.com), in the upper-right corner, select your connection from a drop-down list of possible Active Directories.
+1. In the [Azure portal](https://portal.azure.com), in the upper-right corner, select your account, and then choose **Switch directories** to confirm which Active Directory is currently active. Switch directories, if necessary.
+
+ :::image type="content" source="media/authentication-aad-configure/switch-directory.png" alt-text="Screenshot of the Azure portal showing where to switch your directory":::
2. Choose the correct Active Directory as the default Azure AD.
To grant your SQL Managed Instance Azure AD read permission using the Azure port
3. Navigate to the SQL Managed Instance you want to use for Azure AD integration.
- ![Screenshot of the Azure portal showing the Active Directory admin page open for the selected SQL managed instance.](./media/authentication-aad-configure/aad.png)
+ :::image type="content" source="./media/authentication-aad-configure/aad.png" alt-text="Screenshot of the Azure portal showing the Active Directory admin page open for the selected SQL managed instance.":::
4. Select the banner on top of the Active Directory admin page and grant permission to the current user.
- ![Screenshot of the dialog for granting permissions to a SQL managed instance for accessing Active Directory. The Grant permissions button is selected.](./media/authentication-aad-configure/grant-permissions.png)
+ :::image type="content" source="./media/authentication-aad-configure/grant-permissions.png" alt-text="Screenshot of the dialog for granting permissions to a SQL managed instance for accessing Active Directory. The Grant permissions button is selected.":::
5. After the operation succeeds, the following notification will show up in the top-right corner:
- ![Screenshot of a notification confirming that active directory read permissions have been successfully updated for the managed instance.](./media/authentication-aad-configure/success.png)
+ :::image type="content" source="./media/authentication-aad-configure/success.png" alt-text="Screenshot of a notification confirming that active directory read permissions have been successfully updated for the managed instance.":::
6. Now you can choose your Azure AD admin for your SQL Managed Instance. For that, on the Active Directory admin page, select **Set admin** command.
- ![Screenshot showing the Set admin command highlighted on the Active Directory admin page for the selected SQL managed instance.](./media/authentication-aad-configure/set-admin.png)
+ :::image type="content" source="./media/authentication-aad-configure/set-admin.png" alt-text="Screenshot showing the Set admin command highlighted on the Active Directory admin page for the selected SQL managed instance.":::
7. On the Azure AD admin page, search for a user, select the user or group to be an administrator, and then select **Select**. The Active Directory admin page shows all members and groups of your Active Directory. Users or groups that are grayed out can't be selected because they aren't supported as Azure AD administrators. See the list of supported admins in [Azure AD Features and Limitations](authentication-aad-overview.md#azure-ad-features-and-limitations). Azure role-based access control (Azure RBAC) applies only to the Azure portal and isn't propagated to SQL Database, SQL Managed Instance, or Azure Synapse.
- ![Add Azure Active Directory admin](./media/authentication-aad-configure/add-azure-active-directory-admin.png)
+ :::image type="content" source="./media/authentication-aad-configure/add-azure-active-directory-admin.png" alt-text="Add Azure Active Directory admin":::
8. At the top of the Active Directory admin page, select **Save**.
- ![Screenshot of the Active Directory admin page with the Save button in the top row next to the Set admin and Remove admin buttons.](./media/authentication-aad-configure/save.png)
+ :::image type="content" source="./media/authentication-aad-configure/save.png" alt-text="Screenshot of the Active Directory admin page with the Save button in the top row next to the Set admin and Remove admin buttons.":::
The process of changing the administrator may take several minutes. Then the new administrator appears in the Active Directory admin box.
The following two procedures show you how to provision an Azure Active Directory
2. Search for and select **SQL server**.
- ![Search for and select SQL servers](./media/authentication-aad-configure/search-for-and-select-sql-servers.png)
+ :::image type="content" source="./media/authentication-aad-configure/search-for-and-select-sql-servers.png" alt-text="Search for and select SQL servers":::
>[!NOTE] > On this page, before you select **SQL servers**, you can select the **star** next to the name to *favorite* the category and add **SQL servers** to the left navigation bar.
The following two procedures show you how to provision an Azure Active Directory
4. In the **Active Directory admin** page, select **Set admin**.
- ![SQL servers set Active Directory admin](./media/authentication-aad-configure/sql-servers-set-active-directory-admin.png)
+ :::image type="content" source="./media/authentication-aad-configure/sql-servers-set-active-directory-admin.png" alt-text="SQL servers set Active Directory admin":::
5. In the **Add admin** page, search for a user, select the user or group to be an administrator, and then select **Select**. The Active Directory admin page shows all members and groups of your Active Directory. Users or groups that are grayed out cannot be selected because they are not supported as Azure AD administrators. See the list of supported admins in the **Azure AD Features and Limitations** section of [Use Azure Active Directory Authentication for authentication with SQL Database or Azure Synapse](authentication-aad-overview.md). Azure role-based access control (Azure RBAC) applies only to the portal and is not propagated to SQL Server.
- ![Select Azure Active Directory admin](./media/authentication-aad-configure/select-azure-active-directory-admin.png)
+ :::image type="content" source="./media/authentication-aad-configure/select-azure-active-directory-admin.png" alt-text="Select Azure Active Directory admin":::
6. At the top of the **Active Directory admin** page, select **Save**.
- ![save admin](./media/authentication-aad-configure/save-admin.png)
+ :::image type="content" source="./media/authentication-aad-configure/save-admin.png" alt-text="save admin":::
The process of changing the administrator may take several minutes. Then the new administrator appears in the **Active Directory admin** box.
azure-sql Dynamic Data Masking Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/dynamic-data-masking-configure-portal.md
ms.devlang: --++ Previously updated : 04/28/2020 Last updated : 04/05/2022 # Get started with SQL Database dynamic data masking with the Azure portal [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
azure-sql Dynamic Data Masking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/dynamic-data-masking-overview.md
ms.devlang: --++ Previously updated : 09/12/2021 Last updated : 04/05/2022 tags: azure-synpase # Dynamic data masking
Write:
To learn more about permissions when using dynamic data masking with T-SQL command, see [Permissions](/sql/relational-databases/security/dynamic-data-masking#permissions)
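For instance, adding, replacing, or removing the mask on an existing column requires the `ALTER ANY MASK` permission together with `ALTER` permission on the table. The following is a minimal sketch with a hypothetical user and the default `dbo` schema; it isn't required for the walkthrough that follows.

```sql
-- Hypothetical user who manages masks but has no UNMASK permission
CREATE USER MaskingAdmin WITHOUT LOGIN;
GO

-- ALTER ANY MASK allows adding, replacing, or removing column masks;
-- ALTER on the schema covers ALTER on the tables it contains.
GRANT ALTER ANY MASK TO MaskingAdmin;
GRANT ALTER ON SCHEMA::dbo TO MaskingAdmin;
```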
+## Granular permission example
+
+Prevent unauthorized access to sensitive data and gain control by masking it from unauthorized users at different levels of the database. You can grant or revoke the UNMASK permission to a user at the database level, schema level, table level, or column level. Using the UNMASK permission provides a more granular way to control and limit unauthorized access to data stored in the database and improves data security management.
+
+1. Create schema to contain user tables
+
+ ```sql
+ CREATE SCHEMA Data;
+ GO
+ ```
+
+1. Create table with masked columns
+
+ ```sql
+ CREATE TABLE Data.Membership (
+ MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
+ FirstName varchar(100) MASKED WITH (FUNCTION = 'partial(1, "xxxxx", 1)') NULL,
+ LastName varchar(100) NOT NULL,
+ Phone varchar(12) MASKED WITH (FUNCTION = 'default()') NULL,
+ Email varchar(100) MASKED WITH (FUNCTION = 'email()') NOT NULL,
+ DiscountCode smallint MASKED WITH (FUNCTION = 'random(1, 100)') NULL,
+ BirthDay datetime MASKED WITH (FUNCTION = 'default()') NULL
+ );
+ ```
+
+1. Insert sample data
+
+ ```sql
+ INSERT INTO Data.Membership (FirstName, LastName, Phone, Email, DiscountCode, BirthDay)
+ VALUES
+ ('Roberto', 'Tamburello', '555.123.4567', 'RTamburello@contoso.com', 10, '1985-01-25 03:25:05'),
+ ('Janice', 'Galvin', '555.123.4568', 'JGalvin@contoso.com.co', 5,'1990-05-14 11:30:00'),
+ ('Shakti', 'Menon', '555.123.4570', 'SMenon@contoso.net', 50,'2004-02-29 14:20:10'),
+ ('Zheng', 'Mu', '555.123.4569', 'ZMu@contoso.net', 40,'1990-03-01 06:00:00');
+ ```
+
+1. Create schema to contain service tables
+
+ ```sql
+ CREATE SCHEMA Service;
+ GO
+ ```
+
+1. Create service table with masked columns
+
+ ```sql
+ CREATE TABLE Service.Feedback (
+ MemberID int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
+ Feedback varchar(100) MASKED WITH (FUNCTION = 'default()') NULL,
+ Rating int MASKED WITH (FUNCTION='default()'),
+ Received_On datetime
+ );
+ ```
+
+1. Insert sample data
+
+ ```sql
+ INSERT INTO Service.Feedback(Feedback,Rating,Received_On)
+ VALUES
+ ('Good',4,'2022-01-25 11:25:05'),
+ ('Excellent', 5, '2021-12-22 08:10:07'),
+ ('Average', 3, '2021-09-15 09:00:00');
+ ```
+
+1. Create different users in the database
+
+ ```sql
+ CREATE USER ServiceAttendant WITHOUT LOGIN;
+ GO
+
+ CREATE USER ServiceLead WITHOUT LOGIN;
+ GO
+
+ CREATE USER ServiceManager WITHOUT LOGIN;
+ GO
+
+ CREATE USER ServiceHead WITHOUT LOGIN;
+ GO
+ ```
+
+1. Grant read permissions to the users in the database
+
+ ```sql
+ ALTER ROLE db_datareader ADD MEMBER ServiceAttendant;
+
+ ALTER ROLE db_datareader ADD MEMBER ServiceLead;
+
+ ALTER ROLE db_datareader ADD MEMBER ServiceManager;
+
+ ALTER ROLE db_datareader ADD MEMBER ServiceHead;
+ ```
+
+1. Grant different UNMASK permissions to users
+
+ ```sql
+ --Grant column level UNMASK permission to ServiceAttendant
+ GRANT UNMASK ON Data.Membership(FirstName) TO ServiceAttendant;
+
+ -- Grant table level UNMASK permission to ServiceLead
+ GRANT UNMASK ON Data.Membership TO ServiceLead;
+
+ -- Grant schema level UNMASK permission to ServiceManager
+ GRANT UNMASK ON SCHEMA::Data TO ServiceManager;
+ GRANT UNMASK ON SCHEMA::Service TO ServiceManager;
+
+ --Grant database level UNMASK permission to ServiceHead;
+ GRANT UNMASK TO ServiceHead;
+ ```
+
+1. Query the data under the context of user `ServiceAttendant`
+
+ ```sql
+ EXECUTE AS USER='ServiceAttendant';
+ SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
+ SELECT MemberID,Feedback,Rating FROM Service.Feedback;
+ REVERT;
+ ```
+
+1. Query the data under the context of user `ServiceLead`
+
+ ```sql
+ EXECUTE AS USER='ServiceLead';
+ SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
+ SELECT MemberID,Feedback,Rating FROM Service.Feedback;
+ REVERT;
+ ```
+
+1. Query the data under the context of user `ServiceManager`
+
+ ```sql
+ EXECUTE AS USER='ServiceManager';
+ SELECT MemberID,FirstName,LastName,Phone,Email FROM Data.Membership;
+ SELECT MemberID,Feedback,Rating FROM Service.Feedback;
+ REVERT;
+ ```
+
+1. Query the data under the context of user `ServiceHead`
+
+ ```sql
+ EXECUTE AS USER='ServiceHead';
+ SELECT MemberID,FirstName,LastName,Phone,Email,BirthDay FROM Data.Membership;
+ SELECT MemberID,Feedback,Rating FROM Service.Feedback;
+ REVERT;
+ ```
+
+
+1. To revoke UNMASK permissions, use the following T-SQL statements:
+
+ ```sql
+ REVOKE UNMASK ON Data.Membership(FirstName) FROM ServiceAttendant;
+
+ REVOKE UNMASK ON Data.Membership FROM ServiceLead;
+
+ REVOKE UNMASK ON SCHEMA::Data FROM ServiceManager;
+
+ REVOKE UNMASK ON SCHEMA::Service FROM ServiceManager;
+
+ REVOKE UNMASK FROM ServiceHead;
+ ```
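1. To review which columns currently have a mask defined, you can query the `sys.masked_columns` catalog view. This is a read-only check and a minimal sketch; it assumes only the tables created earlier in this example.

    ```sql
    -- List every masked column in the database, together with its masking function
    SELECT tbl.name AS table_name,
           c.name AS column_name,
           c.masking_function
    FROM sys.masked_columns AS c
    JOIN sys.tables AS tbl
        ON c.object_id = tbl.object_id;
    ```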
+ ## See also - [Dynamic Data Masking](/sql/relational-databases/security/dynamic-data-masking) for SQL Server.
azure-sql Outbound Firewall Rule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/outbound-firewall-rule-overview.md
Title: Outbound firewall rules (preview)
-description: Overview of the outbound firewall rules feature.
+ Title: Outbound firewall rules
+description: Overview of the outbound firewall rules feature for Azure SQL Database and Azure Synapse Analytics.
Previously updated : 11/10/2021 Last updated : 4/6/2022 # Outbound firewall rules for Azure SQL Database and Azure Synapse Analytics [!INCLUDE[appliesto-sqldb-asa](../includes/appliesto-sqldb-asa-formerly-sqldw.md)]
-Outbound firewall rules limit network traffic from the Azure SQL logical server to a customer defined list of Azure Storage accounts and Azure SQL logical servers. Any attempt to access storage accounts or SQL Databases not in this list is denied. The following Azure SQL Database features support this feature:
+Outbound firewall rules limit network traffic from the Azure SQL [logical server](logical-servers.md) to a customer defined list of Azure Storage accounts and Azure SQL logical servers. Any attempt to access storage accounts or databases not in this list is denied. The following [Azure SQL Database](sql-database-paas-overview.md) features support this feature:
- [Auditing](auditing-overview.md)
- [Vulnerability assessment](sql-vulnerability-assessment.md)
-- [I/E service](database-import-export-azure-services-off.md)
-- OPENROWSET
-- Bulk Insert
+- [Import/Export service](database-import-export-azure-services-off.md)
+- [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql)
+- [Bulk Insert](/sql/t-sql/statements/bulk-insert-transact-sql)
- [Elastic query](elastic-query-overview.md)
> [!IMPORTANT]
> This article applies to both Azure SQL Database and [dedicated SQL pool (formerly SQL DW)](../../synapse-analytics\sql-data-warehouse\sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics. These settings apply to all SQL Database and dedicated SQL pool (formerly SQL DW) databases associated with the server. For simplicity, the term 'database' refers to both databases in Azure SQL Database and Azure Synapse Analytics. Likewise, any reference to 'server' refers to the [logical SQL server](logical-servers.md) that hosts Azure SQL Database and dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. This article does *not* apply to Azure SQL Managed Instance or dedicated SQL pools in Azure Synapse Analytics workspaces.
> [!IMPORTANT]
-> Outbound firewall rules are defined at the [logical SQL server](logical-servers.md). Geo-replication and Auto-failover groups require the same set of rules to be defined on the primary and all secondarys
+> Outbound firewall rules are defined at the [logical SQL server](logical-servers.md). Geo-replication and Auto-failover groups require the same set of rules to be defined on the primary and all secondaries.
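The bulk operations listed above, such as OPENROWSET and BULK INSERT, are typical of the outbound calls these rules govern. The following T-SQL is a minimal sketch, not a prescribed setup: the storage account URL, container, file name, and the `MyAzureBlobStorage` data source name are all placeholders. With outbound firewall rules enabled, the statement succeeds only if the target storage account is on the server's allowed list.

```sql
-- Hypothetical external data source that points at an allowed storage account.
-- A database scoped credential is also required if the container isn't public.
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://contosostorage.blob.core.windows.net/invoices'
);
GO

-- Outbound call to the storage account: read one blob as a single text value.
SELECT BulkColumn
FROM OPENROWSET(
    BULK 'invoice-2022-04.csv',
    DATA_SOURCE = 'MyAzureBlobStorage',
    SINGLE_CLOB
) AS DataFile;
```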
## Set outbound firewall rules in the Azure portal
az sql server outbound-firewall-rule delete -g sql-server-group -s sql-server-na
## Next steps
-- For an overview of Azure SQL Database security, see [Securing your database](security-overview.md)
-- For an overview of Azure SQL Database connectivity, see [Azure SQL Connectivity Architecture](connectivity-architecture.md)
+- For an overview of Azure SQL Database security, see [Securing your database](security-overview.md).
+- For an overview of Azure SQL Database connectivity, see [Azure SQL Connectivity Architecture](connectivity-architecture.md).
+- Learn more about [Azure SQL Database and Azure Synapse Analytics network access controls](network-access-controls-overview.md).
+- Learn about [Azure Private Link for Azure SQL Database and Azure Synapse Analytics](private-endpoint-overview.md).
<!--Image references--> [1]: media/outbound-firewall-rules/Step1.jpg
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
Title: Attach disk pools to Azure VMware Solution hosts (Preview)
-description: Learn how to attach a disk pool surfaced through an iSCSI target as the VMware datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and attach them to your VMware instance.
+ Title: Attach Azure disk pools to Azure VMware Solution hosts (Preview)
+description: Learn how to attach an Azure disk pool surfaced through an iSCSI target as the VMware vSphere datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and consume them from your Azure VMware Solution private cloud.
Last updated 11/02/2021
-#Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware environment to disk storage for the secondary site.
+#Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware vSphere environment to disk storage for the secondary site.
ms.devlang: azurecli # Attach disk pools to Azure VMware Solution hosts (Preview)
-[Azure disk pools](../virtual-machines/disks-pools.md) offer persistent block storage to applications and workloads backed by Azure Disks. You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. For example, you can scale up by using disk pools instead of scaling clusters if you host storage-intensive workloads. You can also use disks to replicate data from on-premises or primary VMware environments to disk storage for the secondary site. To scale storage independent of the Azure VMware Solution hosts, we support surfacing [ultra disks](../virtual-machines/disks-types.md#ultra-disks), [premium SSD](../virtual-machines/disks-types.md#premium-ssds) and [standard SSD](../virtual-machines/disks-types.md#standard-ssds) as the datastores.
+[Azure disk pools](../virtual-machines/disks-pools.md) offer persistent block storage to applications and workloads backed by Azure Disks. You can use disks as the persistent storage for Azure VMware Solution for optimal cost and performance. For example, you can scale up by using disk pools instead of scaling clusters if you host storage-intensive workloads. You can also use disks to replicate data from on-premises or primary VMware vSphere environments to disk storage for the secondary site. To scale storage independent of the Azure VMware Solution hosts, we support surfacing [ultra disks](../virtual-machines/disks-types.md#ultra-disks), [premium SSD](../virtual-machines/disks-types.md#premium-ssds) and [standard SSD](../virtual-machines/disks-types.md#standard-ssds) as the datastores.
>[!IMPORTANT] >Azure disk pools on Azure VMware Solution (Preview) is currently in public preview.
azure-vmware Backup Azure Vmware Solution Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-vmware-solution-virtual-machines.md
Title: Back up Azure VMware Solution VMs with Azure Backup Server description: Configure your Azure VMware Solution environment to back up virtual machines by using Azure Backup Server. Previously updated : 02/04/2021 Last updated : 04/06/2022 # Back up Azure VMware Solution VMs with Azure Backup Server
This article shows you how to back up VMware virtual machines (VMs) running on A
Then, we'll walk through all of the necessary procedures to: > [!div class="checklist"]
-> * Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
+> * Set up a secure channel so that Azure Backup Server can communicate with VMware vCenter Server over HTTPS.
> * Add the account credentials to Azure Backup Server.
-> * Add the vCenter to Azure Backup Server.
+> * Add the vCenter Server to Azure Backup Server.
> * Set up a protection group that contains the VMware VMs you want to back up, specify backup settings, and schedule the backup. ## Create a secure connection to the vCenter server
-By default, Azure Backup Server communicates with VMware servers over HTTPS. To set up the HTTPS connection, download the VMware certificate authority (CA) certificate and import it on the Azure Backup Server.
+By default, Azure Backup Server communicates with VMware vCenter Server over HTTPS. To set up the HTTPS connection, download the VMware certificate authority (CA) certificate and import it on the Azure Backup Server.
### Set up the certificate
By default, Azure Backup Server communicates with VMware servers over HTTPS. To
:::image type="content" source="../backup/media/backup-azure-backup-server-vmware/cert-wizard-final-screen.png" alt-text="Screenshot showing the Certificate Import Wizard.":::
-1. After the certificate import is confirmed, sign in to the vCenter server to confirm that your connection is secure.
+1. After the certificate import is confirmed, sign in to the vCenter Server to confirm that your connection is secure.
### Enable TLS 1.2 on Azure Backup Server
-VMware 6.7 onwards had TLS enabled as the communication protocol.
+VMware vSphere 6.7 onwards has TLS enabled as the communication protocol.
1. Copy the following registry settings, and paste them into Notepad. Then save the file as TLS.REG without the .txt extension.
VMware 6.7 onwards had TLS enabled as the communication protocol.
:::image type="content" source="../backup/media/backup-azure-backup-server-vmware/new-list-of-mabs-creds.png" alt-text="Screenshot showing the Azure Backup Server Manage Credentials dialog box with new credentials displayed.":::
-## Add the vCenter server to Azure Backup Server
+## Add the vCenter Server to Azure Backup Server
1. In the Azure Backup Server console, select **Management** > **Production Servers** > **Add**.
VMware 6.7 onwards had TLS enabled as the communication protocol.
:::image type="content" source="../backup/media/backup-azure-backup-server-vmware/production-server-add-wizard.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the VMware Servers option selected.":::
-1. Specify the IP address of the vCenter.
+1. Specify the IP address of the vCenter Server.
- :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png" alt-text="Screenshot showing the Production Server Addition Wizard showing how to add a VMware vCenter or ESXi host server and its credentials.":::
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png" alt-text="Screenshot showing the Production Server Addition Wizard showing how to add a VMware vCenter Server or ESXi host server and its credentials.":::
-1. In the **SSL Port** box, enter the port used to communicate with the vCenter.
+1. In the **SSL Port** box, enter the port used to communicate with the vCenter Server.
> [!TIP]
- > Port 443 is the default port, but you can change it if your vCenter listens on a different port.
+ > Port 443 is the default port, but you can change it if your vCenter Server listens on a different port.
1. In the **Specify Credential** box, select the credential that you created in the previous section.
-1. Select **Add** to add the vCenter to the servers list, and select **Next**.
+1. Select **Add** to add the vCenter Server to the servers list, and select **Next**.
- :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the VMware server and credentials defined.":::
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the VMware vCenter Server and credentials defined.":::
-1. On the **Summary** page, select **Add** to add the vCenter to Azure Backup Server.
+1. On the **Summary** page, select **Add** to add the vCenter Server to Azure Backup Server.
- The new server gets added immediately. vCenter doesn't need an agent.
+ The new vCenter Server gets added immediately. vCenter Server doesn't need an agent.
- :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/tasks-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware server and credentials defined and the Add button selected.":::
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/tasks-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware vCenter Server and credentials defined and the Add button selected.":::
1. On the **Finish** page, review the settings, and then select **Close**.
- :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/summary-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware server and credentials added.":::
+ :::image type="content" source="../backup/media/backup-azure-backup-server-vmware/summary-screen.png" alt-text="Screenshot showing the Production Server Addition Wizard showing the summary of the VMware vCenter Server and credentials added.":::
- You see the vCenter server listed under **Production Server** with:
+ You see the vCenter Server listed under **Production Server** with:
- Type as **VMware Server** - Agent Status as **OK**
Protection groups gather multiple VMs and apply the same data retention and back
1. On the **Specify Online Retention Policy** page, indicate how long you want to keep the recovery points created from the backups to Azure. - There's no time limit for how long you can keep data in Azure.
- - The only limit is that you can't have more than 9,999 recovery points per protected instance. In this example, the protected instance is the VMware server.
+ - The only limit is that you can't have more than 9,999 recovery points per protected instance. In this example, the protected instance is the VMware vCenter Server.
:::image type="content" source="../backup/media/backup-azure-backup-server-vmware/retention-policy.png" alt-text="Screenshot showing the Create New Protection Group Wizard to specify online retention policy.":::
In the Azure Backup Server Administrator Console, there are two ways to find rec
1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
- :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/recovery-points.png" alt-text="Screenshot showing the available recovery points for VMware server.":::
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/recovery-points.png" alt-text="Screenshot showing the available recovery points for VMware vCenter Server.":::
1. In the **Recovery points for** pane, select a date when a recovery point was taken. For example, calendar dates in bold have available recovery points. Alternately, you can right-click the VM, select **Show all recovery points**, and then select the recovery point from the list.
In the Azure Backup Server Administrator Console, there are two ways to find rec
1. Select **Next** to go to the **Specify Recovery Options** screen. Select **Next** again to go to the **Select Recovery Type** screen. > [!NOTE]
- > VMware workloads don't support enabling network bandwidth throttling.
+ > VMware vSphere workloads don't support enabling network bandwidth throttling.
1. On the **Select Recovery Type** page, either recover to the original instance or a new location.
You can restore individual files from a protected VM recovery point. This featur
1. Using the **Browse** pane, browse or filter to find the VM you want to recover. After you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
- :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk.png" alt-text="Screenshot showing the recovery points for VMware server.":::
+ :::image type="content" source="../backup/media/restore-azure-backup-server-vmware/vmware-rp-disk.png" alt-text="Screenshot showing the recovery points for VMware vCenter Server.":::
1. In the **Recovery points for** pane, use the calendar to select the date of the recovery point you want. Depending on how the backup policy was configured, dates can have more than one recovery point.
azure-vmware Configure Alerts For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-alerts-for-azure-vmware-solution.md
The following metrics are visible through Azure Monitor Metrics.
1. Under **Condition**, select **Add condition**, and in the window that opens, select the signal you want to create for the alert rule.
- In our example, we've selected **Percentage Datastore Disk Used**, which is relevant from an [Azure VMware Solution SLA](https://aka.ms/avs/sla) perspective.
+ In our example, we've selected **Percentage Datastore Disk Used**, which is relevant from an [Azure VMware Solution SLA](https://azure.microsoft.com/support/legal/sla/azure-vmware/v1_1/) perspective.
:::image type="content" source="media/configure-alerts-for-azure-vmware-solution/configure-signal-logic-options.png" alt-text="Screenshot showing the Configure signal logic window with signals to create for the alert rule.":::
azure-vmware Deploy Traffic Manager Balance Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-traffic-manager-balance-workloads.md
Last updated 02/08/2021
-# Deploy Traffic Manager to balance Azure VMware Solution workloads
+# Deploy Azure Traffic Manager to balance Azure VMware Solution workloads
This article walks through the steps of how to integrate [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) with Azure VMware Solution. The integration balances application workloads across multiple endpoints. This article also walks through the steps of how to configure Traffic Manager to direct traffic between three [Azure Application Gateway](../application-gateway/overview.md) spanning several Azure VMware Solution regions.
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. Previously updated : 02/16/2022 Last updated : 04/06/2022 # Set up Azure Backup Server for Azure VMware Solution
Azure Backup Server contributes to your business continuity and disaster recover
Azure Backup Server can store backup data to:
- **Disk**: For short-term storage, Azure Backup Server backs up data to disk pools.
-- **Azure**: For both short-term and long-term storage off-premises, Azure Backup Server data stored in disk pools can be backed up to the Microsoft Azure cloud by using Azure Backup.
+- **Azure cloud**: For both short-term and long-term storage off-premises, Azure Backup Server data stored in disk pools can be backed up to the Microsoft Azure cloud by using Azure Backup.
Use Azure Backup Server to restore data to the source or an alternate location. That way, if the original data is unavailable because of planned or unexpected issues, you can restore data to an alternate location.
This article helps you prepare your Azure VMware Solution environment to back up
## Supported VMware features
-- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign in credentials used to authenticate the VMware server with Azure Backup Server.
+- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter Server or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign in credentials used to authenticate the VMware vCenter Server with Azure Backup Server.
- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup.
-- **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter or ESXi server. Azure Backup Server also detects VMs managed by vCenter so that you can protect large deployments.
-- **Folder-level auto protection:** vCenter lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
+- **Detect and protect VMs managed by vCenter:** Azure Backup Server detects and protects VMs deployed on a vCenter Server or ESXi hosts. Azure Backup Server also detects VMs managed by vCenter Server so that you can protect large deployments.
+- **Folder-level auto protection:** vCenter Server lets you organize your VMs in VM folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
- **Azure Backup Server continues to protect vMotioned VMs within the cluster:** As VMs are vMotioned for load balancing within the cluster, Azure Backup Server automatically detects and continues VM protection.
- **Recover necessary files faster:** Azure Backup Server can recover files or folders from a Windows VM without recovering the entire VM.
Ensure that you [configure networking for your VMware private cloud in Azure](tu
### Determine the size of the VM
-Use the [MABS Capacity Planner](https://www.microsoft.com/download/details.aspx) to determine the correct VM size. Based on your inputs, the capacity planner will give you the required memory size and CPU core count. Use this information to choose the appropriate Azure VM size. The capacity planner also provides total disk size required for the VM along with the required disk IOPS. We recommend using a standard SSD disk for the VM. By pooling more than one SSD, you can achieve the required IOPS.
+Use the [MABS Capacity Planner](https://www.microsoft.com/en-us/download/details.aspx?id=54301) to determine the correct VM size. Based on your inputs, the capacity planner will give you the required memory size and CPU core count. Use this information to choose the appropriate Azure VM size. The capacity planner also provides total disk size required for the VM along with the required disk IOPS. We recommend using a standard SSD disk for the VM. By pooling more than one SSD, you can achieve the required IOPS.
Follow the instructions in the [Create your first Windows VM in the Azure portal](../virtual-machines/windows/quick-create-portal.md) tutorial. You'll create the VM in the virtual network that you created in the previous step. Start with a gallery image of Windows Server 2019 Datacenter to run the Azure Backup Server.
cdn Cdn Create New Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-new-endpoint.md
ms.assetid: 4ca51224-5423-419b-98cf-89860ef516d2 Previously updated : 04/30/2020 Last updated : 04/06/2022 # Quickstart: Create an Azure CDN profile and endpoint
-In this quickstart, you enable Azure Content Delivery Network (CDN) by creating a new CDN profile, which is a collection of one or more CDN endpoints. After you have created a profile and an endpoint, you can start delivering content to your customers.
+In this quickstart, you enable Azure Content Delivery Network (CDN) by creating a new CDN profile, which is a collection of one or more CDN endpoints. After you've created a profile and an endpoint, you can start delivering content to your customers.
## Prerequisites
After you've created a CDN profile, you use it to create an endpoint.
The time it takes for the endpoint to propagate depends on the pricing tier selected when you created the profile. **Standard Akamai** usually completes within one minute, **Standard Microsoft** in 10 minutes, and **Standard Verizon** and **Premium Verizon** in up to 30 minutes. > [!NOTE]
-> For *Verizon CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Verizon supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make those configuration changes again.
+> For *Verizon CDN endpoints*, when an endpoint is **disabled** or **stopped** for any reason, all resources configured through the Verizon supplemental portal will be cleaned up. These configurations can't be restored automatically by restarting the endpoint. You will need to make the configuration change again.
## Clean up resources
In the preceding steps, you created a CDN profile and an endpoint in a resource
1. From the left-hand menu in the Azure portal, select **Resource groups** and then select **CDNQuickstart-rg**.
-2. On the **Resource group** page, select **Delete resource group**, enter *CDNQuickstart-rg* in the text box, then select **Delete**. This action delete the resource group, profile, and endpoint that you created in this quickstart.
+2. On the **Resource group** page, select **Delete resource group**, enter *CDNQuickstart-rg* in the text box, then select **Delete**. This action deletes the resource group, profile, and endpoint that you created in this quickstart.
## Next steps
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
# What is a content delivery network on Azure?
-A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. CDNs' store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
+A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency.
Azure Content Delivery Network (CDN) offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN POPs. For example, route optimization to bypass Border Gateway Protocol (BGP).
cdn Cdn Standard Rules Engine Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-standard-rules-engine-actions.md
Use this action to rewrite the path of a request that's en route to your origin.
Field | Description |
-Source pattern | Define the source pattern in the URL path to replace. Currently, source pattern uses a prefix-based match. To match all URL paths, use a forward slash (**/**) as the source pattern value.
+Source pattern | Define the source pattern in the URL path to replace. To match all URL paths, use a forward slash (**/**) as the source pattern value.
Destination | Define the destination path to use in the rewrite. The destination path overwrites the source pattern.
Preserve unmatched path | If set to **Yes**, the remaining path after the source pattern is appended to the new destination path.
cdn Cdn Troubleshoot Compression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-troubleshoot-compression.md
If you need more help at any point in this article, you can contact the Azure ex
Compression for your endpoint is enabled, but files are being returned uncompressed. > [!TIP]
-> To check whether your files are being returned compressed, you need to use a tool like [Fiddler](https://www.telerik.com/fiddler) or your browser's [developer tools](https://developer.microsoft.com/microsoft-edge/platform/documentation/f12-devtools-guide/). Check the HTTP response headers returned with your cached CDN content. If there is a header named `Content-Encoding` with a value of **gzip**, **bzip2**, or **deflate**, your content is compressed.
+> To check whether your files are being returned compressed, you need to use a tool like [Fiddler](https://www.telerik.com/fiddler) or your browser's [developer tools](https://developer.microsoft.com/microsoft-edge/platform/documentation/f12-devtools-guide/). Check the HTTP response headers returned with your cached CDN content. If there is a header named `Content-Encoding` with a value of **gzip**, **bzip2**, **brotli**, or **deflate**, your content is compressed.
> > ![Content-Encoding header](./media/cdn-troubleshoot-compression/cdn-content-header.png) >
There are several possible causes, including:
First, we should do a quick sanity check on the request. You can use your browser's [developer tools](https://developer.microsoft.com/microsoft-edge/platform/documentation/f12-devtools-guide/) to view the requests being made.
* Verify the request is being sent to your endpoint URL, `<endpointname>.azureedge.net`, and not your origin.
-* Verify the request contains an **Accept-Encoding** header, and the value for that header contains **gzip**, **deflate**, or **bzip2**.
+* Verify the request contains an **Accept-Encoding** header, and the value for that header contains **gzip**, **deflate**, **brotli**, or **bzip2**.
> [!NOTE] > **Azure CDN from Akamai** profiles only support **gzip** encoding.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
The following tables show the Microsoft Security Response Center (MSRC) updates
| Rel 21-12 OOB | [4578013] | Standalone Security Update | [4.97] | Aug 19, 2020 |
| Rel 21-12 | [5005698] | Servicing Stack update | [5.62] | Sep 14, 2021 |
| Rel 21-12 | [5006749] | Servicing Stack update | [2.117] | July 13, 2021 |
-| Rel 21-12 | [5008287] | Servicing Stack update | [6.38] | Aug 10, 2021 |
| Rel 21-12 | [4494175] | Microcode | [5.62] | Sep 1, 2020 |
| Rel 21-12 | [4494174] | Microcode | [6.38] | Sep 1, 2020 |
The following tables show the Microsoft Security Response Center (MSRC) updates
[4578013]: https://support.microsoft.com/kb/4578013
[5005698]: https://support.microsoft.com/kb/5005698
[5006749]: https://support.microsoft.com/kb/5006749
-[5008287]: https://support.microsoft.com/kb/5008287
[4494175]: https://support.microsoft.com/kb/4494175
[4494174]: https://support.microsoft.com/kb/4494174
[2.117]: ./cloud-services-guestos-update-matrix.md#family-2-releases
cognitive-services How To Speech Synthesis Viseme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-speech-synthesis-viseme.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
# Get facial pose events for lip-sync > [!NOTE]
-> At this time, viseme events are available only for English (US) [neural voices](language-support.md#text-to-speech).
+> At this time, viseme events are available only for [neural voices](language-support.md#text-to-speech).
A _viseme_ is the visual description of a phoneme in spoken language. It defines the position of the face and mouth when a person speaks a word. Each viseme depicts the key facial poses for a specific set of phonemes.
For 3D characters, think of the characters as string puppets. The puppet master
## Map phonemes to visemes
-Visemes vary by language. Each language has a set of visemes that correspond to its specific phonemes. The following table shows the correspondence between International Phonetic Alphabet (IPA) phonemes and viseme IDs for English (US).
-
-| IPA | Example | Viseme ID |
-|--||--|
-| i | **ea**t | 6 |
-| ɪ | **i**f | 6 |
-| eɪ | **a**te | 4 |
-| ɛ | **e**very | 4 |
-|æ | **a**ctive |1|
-|ɑ | **o**bstinate |2|
-|ɔ | c**au**se |3|
-|ʊ | b**oo**k |4|
-|oʊ | **o**ld |8|
-|u | **U**ber |7|
-|ʌ | **u**ncle |1|
-|aɪ | **i**ce |11|
-|aʊ | **ou**t |9|
-|ɔɪ | **oi**l |10|
-|ju | **Yu**ma |[6, 7]|
-|ə | **a**go |1|
-|ɪɹ | **ear**s |[6, 13]|
-|ɛɹ | **air**plane |[4, 13]|
-|ʊɹ | c**ur**e |[4, 13]|
-|aɪ(ə)ɹ | **Ire**land |[11, 13]|
-|aʊ(ə)ɹ | **hour**s |[9, 13]|
-|ɔɹ | **or**ange |[3, 13]|
-|ɑɹ | **ar**tist |[2, 13]|
-|ɝ | **ear**th |[5, 13]|
-|ɚ | all**er**gy |[1, 13]|
-|w | **w**ith, s**ue**de |7|
-|j | **y**ard, f**e**w |6|
-|p | **p**ut |21|
-|b | **b**ig |21|
-|t | **t**alk |19|
-|d | **d**ig |19|
-|k | **c**ut |20|
-|g | **g**o |20|
-|m | **m**at, s**m**ash |21|
-|n | **n**o, s**n**ow |19|
-|ŋ | li**n**k |20|
-|f | **f**ork |18|
-|v | **v**alue |18|
-|θ | **th**in |17|
-|ð | **th**en |17|
-|s | **s**it |15|
-|z | **z**ap |15|
-|ʃ | **sh**e |16|
-|ʒ | **J**acques |16|
-|h | **h**elp |12|
-|tʃ | **ch**in |16|
-|dʒ | **j**oy |16|
-|l | **l**id, g**l**ad |14|
-|ɹ | **r**ed, b**r**ing |13|
+Visemes vary by language and locale. Each locale has a set of visemes that correspond to its specific phonemes. The [SSML phonetic alphabets](speech-ssml-phonetic-sets.md) documentation maps viseme IDs to the corresponding International Phonetic Alphabet (IPA) phonemes.
## Next steps
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/role-based-access-control.md
+
+ Title: Role-based access control for Speech resources - Speech service
+
+description: Learn how to assign access roles for a Speech resource.
++++++ Last updated : 04/03/2022+++
+# Role-based access control for Speech resources
+
+You can manage access and permissions to your Speech resources with Azure role-based access control (Azure RBAC). Assigned roles can vary across Speech resources. For example, you can assign a role to a Speech resource that should only be used to train a Custom Speech model. You can assign another role to a Speech resource that is used to transcribe audio files. Depending on who can access each Speech resource, you can effectively set a different level of access per application or user. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
+
+> [!NOTE]
+> A Speech resource can inherit or be assigned multiple roles. The final level of access to this resource is a combination of the permissions from all assigned roles at the operation level.
+
+## Roles for Speech resources
+
+A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
+
+| Role | Can list resource keys | Access to data, models, and endpoints|
+| | | |
+|**Owner** |Yes |View, create, edit, and delete |
+|**Contributor** |Yes |View, create, edit, and delete |
+|**Cognitive Services Contributor** |Yes |View, create, edit, and delete |
+|**Cognitive Services User** |Yes |View, create, edit, and delete |
+|**Cognitive Services Speech Contributor** |No | View, create, edit, and delete |
+|**Cognitive Services Speech User** |No |View only |
+|**Cognitive Services Data Reader (Preview)** |No |View only |
+
+> [!IMPORTANT]
+> Whether a role can list resource keys is important for [Speech Studio authentication](#speech-studio-authentication). To list resource keys, a role must have permission to run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation. If key authentication is disabled in the Azure portal, then none of the roles can list keys.
+
+Keep the built-in roles if your Speech resource can have full read and write access to the projects.
+
+For finer-grained resource access control, you can [add or remove roles](../../role-based-access-control/role-assignments-portal.md?tabs=current) using the Azure portal. For example, you could create a custom role with permission to upload Custom Speech datasets, but without permission to deploy a Custom Speech model to an endpoint.
+
+## Authentication with keys and tokens
+
+The [roles](#roles-for-speech-resources) define what permissions you have. Authentication is required to use the Speech resource.
+
+To authenticate with Speech resource keys, all you need is the key and region. To authenticate with an Azure AD token, the Speech resource must have a [custom subdomain](speech-services-private-link.md#create-a-custom-domain-name) and use a [private endpoint](speech-services-private-link.md#turn-on-private-endpoints). The Speech service uses custom subdomains with private endpoints only.
+
+### Speech SDK authentication
+
+For the SDK, you configure whether to authenticate with a Speech resource key or Azure AD token. For details, see [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
+
+### Speech Studio authentication
+
+Once you're signed into [Speech Studio](speech-studio-overview.md), you select a subscription and Speech resource. You don't choose whether to authenticate with a Speech resource key or Azure AD token. Speech Studio gets the key or token automatically from the Speech resource. If one of the assigned [roles](#roles-for-speech-resources) has permission to list resource keys, Speech Studio will authenticate with the key. Otherwise, Speech Studio will authenticate with the Azure AD token.
+
+If Speech Studio uses your Azure AD token, but the Speech resource doesn't have a custom subdomain and private endpoint, then you can't use some features in Speech Studio. In this case, for example, the Speech resource can be used to train a Custom Speech model, but you can't use a Custom Speech model to transcribe audio files.
+
+| Authentication credential | Feature availability |
+| | |
+|Speech resource key|Full access limited only by the assigned role permissions.|
+|Azure AD token with custom subdomain and private endpoint|Full access limited only by the assigned role permissions.|
+|Azure AD token without custom subdomain and private endpoint (not recommended)|Features are limited. For example, the Speech resource can be used to train a Custom Speech model or Custom Neural Voice. But you can't use a Custom Speech model or Custom Neural Voice.|
+
+## Next steps
+
+* [Azure Active Directory Authentication with the Speech SDK](how-to-configure-azure-ad-auth.md).
+* [Speech service encryption of data at rest](speech-encryption-of-data-at-rest.md).
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Speech service supports the [International Phonetic Alphabet (IPA)](https://en.w
| `ˌ` | Secondary stress |
| `.` | Syllable boundary |
-For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The seven locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, and zh-TW. For those seven locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
+For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The eight locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, zh-HK, and zh-TW. For those eight locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
See the sections in this article for the phonemes that are specific to each locale.
cognitive-services Speech Studio Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-studio-role-based-access-control.md
- Title: Role-based access control in Speech Studio - Speech service-
-description: Learn how to assign access roles to the Speech service through Speech Studio.
------ Previously updated : 09/07/2021---
-# Azure role-based access control in Speech Studio
-
-Speech Studio supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you can assign different levels of permissions for your Speech Studio operations to different team members. For more information on Azure RBAC, see the [Azure RBAC documentation](../../role-based-access-control/overview.md).
-
-## Prerequisites
-
-* You must be signed into Speech Studio with your Azure account and Speech resource. See the [Speech Studio overview](speech-studio-overview.md).
-
-## Manage role assignments for Speech resources
-
-To grant access to an Azure speech resource, you add a role assignment through the Azure RBAC tool in the Azure portal.
-
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md?tabs=current).
-
-## Supported built-in roles in Speech Studio
-
-A role definition is a collection of permissions. Use the following recommended built-in roles if you don't have any unique custom requirements for permissions:
-
-| **Built-in role** | **Permission to list resource keys** | **Permission for Custom Speech operations** | **Permission for Custom Voice operations**| **Permission for other capabilities** |
-| | | | | --|
-|**Owner** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
-|**Contributor** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
-|**Cognitive Service Contributors** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
-|**Cognitive Service Users** |Yes |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
-|**Cognitive Service Speech Contributor** |No |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access to the projects, including the permission to create, edit, or delete project / data / model / endpoints |Full access |
-|**Cognitive Service Speech User** |No |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Full access |
-|**Cognitive Services Data Reader (preview)** |No |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Can view the projects / datasets / models / endpoints; cannot create, edit, delete |Full access |
-
-Alternatively, you can [create your own custom roles](../../role-based-access-control/custom-roles.md). For example, you could create a custom role with the permission to upload custom speech datasets, but without the ability to deploy a custom speech model to an endpoint.
-
-> [!NOTE]
-> Speech Studio supports key-based authentication. Roles that have permission to list resource keys (`Microsoft.CognitiveServices/accounts/listKeys/action`) will firstly be authenticated with a resource key and will have full access to the Speech Studio operations, as long as key authentication is enabled in Azure portal. If key authentication is disabled by the service admin, then those roles will lose all access to the Studio.
-
-> [!NOTE]
-> One resource could be assigned or inherited with multiple roles, and the final level of access to this resource is a combination of all your roles' permissions from the operation level.
-
-## Next steps
-
-Learn more about [Speech service encryption of data at rest](./speech-encryption-of-data-at-rest.md).
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
Previously updated : 11/02/2021 Last updated : 04/05/2022 # How to tag utterances
-Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance with the entities in your project. Your labels for entities should be consistent across the different utterances.
+Once you have [built a schema](build-schema.md) for your project, you should add training utterances to your project. The utterances should be similar to what your users will use when interacting with the project. When you add an utterance, you have to assign which intent it belongs to. After the utterance is added, label the words within your utterance with the entities in your project. Your labels for entities should be consistent across the different utterances.
Tagging is the process of assigning your utterances to intents, and labeling them with entities. You will want to spend time tagging your utterances - introducing and refining the data that will train the underlying machine learning models for your project. The machine learning models generalize based on the examples you provide. The more examples you provide, the more data points the model has to make better generalizations. > [!NOTE] > An entity's learned components are only defined when you label utterances for that entity. You can also have entities that include _only_ list or prebuilt components without labeling learned components. See the [entity components](../concepts/entity-components.md) article for more information.
-When you enable multiple languages in your project, you must also specify the language of the utterance you are adding. As part of the multilingual capabilities of Conversational Language Understanding, you can train your project in a dominant language, and then predict in the other available languages. Adding examples to other languages increases the model's performance in these languages if you determine it isn't doing well, but avoid duplicating your data across all the languages you would like to support.
+When you enable multiple languages in your project, you must also specify the language of the utterance you're adding. As part of the multilingual capabilities of Conversational Language Understanding, you can train your project in a dominant language, and then predict in the other available languages. Adding examples to other languages increases the model's performance in these languages if you determine it isn't doing well, but avoid duplicating your data across all the languages you would like to support.
For example, to improve a calendar bot's performance with users, a developer might add examples mostly in English, and a few in Spanish or French as well. They might add utterances such as:
For example, to improve a calender bot's performance with users, a developer mig
:::image type="content" source="../media/tag-utterances.png" alt-text="A screenshot of the page for tagging utterances in Language Studio." lightbox="../media/tag-utterances.png":::
-*Section 1* is where you add your utterances. You must select one of the intents from the drop-down list, the language of the utterance (if applicable) and the utterance itself. Press the *Enter* key in the utterance's text box to add the utterance.
+*Section 1* is where you choose which dataset you're viewing. You can add utterances in the training set or testing set.
-*Section 2* includes your project's entities. You can select any of the entities you've added, and then hover over the text to label the entities within your utterances, shown in *section 3*. You can also add new entities here by pressing the **+ Add Entity** button. You can also hide those entity's labels within your utterances.
+Your utterances are split into two sets:
+* Training set: These utterances are used to create your conversational model during training. The training set is processed as part of the training job to produce a trained model.
+* Testing set: These utterances are used to test the performance of your conversational model after it is created. The testing set is also used to produce the model's evaluation.
-*Section 3* includes the utterances you've added. You can drag over the text you want to label and a contextual menu of the entities will appear.
+When adding utterances, you can explicitly add them to a specific set (training or testing). If you do, you need to set your split type on the train model page to **manual split of training and testing data**; otherwise, keep all your utterances in the training set and use **Automatically split the testing set from training data**. See [How to train your model](train-model.md#train-model) for more information.
+*Section 2* is where you add your utterances. You must select one of the intents from the drop-down list, the language of the utterance (if applicable), and the utterance itself. Press the enter key in the utterance's text box to add the utterance.
+
+*Section 3* includes your project's entities and distribution of intents and entities across your training set and testing set.
+
+You can select the highlight button next to any of the entities you've added, and then hover over the text to label the entities within your utterances, shown in *section 4*. You can also add new entities here by clicking the **+ Add Entity** button.
+
+When you select **Distribution**, you can also view the tag distribution across:
+
+* Total instances per tagged entity: The distribution of each of your entities across the training and testing sets.
+
+* Unique utterances per tagged entity: How your entities are distributed among the different utterances you have.
+* Utterances per intent: The distribution of utterances among intents across your training and testing sets.
++
+*Section 4* includes the utterances you've added. You can drag over the text you want to label, and a contextual menu of the entities will appear.
+ > [!NOTE] > Unlike LUIS, you cannot label overlapping entities. The same characters cannot be labeled by more than one entity.
+> List and prebuilt components are not shown in the tag utterances page, and all labels here only apply to the learned component.
## Filter Utterances
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/train-model.md
Previously updated : 11/02/2021 Last updated : 04/05/2022 # Train and evaluate models
-After you have completed [tagging your utterances](./tag-utterances.md), you can train your model. Training is the act of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you have to name your training instance.
+After you have completed [tagging your utterances](tag-utterances.md), you can train your model. Training is the process of converting the current state of your project's training data to build a model that can be used for predictions. Every time you train, you must name your training instance.
You can create and train multiple models within the same project. However, if you re-train a specific model it overwrites the last state.
-The training times can be anywhere from a few seconds up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances. Before training, you will have the option to enable evaluation, which lets you view how your model performs.
+To train a model, you need to start a training job. The output of a successful training job is your trained model. Training jobs are automatically deleted seven days after creation, but the output trained model is not deleted.
-## Train model
+The training times can be anywhere from a few seconds when dealing with orchestration workflow projects, up to a couple of hours when you reach the [maximum limit](../service-limits.md) of utterances. Before training, you will have the option to enable evaluation, which lets you view how your model performs.
-Select **Train model** on the left of the screen. Select **Start a training job** from the top menu.
+## Train model
-Enter a new model name or select an existing model from the **Model Name** dropdown.
+1. Go to your project page in Language Studio.
+2. Select **Train** from the left side menu.
+3. Select **Start a training job** from the top menu.
+4. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting that option and choosing the model you want from the dropdown below.
+
+Choose whether you want to evaluate and measure your model's performance. The **Run evaluation with training** toggle is enabled by default, which means your model's performance will be measured, and you can choose how you want to split your training and testing utterances. You are provided with the two options below:
+
+* **Automatically split the testing set from training data**: Selects a stratified sample from all training utterances according to the percentages that you configure in the text box. The default value is set to 80% for training and 20% for testing. Any utterances already assigned to the testing set will be ignored completely if you choose this option.
+
+* **Use a manual split of training and testing data**: The training and testing utterances that you've provided and assigned during tagging are used to create your custom model and measure its performance. Note that this option is only enabled if you add utterances to the testing set in the tag data page. Otherwise, it will be disabled.
-Select whether you want to evaluate your model by changing the **Run evaluation with training** toggle. If enabled, your tagged utterances will be split into 2 parts; 80% for training, 20% for testing. Afterwards, you'll be able to see the model's evaluation results.
:::image type="content" source="../media/train-model.png" alt-text="A screenshot showing the Train model page for Conversational Language Understanding projects." lightbox="../media/train-model.png":::
-Click the **Train** button and wait for training to complete. You will see the training status of your model in the view model details page. Only successfully completed tasks will generate models.
+Click the **Train** button and wait for training to complete. You will see the training status of your training job. Only successfully completed jobs will generate models.
## Evaluate model
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/quickstart.md
Previously updated : 01/27/2022 Last updated : 04/05/2022 zone_pivot_groups: usage-custom-language-features
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
After you're satisfied with your model, and made any necessary improvements, you
## Prerequisites
-* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
+* [A custom text classification project](create-project.md) with a configured Azure blob storage account,
* Text data that has [been uploaded](create-project.md#prepare-training-data) to your storage account. * [Tagged data](tag-data.md) and successfully [trained model](train-model.md) * Reviewed the [model evaluation details](view-model-evaluation.md) to determine how your model is performing.
To delete a deployment, select the deployment you want to delete and click **Del
:::image type="content" source="../media/get-prediction-url-3.png" alt-text="run-inference-3" lightbox="../media/get-prediction-url-3.png"::: -
+You will need to use the REST API. Click on the **REST API** tab above for more information.
# [Using the API](#tab/rest-api)
First you will need to get your resource key and endpoint
### Submit a custom text classification task
-1. Start constructing a POST request by updating the following URL with your endpoint.
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze`
-2. In the header for the request, add your key to the `Ocp-Apim-Subscription-Key` header.
+### Get the results for a custom text classification task
-3. In the JSON body of your request, you will specify The documents you're inputting for analysis, and the parameters for the custom entity recognition task. `project-name` is case-sensitive.
-
- > [!tip]
- > See the [quickstart article](../quickstart.md?pivots=rest-api#submit-a-custom-text-classification-task) and [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) for more information about the JSON syntax.
-
- ```json
- {
- "displayName": "MyJobName",
- "analysisInput": {
- "documents": [
- {
- "id": "doc1",
- "text": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc tempus, felis sed vehicula lobortis, lectus ligula facilisis quam, quis aliquet lectus diam id erat. Vivamus eu semper tellus. Integer placerat sem vel eros iaculis dictum. Sed vel congue urna."
- },
- {
- "id": "doc2",
- "text": "Mauris dui dui, ultricies vel ligula ultricies, elementum viverra odio. Donec tempor odio nunc, quis fermentum lorem egestas commodo. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos."
- }
- ]
- },
- "tasks": {
- "customMultiClassificationTasks": [
- {
- "parameters": {
- "project-name": "MyProject",
- "deployment-name": "MyDeploymentName"
- "stringIndexType": "TextElements_v8"
- }
- }
- ]
- }
- }
- ```
-
-4. You will receive a 202 response indicating success. In the response headers, extract `operation-location`.
-`operation-location` is formatted like this:
-
- `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/<jobId>`
-
- You will use this endpoint in the next step to get the custom recognition task results.
-
-5. Use the URL from the previous step to create a **GET** request to query the status/results of the custom recognition task. Add your key to the `Ocp-Apim-Subscription-Key` header for the request.
- # [Using the client libraries (Azure SDK)](#tab/client)
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/view-model-evaluation.md
The evaluation process uses the trained model to predict user-defined classes fo
## View the model details using Language Studio
-1. Go to your project page in [Language Studio](https://aka.ms/languageStudio).
-2. Select **View model details** from the left side menu.
-
-3. In this page you can only view the successfully trained models. You can select the model name for more details.
-
-4. You can find the **model-level** evaluation metrics under the **Overview** section and the **class-level** evaluation metrics under the **Class performance metrics** section. See [Evaluation metrics](../concepts/evaluation.md#model-level-and-class-level-evaluation-metrics) for more information.
-
- :::image type="content" source="../media/model-details-2.png" alt-text="Model performance metrics" lightbox="../media/model-details-2.png":::
-
-> [!NOTE]
-> If you don't find all the classes displayed here, it is because there were no tagged files of this class in the test set.
Under the **Test set confusion matrix**, you can find the confusion matrix for the model.
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/faq.md
Previously updated : 11/16/2021 Last updated : 04/05/2022
When you're ready to start [using your model to make predictions](#how-do-i-use-
## What is the recommended CI/CD process?
-You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md)to learn about maximum number of trained models with the same project. When you train a new model your dataset is [split](how-to/train-model.md#data-split) randomly into training and testing sets, so there is no guarantee that the reflected model evaluation is about the same test set, and the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
+You can train multiple models on the same dataset within the same project. After you have trained your model successfully, you can [view its evaluation](how-to/view-model-evaluation.md). You can [deploy and test](quickstart.md#deploy-your-model) your model within [Language studio](https://aka.ms/languageStudio). You can add or remove tags from your data and train a **new** model and test it as well. View [service limits](service-limits.md) to learn about the maximum number of trained models within the same project. When you [train your data](how-to/train-model.md), you can determine how your dataset is split into training and testing sets. You can also have your data split randomly into training and testing sets, in which case there is no guarantee that the model evaluation is performed on the same test set, so the results are not comparable. It's recommended that you develop your own test set and use it to evaluate both models so you can measure improvement.
## Does a low or high model score guarantee bad or good performance in production?
See the [data selection and schema design](how-to/design-schema.md) article for
## Why do I get different results when I retrain my model?
-* When you train a new model your dataset is [split](how-to/train-model.md#data-split) randomly into train and test sets so there is no guarantee that the reflected model evaluation is on the same test set, so results are not comparable.
+* When you [train your model](how-to/train-model.md), you can determine whether you want your data to be split randomly into train and test sets. If you do, there is no guarantee that the model evaluation is performed on the same test set each time, so the results are not comparable.
* If you're retraining the same model, your test set will be the same but you might notice a slight change in predictions made by the model. This is because the trained model is not robust enough and this is a factor of how representative and distinct your data is and the quality of your tagged data.
cognitive-services Improve Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/improve-model.md
Previously updated : 11/02/2021 Last updated : 04/05/2022
See the [application development lifecycle](../overview.md#application-developme
After you have reviewed your [model's evaluation](view-model-evaluation.md), you'll have formed an idea on what's wrong with your model's prediction. > [!NOTE]
-> This guide focuses on data from the [validation set](train-model.md#data-split) that was created during training.
+> This guide focuses on data from the [validation set](train-model.md) that was created during training.
### Review test set
cognitive-services Tag Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/tag-data.md
Previously updated : 11/02/2021 Last updated : 04/05/2022
The precision, consistency and completeness of your tagged data are key factors
3. You can find a list of all `.txt` files available in your projects to the left. You can select the file you want to start tagging or you can use the **Back** and **Next** button from the bottom of the page to navigate.
-4. To start tagging, click **Add entities** in the top-right corner. You can either view all files or only tagged files by changing the view from the **Viewing** drop down.
+4. To start tagging, click **Add entities** in the top-right corner. You can either view all files or only tagged files by changing the view from the **Viewing** drop-down filter.
+
+ :::image type="content" source="../media/tagging-screen.png" alt-text="A screenshot showing the Language Studio screen for tagging data." lightbox="../media/tagging-screen.png":::
+
+ In the image above:
+
+ * *Section 1*: This is where the content of the text file is displayed and tagging takes place. You have [two options for tagging](#tagging-options) your files.
+
+ * *Section 2*: This includes your project's entities and their distribution across your files and tags.
+ If you click **Distribution**, you can view your tag distribution across:
+
+ * Files: View the distribution of files across a single entity.
+ * Tags: View the distribution of tags across all files.
+
+ :::image type="content" source="../media/distribution-ner.png" alt-text="A screenshot showing the distribution section." lightbox="../media/distribution-ner.png":::
+
+
+ * *Section 3*: This is the split project data toggle. You can choose to add a selected text file to your training set or the testing set. By default, the toggle is off, and all text files are added to your training set.
+
+To add a text file to a training or testing set, simply choose from the radio buttons to which set it belongs.
>[!TIP]
-> * There is no standard number of tags you will need, Consider starting with 50 tags per entity. The number of tags you'll need depends on how distinct your entities are, and how easily they can be differentiated from each other. It also depends on your tagging, which should be consistent and complete.
-
+> It is recommended to define your testing set.
If you enabled multiple languages for your project, you will find a **Language** dropdown, which lets you select the language of each document.
cognitive-services Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/train-model.md
The time to train a model varies on the dataset, and may take up to several hour
See the [application development lifecycle](../overview.md#application-development-lifecycle) for more information.
-## Data split
-
-Before starting the training process, files in your dataset are divided into two groups at random:
-
-* The **training set** contains 80% of the files in your dataset. It is the main set that is used to train the model.
-
-* The **test set** contains 20% of the files available in your dataset. This set is used to provide an unbiased [evaluation](../how-to/view-model-evaluation.md) of the model. This set is not introduced to the model during training.
- ## Train model in Language studio 1. Go to your project page in [Language Studio](https://aka.ms/LanguageStudio).
Before starting the training process, files in your dataset are divided into two
4. To train a new model, select **Train a new model** and type in the model name in the text box below. You can **overwrite an existing model** by selecting this option and select the model you want from the dropdown below.
- :::image type="content" source="../media/train-model.png" alt-text="Create a new model" lightbox="../media/train-model.png":::
+ :::image type="content" source="../media/train-model.png" alt-text="Create a new training job" lightbox="../media/train-model.png":::
+
+If you have enabled [your project data to be split manually](tag-data.md) when you were tagging your data, you will see two training options:
+
+* **Automatically split the testing set from training data**: The data will be randomly split for each class between the training and testing sets, according to the percentages you choose. The default value is 80% for training and 20% for testing. To change these values, choose which set you want to change and enter the new value.
+* **Use a manual split of training and testing data**: Assign each document to either the training or testing set. This option requires that you first add files to the testing set when tagging your data.
+ 5. Click on the **Train** button.
-6. You can check the status of the training job in the same page. Only successfully completed tasks will generate models.
+6. You can check the status of the training job in the same page. Only successfully completed training jobs will generate models.
You can only have one training job running at a time. You cannot create or start other tasks in the same project.
cognitive-services View Model Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/view-model-evaluation.md
Previously updated : 11/02/2021 Last updated : 04/05/2022
# View the model's evaluation and details
-After your model has finished training, you can view the model details and see how well does it perform against the test set, which contains 10% of your data at random, which is created during [training](train-model.md#data-split). The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete there must be at least 10 files in your dataset. You must also have a [custom NER project](../quickstart.md) with a [trained model](train-model.md).
+After your model has finished training, you can view the model details and see how well it performs against the test set, which is created during [training](train-model.md) and contains a random 10% of your data. The test set consists of data that was not introduced to the model during the training process. For the evaluation process to complete, there must be at least 10 files in your dataset. You must also have a [custom NER project](../quickstart.md) with a [trained model](train-model.md).
## Prerequisites
See the [application development lifecycle](../overview.md#application-developme
2. Select **View model details** from the menu on the left side of the screen.
-3. In this page you can only view the sucessfuly trained models. You can click on the model name for more details.
+3. In this page you can only view the successfully trained models. You can click on the model name for more details.
4. You can find the **model-level** evaluation metrics under **Overview**, and the **entity-level** evaluation metrics under **Entity performance metrics**. The confusion matrix for the model is located under **Test set confusion matrix**
cognitive-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/service-limits.md
Previously updated : 11/02/2021 Last updated : 04/05/2022
Use this article to learn about the data and service limits when using Custom NE
* Maximum allowed length for your file is 128,000 characters, which is approximately 28,000 words or 56 pages.
-* Your [training dataset](how-to/train-model.md#data-split) should include at least 10 files and not more than 100,000 files.
+* Your [training dataset](how-to/train-model.md) should include at least 10 files and not more than 100,000 files.
## APIs limits
-* When using the Authoring API, there is a maximum of 10 POST requests and 100 GET requests per minute.
+* The Authoring API has a maximum of 10 POST requests and 100 GET requests per minute.
-* When using the Analyze API, there is a maximum of 20 GET or POST requests per minute.
+* The Analyze API has a maximum of 20 GET or POST requests per minute.
* The maximum file size per request is 125,000 characters. You can send up to 25 files as long as they collectively do not exceed 125,000 characters.
Custom text classification is only available select Azure regions. When you crea
* Model names have to be unique within the same project.
-* Model names must only contain alphnumeric characters,only letters and numbers, no spaces or special characters are allowed). Model name must have a maximum of 50 characters.
+* Model names can only contain alphanumeric characters (only letters and numbers; no spaces or special characters are allowed). Model names can have a maximum of 50 characters.
* You cannot rename your model after creation.
cognitive-services Smart Url Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/smart-url-refresh.md
+
+ Title: Smart URL refresh - question answering
+
+description: Use the question answering smart URL refresh feature to keep your knowledge base up to date.
+++++ Last updated : 02/28/2022++
+# Use smart URL refresh with a project
+
+Custom question answering gives you the ability to refresh your source contents by getting the latest content from a source URL and updating the corresponding knowledge base with one click. The service will ingest content from the URL and either create, merge, or delete question-and-answer pairs in the knowledge base.
+
+This functionality is provided to support scenarios where the content in the source URL changes frequently, such as the FAQ page of a product that's updated often. The service will refresh the source and update the knowledge base to the latest content while retaining any manual edits made previously.
+
+> [!NOTE]
+> This feature is only applicable to URL sources, and they must be refreshed individually, not in bulk.
+
+> [!IMPORTANT]
+> This feature is only available in the `2021-10-01` version of the Language API.
+
+## How it works
+
+If you have a knowledge base with a URL source that has changed, you can trigger a smart URL refresh to keep your knowledge base up to date. The service will scan the URL for updated content and generate QnA pairs. It will add any new QnA pairs to your knowledge base and also delete any pairs that have disappeared from the source (with exceptions&mdash;see below). It also merges old and new QnA pairs in some situations (see below).
+
+> [!IMPORTANT]
+> Because smart URL refresh can involve deleting old content from your knowledge base, you may want to [create a backup](./export-import-refresh.md) of your knowledge base before you do any refresh operations.
+
+You can trigger a URL refresh in Language Studio by opening your project, selecting the source in the **Manage sources** list, and selecting **Refresh URL**.
++
+You can also trigger a refresh programmatically using the REST API. See the **[Update Sources](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)** reference documentation for parameters and a sample request.
+
+## Smart refresh behavior
+
+When the user refreshes content using this feature, the knowledge base of QnA pairs may be updated in the following ways:
+
+### Delete old pair
+
+If the content of the URL is updated so that an existing QnA pair from the old content of the URL is no longer found in the source, that pair is deleted from the refreshed knowledge base. For example, if a QnA pair Q1A1 existed in the old knowledge base, but after refreshing, there's no A1 answer generated by the newly refreshed source, then the pair Q1A1 is considered outdated and is dropped from the knowledge base altogether.
+
+However, if the old QnA pairs have been manually edited in the authoring portal, they won't be deleted.
+
+### Add new pair
+
+If the content of the URL is updated in such a way that a new QnA pair exists which didn't exist in the old KB, then it's added to the KB. For example, if the service finds that a new answer A2 can be generated, then the QnA pair Q2A2 is inserted into the KB.
+
+### Merge pairs
+
+If the answer of a new QnA pair matches the answer of an old QnA pair, the two pairs are merged. The new pair's question is added as an alternate question to the old QnA pair. For example, consider Q3A3 exists in the old source. When you refresh the source, a new QnA pair Q3'A3 is introduced. In that case, the two QnA pairs are merged: Q3' is added to Q3 as an alternate question.
+
+If the old QnA pair has a metadata value, that data is retained and persisted in the newly merged pair.
+
+If the old QnA pair has follow-up prompts associated with it, then the following scenarios may arise:
+* If the prompt attached to the old pair is from the source being refreshed, then it's deleted, and the prompt of the new pair (if any exists) is appended to the newly merged QnA pair.
+* If the prompt attached to the old pair is from a different source, then it's maintained as-is and the prompt from the new question (if any exists) is appended to the newly merged QnA pair.
++
+#### Merge example
+See the following example of a merge operation with differing questions and prompts:
+
+|Source iteration|Question |Answer |Prompts |
+||||--|
+|old |"What is the new HR policy?" | "You may have to choose among the following options:" | P1, P2 |
+|new |"What is the new payroll policy?" | "You may have to choose among the following options:" | P3, P4 |
+
+The prompts P1 and P2 come from the original source and are different from prompts P3 and P4 of the new QnA pair. They both have the same answer, `You may have to choose among the following options:`, but it leads to different prompts. In this case, the resulting QnA pair would look like this:
+
+|Question |Answer |Prompts |
+|||--|
+|"What is the new HR policy?" </br>(alternate question: "What is the new payroll policy?") | "You may have to choose among the following options:" | P3, P4 |
+
+#### Duplicate answers scenario
+
+When the original source has two or more QnA pairs with the same answer (as in, Q1A1 and Q2A1), the merge behavior may be more complex.
+
+If these two QnA pairs have individual prompts attached to them (for example, Q1A1+P1 and Q2A1+P2), and the refreshed source content has a new QnA pair generated with the same answer A1 and a new prompt P3 (Q1'A1+P3), then the new question will be added as an alternate question to the original pairs (as described above). But all of the original attached prompts will be overwritten by the new prompt. So the final pair set will look like this:
+
+|Question |Answer |Prompts |
+|||--|
+|Q1 </br>(alternate question: Q1') | A1 | P3 |
+|Q2 </br>(alternate question: Q1') | A1 | P3 |
+
+## Next steps
+
+* [Question answering quickstart](/azure/cognitive-services/language-service/question-answering/quickstart/sdk?pivots=studio)
+* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Yes, you can make one request with multiple recipients. Follow this [quickstart]
The 202 returned by the service means that your message has been queued to be sent and not delivered. Use this [quickstart](../../quickstarts/sms/handle-sms-events.md) to subscribe to delivery report events and troubleshoot. Once the events are configured, inspect the "deliveryStatus" field of your delivery report to verify delivery success/failure.
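
As an illustration only, the following sketch shows how a delivery report handler might inspect that field once the Event Grid subscription from the quickstart is in place. The Azure Functions handler shape and the `"Delivered"` status value are assumptions for the sketch; only the `deliveryStatus` field name comes from this article.

```javascript
// Hypothetical Azure Functions (Event Grid trigger) handler for an SMS delivery report event.
module.exports = async function (context, eventGridEvent) {
    const report = eventGridEvent.data || {};
    if (report.deliveryStatus === "Delivered") {
        context.log("Message delivered successfully.");
    } else {
        // Log the full report so failed or pending deliveries can be investigated.
        context.log(`Delivery not confirmed. Status: ${report.deliveryStatus}`, report);
    }
};
```
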
+### How do I send shortened URLs in messages?
+Shortened URLs are a good way to keep messages short and readable. However, US carriers prohibit the use of free, publicly available URL shortener services because bad actors use 'free-public' URL shorteners to evade detection and get their spam messages through text messaging platforms. When sending messages in the US, we encourage using custom URL shorteners to create URLs with a dedicated domain that belongs to your brand. Many US carriers block SMS traffic that contains publicly available URL shorteners.
+
+Below is a list with examples of common URL shorteners you should avoid to maximize deliverability:
+- bit.ly
+- goo.gl
+- tinyurl.com
+- Tiny.cc
+- lc.chat
+- is.gd
+- soo.gd
+- s2r.co
+- Clicky.me
+- budurl.com
+- bc.vc
+ ## Opt-out handling ### How does Azure Communication Services handle opt-outs for toll-free numbers?
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
+
+ Title: Azure Communication Services Pre-Call diagnostics
+
+description: Overview of Pre-Call Diagnostic APIs
+++++ Last updated : 04/01/2021++++
+# Pre-Call diagnostic
++
+The Pre-Call API enables developers to programmatically validate a client's readiness to join an Azure Communication Services call. The Pre-Call APIs can be accessed through the Calling SDK. They provide multiple diagnostics, including device, connection, and call quality. Pre-Call APIs are available only for Web (JavaScript). We will be enabling these capabilities across platforms in the future; please provide us feedback on what platforms you would like to see Pre-Call APIs on.
+
+## Accessing Pre-Call APIs
+
+To access the Pre-Call API, you will need to initialize a `callClient` and provision an Azure Communication Services access token. Then you can access the `NetworkTest` feature and run it.
+
+```javascript
+import { CallClient, Features} from "@azure/communication-calling";
+import { AzureCommunicationTokenCredential } from '@azure/communication-common';
+
+// Provision a user access token for your Communication Services resource; "<token>" is a placeholder.
+const tokenCredential = new AzureCommunicationTokenCredential("<token>");
+// Create the calling client so the NetworkTest feature can be accessed from it.
+const callClient = new CallClient();
+const networkTest = await callClient.feature(Features.NetworkTest).beginTest(tokenCredential);
+
+```
+
+Once it finishes running, developers can access the result object.
+
+## Diagnostic results
+
+The Pre-Call API returns a full diagnostic of the device including details like device permissions, availability and compatibility, call quality stats and in-call diagnostics. The results are returned as a `CallDiagnosticsResult` object.
+
+```javascript
+
+export declare type CallDiagnosticsResult = {
+ deviceAccess: Promise<DeviceAccess>;
+ deviceEnumeration: Promise<DeviceEnumeration>;
+ inCallDiagnostics: Promise<InCallDiagnostics>;
+ browserSupport?: Promise<DeviceCompatibility>;
+ callMediaStatistics?: Promise<MediaStatsCallFeature>;
+};
+
+```
+
+Individual result objects can be accessed as follows, using the `networkTest` constant above.
+
+### Browser support
+Browser compatibility check. Checks for `Browser` and `OS` compatibility and provides a `Supported` or `NotSupported` value back.
+
+```javascript
+
+const browserSupport = await networkTest.browserSupport;
+ if(browserSupport) {
+ console.log(browserSupport.browser)
+ console.log(browserSupport.os)
+ }
+
+```
+
+If the test fails and the user's browser is `NotSupported`, the easiest way to fix that is to ask the user to switch to a supported browser. Refer to the supported browsers in our [documentation](./calling-sdk-features.md#javascript-calling-sdk-support-by-os-and-browser).
+
+### Device access
+Permission check. Checks whether video and audio devices are available from a permissions perspective. Provides `boolean` value for `audio` and `video` devices.
+
+```javascript
+
+ const deviceAccess = await networkTest.deviceAccess;
+ if(deviceAccess) {
+ console.log(deviceAccess.audio)
+ console.log(deviceAccess.video)
+ }
+
+```
+
+If the test fails and the permissions are false for audio and video, the user shouldn't continue into joining a call. Rather, you will need to prompt the user to enable the permissions. To do this, it is best to provide specific instructions on how to grant permission access based on the OS, version, and browser they are on. For more information on permissions, check out our [recommendations](https://techcommunity.microsoft.com/t5/azure-communication-services/checklist-for-advanced-calling-experiences-in-mobile-web/ba-p/3266312).
+
+### Device enumeration
+Device availability. Checks whether microphone, camera and speaker devices are detected in the system and ready to use. Provides an `Available` or `NotAvailable` value back.
+
+```javascript
+
+ const deviceEnumeration = await networkTest.deviceEnumeration;
+ if(deviceEnumeration) {
+ console.log(deviceEnumeration.microphone)
+ console.log(deviceEnumeration.camera)
+ console.log(deviceEnumeration.speaker)
+ }
+
+```
+
+If devices are not available, the user shouldn't continue into joining a call. Rather, the user should be prompted to check device connections to ensure any headsets, cameras, or speakers are properly connected. For more information on device management, check out our [documentation](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#device-management).
+
+### InCall diagnostics
+Performs a quick call to check in-call metrics for audio and video and provides results back. Includes connectivity (`connected`, boolean), bandwidth quality (`bandWidth`, `'Bad' | 'Average' | 'Good'`) and call diagnostics for audio and video (`diagnostics`). The diagnostics provided are `jitter`, `packetLoss`, and `rtt`, and the results are generated using a simple quality grade (`'Bad' | 'Average' | 'Good'`).
+
+```javascript
+
+ const inCallDiagnostics = await networkTest.inCallDiagnostics;
+ if(inCallDiagnostics) {
+ console.log(inCallDiagnostics.connected)
+ console.log(inCallDiagnostics.bandWidth)
+ console.log(inCallDiagnostics.diagnostics.audio)
+ console.log(inCallDiagnostics.diagnostics.video)
+ }
+
+```
+
+At this step, there are multiple failure points to watch out for:
+
+- If the connection fails, the user should be prompted to recheck their network connectivity. Connection failures can also be attributed to network conditions like DNS, proxies, or firewalls. For more information on recommended network settings, check out our [documentation](network-requirements.md).
+- If bandwidth is `Bad`, the user should be prompted to try a different network or verify the bandwidth availability on their current one. Ensure no other high-bandwidth activities are taking place.
+
+### Media stats
+For granular stats on quality metrics like jitter, packet loss, and rtt, `callMediaStatistics` is provided as part of the `NetworkTest` feature. You can subscribe to the call media stats to get the full collection of them.
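+
+As a minimal sketch, assuming nothing beyond the `callMediaStatistics` promise declared on `CallDiagnosticsResult` above, you can await it from the same `networkTest` result and inspect what it returns; how you then subscribe to individual stats depends on the Media Stats feature and isn't shown here.
+
+```javascript
+// Sketch: read the media statistics feature returned by the diagnostic run.
+// Only callMediaStatistics (declared on CallDiagnosticsResult above) is assumed here.
+const mediaStatsFeature = await networkTest.callMediaStatistics;
+if (mediaStatsFeature) {
+    // Inspect the returned feature object and wire up stats collection per the Media Stats docs.
+    console.log(mediaStatsFeature);
+}
+```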
+
+## Pricing
+
+When the Pre-Call diagnostic test runs, behind the scenes it uses calling minutes to run the diagnostic. The test lasts for roughly 1 minute, using up 1 minute of calling, which is charged at the standard rate of $0.004 per participant per minute. For the Pre-Call diagnostic, the charge will be 1 participant x 1 minute = $0.004.
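+
+To make that arithmetic explicit, here's a trivial estimate using the rate and duration quoted above (adjust the values if pricing changes):
+
+```javascript
+// Estimated charge for a single Pre-Call diagnostic run, using the rate from this article.
+const ratePerParticipantMinute = 0.004; // USD per participant per minute
+const participants = 1;
+const minutes = 1;
+console.log(`Estimated cost: $${(ratePerParticipantMinute * participants * minutes).toFixed(3)}`);
+// => Estimated cost: $0.004
+```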
+
+## Next steps
+
+This feature is currently in private preview. Please provide feedback on the API design, capabilities and pricing. Feedback is key for the team to move forward and push the feature into public preview and general availability.
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
Azure Synapse Link is available for Azure Cosmos DB SQL API or for Azure Cosmos
* [Enable Azure Synapse Link for your Azure Cosmos DB accounts](#enable-synapse-link) * [Create an analytical store enabled container](#create-analytical-ttl)
-* [Enable analytical store on an existing container](#update-analytical-ttl)
-* [Optional - Update analytical store ttl for an container](#update-analytical-ttl)
+* [Enable analytical store in an existing container](#update-analytical-ttl)
+* [Optional - Update analytical store ttl for a container](#update-analytical-ttl)
+* [Optional - Disable analytical store in a container](#disable-analytical-store)
* [Connect your Azure Cosmos database to an Azure Synapse workspace](#connect-to-cosmos-database) * [Query the analytical store using Azure Synapse Spark Pool](#query-analytical-store-spark) * [Query the analytical store using Azure Synapse serverless SQL pool](#query-analytical-store-sql-on-demand)
The following options create a container with analytical store by using PowerShe
* [Create an Azure Cosmos DB SQL API container](/powershell/module/az.cosmosdb/new-azcosmosdbsqlcontainer)
-## <a id="update-analytical-ttl"></a> Enable analytical store on an existing container
+## <a id="update-analytical-ttl"></a> Enable analytical store in an existing container
> [!NOTE] > You can turn on analytical store on existing Azure Cosmos DB SQL API containers. This capability is generally available and can be used for production workloads.
Please note the following details when enabling Azure Synapse Link on your exist
* Currently existing MongoDB API collections are not supported. The alternative is to migrate the data into a new collection, created with analytical store turned on.
-> [!NOTE]
-> Currently it is not possible to turn off analytical store from a container. Click [here](analytical-store-introduction.md#analytical-store-pricing) for more information about analytical store pricing.
- ### Azure portal 1. Sign in to the [Azure portal](https://portal.azure.com/).
Set the `analytical TTL` property to `-1` for infinite retention or use a positi
After the analytical store is enabled with a particular TTL value, you may want to update it to a different valid value. You can update the value by using the Azure portal, Azure CLI, PowerShell, or Cosmos DB SDKs. For information on the various Analytical TTL config options, see the [analytical TTL supported values](analytical-store-introduction.md#analytical-ttl) article. - ### Azure portal If you created an analytical store enabled container through the Azure portal, it contains a default `analytical TTL` of `-1`. Use the following steps to update this value:
The following links show how to update containers analytical TTL by using PowerS
* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection) * [Azure Cosmos DB SQL API](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer)
+## <a id="disable-analytical-store"></a> Optional - Disable analytical store in a container
+
+Analytical store can be disabled in SQL API containers using PowerShell, by updating `-AnalyticalStorageTtl` (analytical Time-To-Live) to `0`. Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
+
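+For example, a minimal PowerShell sketch, assuming placeholder resource group, account, database, and container names:
+
+```powershell
+# Sketch: disable analytical store on a SQL API container by setting the analytical TTL to 0.
+# The resource names below are placeholders; remember that this action currently can't be undone.
+Update-AzCosmosDBSqlContainer `
+    -ResourceGroupName "myResourceGroup" `
+    -AccountName "myCosmosAccount" `
+    -DatabaseName "myDatabase" `
+    -Name "myContainer" `
+    -AnalyticalStorageTtl 0
+```
+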
+Currently, analytical store can't be disabled in MongoDB API collections.
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
description: Azure Cosmos DB's point-in-time restore feature helps to recover da
Previously updated : 03/24/2022 Last updated : 04/06/2022 # Continuous backup with point-in-time restore in Azure Cosmos DB Azure Cosmos DB's point-in-time restore feature helps in multiple scenarios such as the following:
Azure Cosmos DB performs data backup in the background without consuming any ext
The available time window for restore (also known as retention period) is the lower value of the following two: *30 days back in past from now* or *up to the resource creation time*. The point in time for restore can be any timestamp within the retention period. In strong consistency mode, backup taken in the write region is more up to date when compared to the read regions. Read regions can lag behind due to network or other transient issues. While doing restore, you can [get latest restorable timestamp](get-latest-restore-timestamp.md) for a given resource in that region to ensure that the resource has taken backups up to the given timestamp and can restore in that region.
-Currently, you can restore the Azure Cosmos DB account for SQL API or MongoDB contents point in time to another account via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template).
+Currently, you can restore the contents of an Azure Cosmos DB account for SQL API or MongoDB to another account at a point in time via the [Azure portal](restore-account-continuous-backup.md#restore-account-portal), the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI), [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell), or [Azure Resource Manager templates](restore-account-continuous-backup.md#restore-arm-template). Restore for the Table and Gremlin APIs is in preview and supported through the [Azure CLI](restore-account-continuous-backup.md#restore-account-cli) (az CLI) and [Azure PowerShell](restore-account-continuous-backup.md#restore-account-powershell).
## Backup storage redundancy
See [How do customer-managed keys affect continuous backups?](./how-to-setup-cmk
Currently the point in time restore functionality has the following limitations:
-* Only Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. Cassandra, Table, and Gremlin APIs are not yet supported.
+* Azure Cosmos DB APIs for SQL and MongoDB are supported for continuous backup. The Cassandra API is not supported at present.
+
+* Table API and Gremlin API are in preview and supported via PowerShell and Azure CLI.
* Multi-regions write accounts are not supported.
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
Previously updated : 11/18/2021 Last updated : 03/02/2022 -+ # Get the latest restorable timestamp for continuous backup accounts
-This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time for SQL containers and MongoDB collections using Azure PowerShell and Azure CLI. You can see the request and response format for the PowerShell and CLI commands.
+This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time for SQL containers, Table API tables (in preview), Graph API graphs (in preview), and MongoDB collections using Azure PowerShell and Azure CLI. You can see the request and response format for the PowerShell and CLI commands.
## SQL container
Latest restorable timestamp for an account is minimum of restorable timestamps o
Wednesday, November 3, 2021 8:33:49 PM ```
+## Gremlin Graph Backup information
+
+### PowerShell
+
+```powershell
+Get-AzCosmosDBGremlinGraphBackupInformation `
+ -AccountName <System.String> `
+ -GremlinDatabaseName <System.String> `
+ [-DefaultProfile <Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer>] `
+ -Location <System.String> `
+ -Name <System.String> `
+ -ResourceGroupName <System.String> [<CommonParameters>]
+```
+
+**Sample request:**
+
+```powershell
+Get-AzCosmosDBGremlinGraphBackupInformation `
+ -ResourceGroupName "rg" `
+ -AccountName "amisigremlinpitracc1" `
+ -GremlinDatabaseName "db1" `
+ -Name "graph1" `
+ -Location "eastus"
+```
+
+**Sample response (In UTC Format):**
+
+```console
+LatestRestorableTimestamp
+-
+3/1/2022 2:19:14 AM
+```
+
+### CLI
+
+```azurecli
+az cosmosdb gremlin retrieve-latest-backup-time \
+ -g {resourcegroup} \
+ -a {accountname} \
+ -d {db_name} \
+ -c {graph_name} \
+ -l {location}
+```
+
+**Sample request:**
+
+```azurecli
+az cosmosdb gremlin retrieve-latest-backup-time \
+ -g "rg" \
+ -a "amisigremlinpitracc1" \
+ -d "db1" \
+ -c "graph1" \
+ -l "eastus"
+```
+
+**Sample response:**
+
+```console
+{
+ "continuousBackupInformation": {
+ "latestRestorableTimestamp": "3/2/2022 5:31:13 AM"
+ }
+}
+```
+
+## Table Backup information
+
+### PowerShell
+
+```powershell
+Get-AzCosmosDBTableBackupInformation `
+ -AccountName <System.String> `
+ [-DefaultProfile <Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer>] `
+ -Location <System.String> `
+ -Name <System.String> `
+ -ResourceGroupName <System.String> [<CommonParameters>]
+```
+
+**Sample request:**
+
+```powershell
+Get-AzCosmosDBTableBackupInformation `
+ -ResourceGroupName "rg" `
+ -AccountName "amisitablepitracc1" `
+ -Name "table1" `
+ -Location "eastus"
+```
+
+**Sample response (In UTC Format):**
+
+```console
+LatestRestorableTimestamp
+-
+3/2/2022 2:19:15 AM
+```
+
+### CLI
+
+```azurecli
+az cosmosdb table retrieve-latest-backup-time \
+ -g {resourcegroup} \
+ -a {accountname} \
+ -c {table_name} \
+ -l {location}
+```
+
+**Sample request:**
+
+```azurecli
+az cosmosdb table retrieve-latest-backup-time \
+ -g "rg" \
+ -a "amisitablepitracc1" \
+ -c "table1" \
+ -l "eastus"
+```
+
+**Sample response:**
+
+```console
+{
+ "continuousBackupInformation": {
+ "latestRestorableTimestamp": "3/2/2022 5:33:47 AM"
+ }
+}
+```
+ ## Next steps * [Introduction to continuous backup mode with point-in-time restore.](continuous-backup-restore-introduction.md)
cosmos-db Latest Restore Timestamp Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/latest-restore-timestamp-continuous-backup.md
Previously updated : 12/03/2021 Last updated : 03/03/2022 -+ # Latest restorable timestamp for Azure Cosmos DB accounts with continuous backup mode Azure Cosmos DB offers an API to get the latest restorable timestamp of a container. This API is available for accounts that have continuous backup mode enabled. Latest restorable timestamp represents the latest timestamp in UTC format up to which your data has been successfully backed up. Using this API, you can get the restorable timestamp to trigger the live account restore or monitor that your data is being backed up on time. This API also takes the account location as an input parameter and returns the latest restorable timestamp for the given container in this location. If an account exists in multiple locations, then the latest restorable timestamp for a container in different locations could be different because the backups in each location are taken independently.
-By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of latest restorable timestamp api, how it gets calculated and use cases for it. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for SQL and MongoDB accounts.
+By default, the API only works at the container level, but it can be easily extended to work at the database or account level. This article helps you understand the semantics of the latest restorable timestamp API, how it's calculated, and its use cases. To learn more, see [how to get the latest restore timestamp](get-latest-restore-timestamp.md) for SQL, Table (preview), Graph API (preview), and MongoDB accounts.
## Use cases
cosmos-db Provision Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/provision-account-continuous-backup.md
description: Learn how to provision an account with continuous backup and point
Previously updated : 07/29/2021 Last updated : 04/06/2022 -+ ms.devlang: azurecli
This article explains how to provision an account with continuous backup and poi
> > * If the account is of type SQL API or API for MongoDB. > * If the account has a single write region.
-> * If the account isn't enabled with customer managed keys(CMK).
+ ## <a id="provision-portal"></a>Provision using Azure portal
cosmos-db Sql Api Java Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-application.md
This Java web application tutorial shows you how to use the [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service to store and access data from a Java application hosted on Azure App Service Web Apps. In this article, you will learn: * How to build a basic JavaServer Pages (JSP) application in Eclipse.
-* How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-documentdb-java).
+* How to work with the Azure Cosmos DB service using the [Azure Cosmos DB Java SDK](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos).
This Java application tutorial shows you how to create a web-based task-management application that enables you to create, retrieve, and mark tasks as complete, as shown in the following image. Each of the tasks in the ToDo list is stored as JSON documents in Azure Cosmos DB.
cosmos-db Sql Api Sdk Async Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-async-java.md
-# Azure Cosmos DB Async Java SDK for SQL API: Release notes and resources
+# Azure Cosmos DB Async Java SDK for SQL API (legacy): Release notes and resources
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md) > * [Node.js](sql-api-sdk-node.md) > * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
+> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
+> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
+> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
> * [Spring Data v3](sql-api-sdk-java-spring-v3.md) > * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md) > * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
Release history is maintained in the Azure Cosmos DB Java SDK source repo. For a
[!INCLUDE [cosmos-db-sdk-faq](../includes/cosmos-db-sdk-faq.md)] ## See also
-To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
+To learn more about Cosmos DB, see [Microsoft Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) service page.
cosmos-db Sql Api Sdk Java Spring V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java-spring-v2.md
-# Spring Data Azure Cosmos DB v2 for Core (SQL) API: Release notes and resources
+# Spring Data Azure Cosmos DB v2 for Core (SQL) API (legacy): Release notes and resources
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"]
> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md) > * [Node.js](sql-api-sdk-node.md) > * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
+> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
+> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
+> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
> * [Spring Data v3](sql-api-sdk-java-spring-v3.md) > * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md) > * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
Learn more about the [Spring Framework](https://spring.io/projects/spring-framew
Learn more about [Spring Boot](https://spring.io/projects/spring-boot).
-Learn more about [Spring Data](https://spring.io/projects/spring-data).
+Learn more about [Spring Data](https://spring.io/projects/spring-data).
cosmos-db Sql Api Sdk Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-sdk-java.md
-# Azure Cosmos DB Java SDK for SQL API: Release notes and resources
+# Azure Cosmos DB Java SDK for SQL API (legacy): Release notes and resources
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)] > [!div class="op_single_selector"] > * [.NET SDK v3](sql-api-sdk-dotnet-standard.md)
> * [.NET Change Feed SDK v2](sql-api-sdk-dotnet-changefeed.md) > * [Node.js](sql-api-sdk-node.md) > * [Java SDK v4](sql-api-sdk-java-v4.md)
-> * [Async Java SDK v2](sql-api-sdk-async-java.md)
-> * [Sync Java SDK v2](sql-api-sdk-java.md)
-> * [Spring Data v2](sql-api-sdk-java-spring-v2.md)
+> * [Async Java SDK v2 (legacy)](sql-api-sdk-async-java.md)
+> * [Sync Java SDK v2 (legacy)](sql-api-sdk-java.md)
+> * [Spring Data v2 (legacy)](sql-api-sdk-java-spring-v2.md)
> * [Spring Data v3](sql-api-sdk-java-spring-v3.md) > * [Spark 3 OLTP Connector](sql-api-sdk-java-spark-v3.md) > * [Spark 2 OLTP Connector](sql-api-sdk-java-spark.md)
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
Title: Analyze Azure costs with the Power BI App
description: This article explains how to install and use the Cost Management Power BI App. Previously updated : 10/07/2021 Last updated : 04/05/2022
The following reports are available in the app.
**Getting Started** - Provides useful links to documentation and links to provide feedback.
-**Account overview** - The report shows a monthly summary of information, including:
+**Account overview** - The report shows the current billing month summary of information, including:
- Charges against credits - New purchases
cost-management-billing Reservation Amortization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-amortization.md
+
+ Title: View amortized reservation costs
+
+description: This article helps you understand what amortized reservation costs are and how to view them in cost analysis.
+++++ Last updated : 04/04/2022+++
+# View amortized reservation costs
+
+This article helps you understand what amortized reservation costs are and how to view them in cost analysis. When you buy a reservation, you're normally committing to a one-year or three-year plan to save money compared to pay-as-you-go costs. You can choose to pay for the reservation up front or with monthly payments. If you pay up front, the one-time payment is charged to your subscription. If your organization needs to charge back or show back partial costs of the reservation to users or departments that use it, then you might need to determine what the monthly or daily cost of the reservation is. _Amortization_ is the process of breaking the one-time cost into periodic costs.
+
+However, if your organization doesn't charge back or show back reservation use to the users or departments that use them, then you might not need to worry about amortized costs.
+
+## How Azure calculates amortized costs
+
+To understand how amortized costs are shown in cost analysis, let's look at some examples.
+
+First, let's look at a one-year virtual machine reservation that was purchased on January 1. Depending on your view, instead of seeing a $365 purchase on January 1, 2022, you'll see a $1.00 purchase every day from January 1, 2022 to December 31, 2022. In addition to basic amortization, the costs are also reallocated and associated to the specific resources that used the reservation. For example, if the $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of _UnusedReservation_. Unused reservation costs can be seen only when viewing amortized cost.
+
+Now, let's look at a one-year reservation purchased at some other point in a month. For example, if you buy a reservation on May 26, 2022 with an upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26, 2022 through May 25, 2023. In this example, the daily cost would be the same for every day. However, the monthly cost will vary because of the varying number of days in a month. Also, if the reservation period includes a leap year, costs for the leap year are divided evenly by 366.
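
The daily figure is just the purchase price divided by the number of days in the term. The following PowerShell sketch reproduces that arithmetic for illustration only; it doesn't call any Cost Management API, and the rounding to two decimal places mirrors what the portal displays:

```powershell
# Illustrative only: reproduces the amortization division described above.
function Get-DailyAmortizedCost {
    param (
        [decimal]$PurchaseAmount,  # upfront (one-time) reservation cost
        [datetime]$TermStart,      # reservation start date
        [int]$TermYears = 1        # one-year or three-year term
    )
    # A one-year term that spans a leap day has 366 days; otherwise 365.
    $termDays = ($TermStart.AddYears($TermYears) - $TermStart).Days
    [math]::Round($PurchaseAmount / $termDays, 2)
}

# $365 one-year reservation bought January 1, 2022 -> 1.00 per day (365-day term).
Get-DailyAmortizedCost -PurchaseAmount 365 -TermStart ([datetime]"2022-01-01")

# $12,016 one-year reservation bought October 23, 2019 -> 32.83 per day (366-day term, includes the 2020 leap day).
Get-DailyAmortizedCost -PurchaseAmount 12016 -TermStart ([datetime]"2019-10-23")
```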
+
+If you pay monthly, the monthly fee is divided by the number of days in that month and spread evenly across May 26, 2022 through June 25, 2022, with the next month's fee spread across June 26, 2022 through July 25, 2022, and so on. An upfront (one-time) reservation purchase is shown as an example in this article. However, the amortization of daily costs is the same for a reservation bought with monthly payments.
+
+Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. Depending on your view in Cost analysis, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases currently.
+
+## Metrics affect how costs are shown
+
+In Cost analysis, you view costs with a metric: either **Actual cost** or **Amortized cost**. Each metric affects how data is shown for your reservation charges.
+
+**Actual cost** - Shows the purchase as it appears on your bill. For example, if you bought a one-year reservation for $1200 in January 2022, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month.
+
+**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. With the same example above, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _unused reservation_ costs are attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource.
+
+## View amortized costs
+
+By default, cost analysis shows charges as they appear on your bill. The charges can be shown either as actual costs or amortized over the course of your reservation period.
+
+> [!NOTE]
+> You can buy a reservation with a pay-as-you-go (MS-AZR-0003P) subscription. However, Cost Analysis doesn't support viewing amortized reservation costs for a pay-as-you-go subscription.
+
+Depending on the view you use in cost analysis, you'll see different reservation costs. For example:
+
+When you use the **DailyCosts** view with a date filter applied, you can easily spot when a reservation was purchased by the increase in actual daily costs. If you try to view costs with the **Amortized cost** metric, you'll see the same results as **Actual Cost**.
+
+Let's look at an example: a one-year reservation purchased for $12,016.00 on October 23, 2019. The term ends on October 23, 2020, and it includes a leap-year day, so the term's duration is 366 days.
+
+In the Azure portal, navigate to cost analysis for your scope. For example, **Cost Management** > **Cost analysis**.
+
+1. Select a date range that includes a period of the reservation term.
+2. Add a filter for **Pricing Model: Reservation** to see only reservation costs.
+3. Then, set **Granularity** to **Daily**. Here's an example showing the purchase with the date range set from October through November 2019.
+ :::image type="content" source="./media/reservation-amortization/reservation-purchase.png" alt-text="Screenshot showing a reservation purchase in Cost analysis." lightbox="./media/reservation-amortization/reservation-purchase.png" :::
+4. Under **Scope** and next to the cost shown, select the down arrow symbol and then select **Amortized cost** metric. Here's an example showing the daily cost of all reservations for the selected date range. For the highlighted day, the daily cost is about $37.90. Azure accounts for costs to further decimal places, but only shows costs to two decimal places.
+ :::image type="content" source="./media/reservation-amortization/daily-cost-all-reservations-amortized.png" alt-text="Screenshot showing daily amortized cost for all reservations in Cost analysis." lightbox="./media/reservation-amortization/daily-cost-all-reservations-amortized.png" :::
+5. If you have multiple reservations (the example above does), then use the **Group by** list to group the results by **Reservation name**. Here's an example showing the daily amortized cost of the reservation named `VM_WUS_DS3_Upfront` for $32.83. In this example, Azure determined the cost by: $12,016 / 366 = $32.83 per day. Because the reservation term includes a leap year (2020), 366 is used to divide the total cost, not 365.
+ :::image type="content" source="./media/reservation-amortization/daily-cost-amortized-specific-reservation.png" alt-text="Screenshot showing the daily amortized cost for a specific reservation in Cost analysis." lightbox="./media/reservation-amortization/daily-cost-amortized-specific-reservation.png" :::
+6. Next, change the **Granularity** to **Monthly** and expand the date range. The following example shows varying monthly costs for reservations. The cost varies because the number of days in each month differs. November has 30 days, so the daily cost of $32.83 \* 30 = ~$984.90.
+ :::image type="content" source="./media/reservation-amortization/monthly-cost-amortized-reservations.png" alt-text="Screenshot showing the monthly amortized cost for a specific reservation in Cost analysis." lightbox="./media/reservation-amortization/monthly-cost-amortized-reservations.png" :::
+
+## View reservation resource amortized costs
+
+To charge back or show back costs for a reservation, you need to know which resources used a reservation. Use the following steps to see amortized costs for individual resources. In this example, we'll examine November 2019, which was the first month of full reservation use.
+
+1. Select a date range in the reservation term where you want to view resources that used the reservation.
+2. Add a filter for **Pricing Model: Reservation** to see only reservation costs.
+3. Set **Granularity** to **Monthly**.
+4. Under **Scope** and next to the cost shown, select the down arrow symbol and then select the **Amortized** cost metric.
+5. If you have multiple reservations, use the **Group by** list to group the results by **Reservation name**.
+6. In the chart, select a reservation. A filter is added for the reservation name.
+7. In the **Group by** list, select **Resource**. The chart shows the resources that used the reservation. In the following example image, November 2019 had eight resources that used the reservation. There's one unused item, which is the subscription that was used to buy the reservation.
+ :::image type="content" source="./media/reservation-amortization/reservation-cost-resource-chart.png" alt-text="Screenshot showing the amortized cost of all the reservations for a specific month." lightbox="./media/reservation-amortization/reservation-cost-resource-chart.png" :::
+8. To see the cost more easily for individual resources, select **Table** in the chart list. Expand items as needed. Here's an example for November 2019 showing the amortized reservation costs for the eight resources that used the reservation. The highlighted cost is the unused portion of the reservation.
+ :::image type="content" source="./media/reservation-amortization/reservation-cost-resource-table.png" alt-text="Screenshot showing the amortized cost of all resources that used a reservation for a specific month." lightbox="./media/reservation-amortization/reservation-cost-resource-table.png" :::
+
+Another easy way to view amortized reservation costs is to use the **Reservations (preview)** view. To navigate to it, in Cost analysis, in the top menu under **Cost by resource**, select the **Reservations (preview)** view.
+++
+## Next steps
+
+- Read [Charge back Azure Reservation costs](charge-back-usage.md) to learn more about charge back processes.
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Additionally, you can publish your transformed data to data stores such as Azure
Data Factory contains a series of interconnected systems that provide a complete end-to-end platform for data engineers. This visual guide provides a detailed overview of the complete Data Factory architecture:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/31/2022 Last updated : 04/06/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in April include:
-[New Defender for Servers plans](#new-defender-for-servers-plans)
+- [New Defender for Servers plans](#new-defender-for-servers-plans)
+- [Relocation of custom recommendations](#relocation-of-custom-recommendations)
### New Defender for Servers plans
If you have been using Defender for Servers until now – no action is required.
In addition, Defender for Cloud also begins gradual support for the [Defender for Endpoint unified agent for Windows Server 2012 R2 and 2016 (Preview)](https://techcommunity.microsoft.com/t5/microsoft-defender-for-endpoint/defending-windows-server-2012-r2-and-2016/ba-p/2783292). Defender for Servers Plan 1 deploys the new unified agent to Windows Server 2012 R2 and 2016 workloads. Defender for Servers Plan 2 deploys the legacy agent to Windows Server 2012 R2 and 2016 workloads, and will deploy the unified agent soon after it is approved for general use.
+### Relocation of custom recommendations
+
+Custom recommendations are those created by users, and they have no impact on the secure score. Custom recommendations can now be found under the **All recommendations** tab.
+
+Use the new "recommendation type" filter to locate custom recommendations.
+
+Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
+ ## March 2022 Updates in March include:
Updates in March include:
- [Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecated-preview-alert-armmcas_activityfromanonymousipaddresses) - [Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moved-the-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices) - [Deprecated the recommendation to use service principals to protect your subscriptions](#deprecated-the-recommendation-to-use-service-principals-to-protect-your-subscriptions)-- [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013)
+- [Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative](#legacy-implementation-of-iso-27001-replaced-with-new-iso-270012013-initiative)
- [Deprecated Microsoft Defender for IoT device recommendations](#deprecated-microsoft-defender-for-iot-device-recommendations) - [Deprecated Microsoft Defender for IoT device alerts](#deprecated-microsoft-defender-for-iot-device-alerts) - [Posture management and threat protection for AWS and GCP released for general availability (GA)](#posture-management-and-threat-protection-for-aws-and-gcp-released-for-general-availability-ga)
Learn more:
- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) - [Workflow of Windows Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)
-### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013
+### Legacy implementation of ISO 27001 replaced with new ISO 27001:2013 initiative
The legacy implementation of ISO 27001 has been removed from Defender for Cloud's regulatory compliance dashboard. If you're tracking your ISO 27001 compliance with Defender for Cloud, onboard the new ISO 27001:2013 standard for all relevant management groups or subscriptions.
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/20/2022 Last updated : 04/06/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--| | [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 |
-| [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 |
| [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | May 2022 | ### Changes to recommendations for managing endpoint protection solutions
Learn more:
- [Defender for Cloud's supported endpoint protection solutions](supported-machines-endpoint-solutions-clouds-servers.md#endpoint-supported) - [How these recommendations assess the status of your deployed solutions](endpoint-protection-recommendations-technical.md)
-### Relocation of custom recommendations
-
-**Estimated date for change:** March 2022
-
-Custom recommendations are those created by a user, and have no impact on the secure score. Therefore, the custom recommendations are being relocated from the Secure score recommendations tab to the All recommendations tab.
-
-When the move occurs, the custom recommendations will be found via a new "recommendation type" filter.
-
-Learn more in [Create custom security initiatives and policies](custom-security-policies.md).
- ### Multiple changes to identity recommendations **Estimated date for change:** May 2022
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
Before you deploy your Enterprise IoT sensor, you will need to configure your se
| Tier | Requirements | |--|--|
- | **Minimum** | To support at least 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16 GB RAM of DDR4 or better<br>- 250 GB HDD |
+ | **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16 GB RAM of DDR4 or better<br>- 250 GB HDD |
| **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32 GB RAM of DDR4 or better<br>- 500 GB HDD | Make sure that your server or VM also has:
education-hub Azure Students Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/azure-students-program.md
services, including compute, network, storage, and databases. Any charges incurr
deducted from the credit. You can see your remaining credit on the [Azure Sponsorships portal](https://www.microsoftazuresponsorships.com/).
-After you exhaust your available credit or reach the end of 12 months, your Azure subscription becomes
-disabled. The Azure for Students subscription is not renewable. To continue using Azure, you may upgrade
-to a pay-as-you-go subscription in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade). If you decide
-not to upgrade at the end of 12 months or after you've exhausted your $100 credit, whichever occurs first,
-any products you've deployed are decommissioned and you won't be able to access them. You have 90 days
-from the end of your free subscription to upgrade to a pay-as-you-go subscription.
+After you exhaust your available credit or reach the end of 12 months, your Azure subscription becomes disabled.
+If you've reached the end of your 12 months and are still a student, you'll be able to renew your Azure for Students offer.
+We'll notify you shortly before your 12-month anniversary to let you know how to renew.
+If your student status is no longer valid when the offer expires, you won't be able to renew.
+In that case, if you want to continue using Azure services, you can upgrade to a pay-as-you-go subscription in the Azure portal.
To get detailed terms of use for Azure for Students, see the [offer terms](https://azure.microsoft.com/offers/ms-azr-0170p/).
To get detailed terms of use for Azure for Students, see the [offer terms](https
- [Get help with login errors](troubleshoot-login.md) - [Download software (Azure for Students)](download-software.md) - [Azure for Students Starter overview](azure-students-starter-program.md)-- [Microsoft Learn: a free online learning platform](/learn/)
+- [Microsoft Learn: a free online learning platform](/learn/)
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
For step #5, you should decide what kind of user experience you want to provide.
This article shows you how to **onboard as an Azure Event Grid partner** using the **Azure portal**. ## Communicate your interest in becoming a partner
-Fill out [this form](https://aka.ms/gridpartnerform) and contact the Event Grid team at [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com). We'll have a conversation with you providing detailed information on Partner EventsΓÇÖ use cases, personas, onboarding process, functionality, pricing, and more.
+Contact the Event Grid team at [GridPartner@microsoft.com](mailto:GridPartner@microsoft.com) to communicate your interest in becoming a partner. We'll have a conversation with you, providing detailed information on Partner Events' use cases, personas, onboarding process, functionality, pricing, and more.
## Prerequisites To complete the remaining steps, make sure you have:
expressroute Expressroute Howto Set Global Reach Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md
Previously updated : 03/05/2021 Last updated : 04/05/2022
This article helps you configure ExpressRoute Global Reach using the Azure portal. For more information, see [ExpressRoute Global Reach](expressroute-global-reach.md).
+> [!NOTE]
+> IPv6 support for ExpressRoute Global Reach is now in Public Preview.
+ ## Before you begin Before you start configuration, confirm the following criteria:
Before you start configuration, confirm the following criteria:
* If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections. * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-global-reach-list.png" alt-text="Screenshot of ExpressRoute circuits list.":::
- ## Enable connectivity Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription, and circuits that are different subscriptions. ### ExpressRoute circuits in the same Azure subscription
-1. Select the **Azure private** peering configuration.
+1. Select the **Overview** tab of your ExpressRoute circuit and then select **Add Global Reach** to open the *Add Global Reach* configuration page.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png" alt-text="Screenshot of ExpressRoute overview page.":::
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/overview.png" alt-text="Screenshot of ExpressRoute overview page.":::
-1. Select **Add Global Reach** to open the *Add Global Reach* configuration page.
+1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter a **/29 IPv4** subnet for the *Global Reach IPv4 subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, private peering subnet, or on-premises network. Select **Add** to add the circuit to the private peering configuration.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png" alt-text="Enable global reach from private peering":::
+ > [!NOTE]
+ > IPv6 support for ExpressRoute Global Reach is now in Public Preview. If you want to enable this feature for test workloads, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
-1. On the *Add Global Reach* configuration page, give a name to this configuration. Select the *ExpressRoute circuit* you want to connect this circuit to and enter in a **/29 IPv4** for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. DonΓÇÖt use the addresses in this subnet in your Azure virtual networks, private peering subnet, or on-premises network. Select **Add** to add the circuit to the private peering configuration.
-
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png" alt-text="Screenshot of adding Global Reach in private peering.":::
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration.png" alt-text="Screenshot of adding Global Reach in Overview tab.":::
1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png" alt-text="Screenshot of saving private peering configurations.":::
- ### ExpressRoute circuits in different Azure subscriptions If the two circuits aren't in the same Azure subscription, you'll need authorization. In the following configuration, authorization is generated from circuit 2's subscription. The authorization key is then passed to circuit 1.
If the two circuits aren't in the same Azure subscription, you'll need authoriza
Make a note of the circuit resource ID of circuit 2 and the authorization key.
-1. Select the **Azure private** peering configuration.
+1. Select the **Overview** tab of ExpressRoute circuit 1. Select **Add Global Reach** to open the *Add Global Reach* configuration page.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/expressroute-circuit-private-peering.png" alt-text="Screenshot of private peering on overview page.":::
+ :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/overview.png" alt-text="Screenshot of Global Reach button on the overview page.":::
-1. Select **Add Global Reach** to open the *Add Global Reach* configuration page.
+1. On the *Add Global Reach* configuration page, give a name to this configuration. Check the **Redeem authorization** box. Enter the **Authorization Key** and the **ExpressRoute circuit ID** generated and obtained in Step 1. Then provide a **/29 IPv4** subnet for the *Global Reach IPv4 subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. Don't use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/private-peering-enable-global-reach.png" alt-text="Screenshot of add Global Reach in private peering.":::
-
-1. On the *Add Global Reach* configuration page, give a name to this configuration. Check the **Redeem authorization** box. Enter the **Authorization Key** and the **ExpressRoute circuit ID** generated and obtained in Step 1. Then provide a **/29 IPv4** for the *Global Reach subnet*. We use IP addresses in this subnet to establish connectivity between the two ExpressRoute circuits. DonΓÇÖt use the addresses in this subnet in your Azure virtual networks, or in your on-premises network. Select **Add** to add the circuit to the private peering configuration.
+ > [!NOTE]
+ > IPv6 support for ExpressRoute Global Reach is now in Public Preview. If you want to enable this feature for test workloads, select "Both" for the *Subnets* field and include a **/125 IPv6** subnet for the *Global Reach IPv6 subnet*.
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/add-global-reach-configuration-with-authorization.png" alt-text="Screenshot of Add Global Reach with authorization key."::: 1. Select **Save** to complete the Global Reach configuration. When the operation completes, you'll have connectivity between your two on-premises networks through both ExpressRoute circuits.
- :::image type="content" source="./media/expressroute-howto-set-global-reach-portal/save-private-peering-configuration.png" alt-text="Screenshot of saving private peering configuration with Global Reach.":::
- ## Verify the configuration
-Verify the Global Reach configuration by selecting *Private peering* under the ExpressRoute circuit configuration. When configured correctly your configuration should look as followed:
+Verify the Global Reach configuration by reviewing the list of Global Reach connections in the **Overview** tab of your ExpressRoute circuit. When configured correctly your configuration should look as follows:
:::image type="content" source="./media/expressroute-howto-set-global-reach-portal/verify-global-reach-configuration.png" alt-text="Screenshot of Global Reach configured."::: ## Disable connectivity
-To disable connectivity between an individual circuit, select the delete button next to the *Global Reach name* to remove connectivity between them. Then select **Save** to complete the operation.
-
+To disable connectivity for an individual Global Reach connection, select the delete button to the right of the connection. Then select **Save** to complete the operation.
After the operation is complete, you no longer have connectivity between your on-premises network through your ExpressRoute circuits.
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --role Contributor
+ SP_ID=$(az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --query objectId -o tsv)
+ az role assignment create --assignee $SP_ID --role Contributor
``` #### Grant Azure Front Door access to your key vault
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
Once your request is approved, a private IP address gets assigned from the Azure
## Region availability
-Azure Front Door private endpoints are available in the following regions:
+Azure Front Door private link is available in the following regions:
| Americas | Europe | Asia Pacific | |--|--|--|
frontdoor How To Enable Private Link Internal Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-internal-load-balancer.md
In this section, you'll map the Private Link service to a private endpoint creat
| Setting | Value | | - | -- |
- | Name | Enter a name to identify this storage blog origin. |
- | Origin Type | Storage (Azure Blobs) |
+ | Name | Enter a name to identify this custom origin. |
+ | Origin Type | Custom |
| Host name | Select the host from the dropdown that you want as an origin. | | Origin host header | You can customize the host header of the origin or leave it as default. | | HTTP port | 80 (default) |
In this section, you'll map the Private Link service to a private endpoint creat
| Priority | Different origins can have different priorities to provide primary, secondary, and backup origins. | | Weight | 1000 (default). Assign weights to your different origins when you want to distribute traffic.| | Resource | If you select **In my directory**, specify the ILB resource in your subscription. |
- | ID/alias | If you select **By ID or alias**, specify the resource ID of the ILB resource you want to enable private link to. |
+ | ID/alias | If you select **By ID or alias**, specify the resource ID of the Private Link Service resource you want to enable private link to. |
| Region | Select the region that is the same or closest to your origin. | | Request message | Customize message or choose the default. |
governance Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/caf-foundation/index.md
estate. This sample will deploy and enforce resources, policies, and templates t
organization to confidently get started with Azure. :::image type="complex" source="../../media/caf-blueprints/caf-foundation-architecture.png" alt-text="C A F Foundation, image describes what gets installed as part of C A F guidance for creating a foundation to get started with Azure." border="false":::
- Describes an Azure architecture which is achieved by deploying the C A F Foundation blueprint. It's applicable to a subscription with resource groups which consists of a storage account for storing logs, Log Analytics configured to store in the storage account. It also depicts Azure Key Vault configured with Azure Security Center standard setup. All these core infrastructures are accessed using Azure Active Directory and enforced using Azure Policy.
+ Describes an Azure architecture that's achieved by deploying the C A F Foundation blueprint. It's applicable to a subscription with resource groups that consist of a storage account for storing logs and Log Analytics configured to store data in that storage account. It also depicts Azure Key Vault configured with the Microsoft Defender for Cloud standard setup. All this core infrastructure is accessed using Azure Active Directory and enforced using Azure Policy.
:::image-end::: This implementation incorporates several Azure services used to provide a secure, fully monitored,
enterprise-ready foundation. This environment is composed of:
- Deploy [Log Analytics](../../../../azure-monitor/overview.md) to ensure all actions and services log to a central location from the moment you start your secure deployment, into [Storage Accounts](../../../../storage/common/storage-introduction.md) for diagnostic logging-- Deploy [Azure Security Center](../../../../security-center/security-center-introduction.md) (standard
+- Deploy [Microsoft Defender for Cloud](../../../../security-center/security-center-introduction.md) (standard
version) to provide threat protection for your migrated workloads - The blueprint also defines and deploys [Azure Policy](../../../policy/overview.md) definitions: - Policy definitions:
enterprise-ready foundation. This environment is composed of:
- Require Azure Storage Account Secure transfer Encryption - Deny resource types (choose while deploying) - Policy initiatives:
- - Enable Monitoring in Azure Security Center (100+ policy definitions)
+ - Enable Monitoring in Microsoft Defender for Cloud (100+ policy definitions)
All these elements abide to the proven practices published in the [Azure Architecture Center - Reference Architectures](/azure/architecture/reference-architectures/).
hdinsight Apache Ambari Troubleshoot Directory Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-directory-alerts.md
Title: Apache Ambari directory alerts in Azure HDInsight
description: Discussion and analysis of possible reasons and solutions for Apache Ambari directory alerts in HDInsight. Previously updated : 01/22/2020 Last updated : 04/06/2022 # Scenario: Apache Ambari directory alerts in Azure HDInsight
Manually create missing directories on the affected worker node(s).
## Next steps
hdinsight Apache Ambari Troubleshoot Down Hosts Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-down-hosts-services.md
Title: Apache Ambari UI shows down hosts and services in Azure HDInsight
description: Troubleshooting an Apache Ambari UI issue when it shows down hosts and services in Azure HDInsight Previously updated : 08/02/2019 Last updated : 04/06/2022 # Scenario: Apache Ambari UI shows down hosts and services in Azure HDInsight
Usually rebooting the active headnode will mitigate this issue. If not please co
## Next steps
hdinsight Apache Ambari Troubleshoot Fivezerotwo Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-fivezerotwo-error.md
Title: Apache Ambari UI 502 error in Azure HDInsight
description: Apache Ambari UI 502 error when you try to access your Azure HDInsight cluster Previously updated : 08/05/2019 Last updated : 04/06/2022 # Scenario: Apache Ambari UI 502 error in Azure HDInsight
Error Processing URI: /api/v1/clusters/xxxxxx/host_components - (java.lang.OutOf
## Next steps
hdinsight Apache Ambari Troubleshoot Heartbeat Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-heartbeat-issues.md
Title: Apache Ambari heartbeat issues in Azure HDInsight
description: Review of various reasons for Apache Ambari heartbeat issues in Azure HDInsight Previously updated : 02/06/2020 Last updated : 04/06/2022 # Apache Ambari heartbeat issues in Azure HDInsight
OMS logs are causing high CPU utilization.
## Next steps
hdinsight Apache Ambari Troubleshoot Stale Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-stale-alerts.md
Title: Apache Ambari stale alerts in Azure HDInsight
description: Discussion and analysis of possible reasons and solutions for Apache Ambari stale alerts in HDInsight. Previously updated : 01/22/2020 Last updated : 04/06/2022 # Scenario: Apache Ambari stale alerts in Azure HDInsight
If your problem wasn't mentioned here or you're unable to solve it, visit one of
* If you need more help, submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). To get there, select Help (**?**) from the portal menu or open the **Help + support** pane. For more information, see [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
- Support for subscription management and billing is included with your Microsoft Azure subscription. Technical support is available through the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
+ Support for subscription management and billing is included with your Microsoft Azure subscription. Technical support is available through the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Interactive Query Troubleshoot Tez View Slow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-view-slow.md
Title: Apache Ambari Tez View loads slowly in Azure HDInsight
description: Apache Ambari Tez View may load slowly or may not load at all in Azure HDInsight Previously updated : 07/30/2019 Last updated : 04/06/2022 # Scenario: Apache Ambari Tez View loads slowly in Azure HDInsight
This is an issue that has been fixed in Oct 2017. Recreating your cluster will m
## Next steps
healthcare-apis Events Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md
Previously updated : 03/02/2022 Last updated : 04/06/2022
Events are a notification and subscription feature in the Azure Health Data Serv
> > For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md) > [!IMPORTANT] >
iot-central Concepts Faq Start Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-start-iot-central.md
You should start your IoT journey with Azure IoT Central. Starting as high as possible in the Azure IoT technology stack lets you focus your time on using IoT data to create business value instead of simply getting your IoT data.
+The following three-minute video explains why you should start with IoT Central:
+
+> [!VIDEO https://aka.ms/docs/player?id=1c6d5f00-a66f-4ff9-a681-4e04614d70b4]
+ ## Start with Azure IoT Central An application platform as a service (aPaaS) streamlines many of the complex decisions you face when you build an IoT solution. Many IoT projects are de-funded because of early-stage requirements in simply getting IoT data. Use the capabilities and experiences in IoT Central to showcase the value of your IoT data without overburdening yourself with building the infrastructure for device connectivity and management.
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data.md
Blob Storage destinations let you configure the connection with a *connection st
[!INCLUDE [iot-central-managed-identities](../../../includes/iot-central-managed-identities.md)]
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
+ # [Service Bus](#tab/service-bus) Both queues and topics are supported for Azure Service Bus destinations.
Service Bus destinations let you configure the connection with a *connection str
[!INCLUDE [iot-central-managed-identities](../../../includes/iot-central-managed-identities.md)]
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
+ # [Event Hubs](#tab/event-hubs) IoT Central exports data in near real time. The data is in the message body and is in JSON format encoded as UTF-8.
Event Hubs destinations let you configure the connection with a *connection stri
[!INCLUDE [iot-central-managed-identities](../../../includes/iot-central-managed-identities.md)]
+This article shows how to create a managed identity in the Azure portal. You can also use the Azure CLI to create a managed identity. To learn more, see [Assign a managed identity access to a resource using Azure CLI](../../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md).
+ # [Azure Data Explorer](#tab/data-explorer) You can use an [Azure Data Explorer cluster](/azure/data-explorer/data-explorer-overview) or an [Azure Synapse Data Explorer pool](../../synapse-analytics/data-explorer/data-explorer-overview.md). To learn more, see [What is the difference between Azure Synapse Data Explorer and Azure Data Explorer?](../..//synapse-analytics/data-explorer/data-explorer-compare.md).
IoT Central exports data in near real time to a database table in the Azure Data
To query the exported data in the Azure Data Explorer portal, navigate to the database and select **Query**.
-### Connection options
-
-Azure Data Explorer destinations let you configure the connection with a *service principal* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
-
-Managed identities are more secure because:
+The following video walks you through exporting data to Azure Data Explorer:
-- You don't store the credentials for your resource in your IoT Central application.-- The credentials are automatically tied to the lifetime of your IoT Central application.-- Managed identities automatically rotate their security keys regularly.
+> [!VIDEO https://aka.ms/docs/player?id=9e0c0e58-2753-42f5-a353-8ae602173d9b]
-IoT Central currently uses [system-assigned managed identities](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+### Connection options
-When you configure a managed identity, the configuration includes a *scope* and a *role*:
+Azure Data Explorer destinations let you configure the connection with a *service principal* or a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md).
-- The scope defines where you can use the managed identity.- The role defines what permissions the IoT Central application is granted in the destination service. This article shows how to create a managed identity using the Azure CLI. You can also use the Azure portal to create a managed identity.
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
To view an individual device:
## Monitor your devices
-USe the **Devices** page to monitor and manage your devices
+Use the **Devices** page to monitor and manage your devices.
+
+The following video walks you through monitoring device connectivity status:
+
+> [!VIDEO https://aka.ms/docs/player?id=75d0de58-9cc0-4505-9fa1-a0a7da8bb466]
### Device status values
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
Data mapping lets you transform complex device telemetry into structured data in
:::image type="content" source="media/howto-map-data/map-data-summary.png" alt-text="Diagram that summarizes the mapping process in IoT Central." border="false":::
+The following video walks you through the data mapping process:
+
+> [!VIDEO https://aka.ms/docs/player?id=d8e684a7-deda-47d1-9d6c-36939adc57bb]
+ ## Map telemetry for your device A mapping uses a [JSONPath](https://www.npmjs.com/package/jsonpath) expression to identify the value in an incoming telemetry message to map to an alias.
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
Transformations in an IoT Central data export definition let you manipulate the
Use transformations to restructure JSON payloads, rename fields, filter out fields, and run simple calculations on telemetry values. For example, use a transformation to convert your messages into a tabular format that matches the schema of a destination such as an Azure Data Explorer table.
+The following video introduces you to IoT Central data transformations:
+
+> [!VIDEO https://aka.ms/docs/player?id=f1752a73-89e6-42c2-8298-e9d6ce212daa]
+ ## Add a transformation To add a transformation for a destination in your data export, select **+ Transform** as shown in the following screenshot:
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
This article provides an overview of the features of Azure IoT Central.
## Create an IoT Central application
-[Quickly deploy a new IoT Central application](quick-deploy-iot-central.md) and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use application templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
+You can quickly deploy a new IoT Central application and then customize it to your specific requirements. Application templates in Azure IoT Central are a tool to help you kickstart your IoT solution development. You can use application templates for everything from getting a feel for what is possible, to fully customizing your application to resell to your customers.
Start with a generic _application template_ or with one of the industry-focused application templates:
Start with a generic _application template_ or with one of the industry-focused
- [Government](../government/tutorial-connected-waste-management.md) - [Healthcare](../healthcare/tutorial-continuous-patient-monitoring.md)
-See the [Create a new application](quick-deploy-iot-central.md) quickstart for a walk-through of how to create your first application.
+See the [Use your smartphone as a device to send telemetry to an IoT Central application](quick-deploy-iot-central.md) quickstart to learn how to create your first application and connect a device.
## Connect devices
-After you create your application, the next step is to create and connect devices. Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
+After you create your application, the next step is to create and connect devices. The following video walks you through the process of connecting a device to an IoT Central application:
+
+> [!VIDEO https://aka.ms/docs/player?id=66834fbb-7006-4f2b-b73f-540239fd2784]
+
+Every device connected to IoT Central uses a _device template_. A device template is the blueprint that defines the characteristics and behavior of a type of device such as the:
- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data. - Business properties that an operator can modify. Examples include a customer address and a last serviced date.
iot-edge How To Provision Devices At Scale Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-symmetric.md
Have the following information ready:
method = "symmetric_key" registration_id = "PASTE_YOUR_REGISTRATION_ID_HERE"
- symmetric_key = "PASTE_YOUR_PRIMARY_KEY_OR_DERIVED_KEY_HERE"
+ symmetric_key = { value = "PASTE_YOUR_PRIMARY_KEY_OR_DERIVED_KEY_HERE" }
``` 1. Update the values of `id_scope`, `registration_id`, and `symmetric_key` with your DPS and device information.
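
If the device is part of a DPS enrollment group rather than an individual enrollment, the symmetric key pasted above is typically a per-device key derived from the group key. A minimal PowerShell sketch of that standard HMAC-SHA256 derivation, using placeholder values, looks like this:

```powershell
# Placeholder values; replace with your enrollment group primary key and device registration ID.
$enrollmentGroupKey = "PASTE_YOUR_ENROLLMENT_GROUP_PRIMARY_KEY_HERE"
$registrationId = "PASTE_YOUR_REGISTRATION_ID_HERE"

# Derived device key = Base64(HMAC-SHA256(groupKey, registrationId))
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($enrollmentGroupKey)
$derivedKey = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($registrationId)))
$derivedKey
```

Paste the resulting value as the `symmetric_key` value in the config file shown above.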
iot-edge Iot Edge As Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-as-gateway.md
+ # How an IoT Edge device can be used as a gateway [!INCLUDE [iot-edge-version-201806-or-202011](../../includes/iot-edge-version-201806-or-202011.md)] IoT Edge devices can operate as gateways, providing a connection between other devices on the network and IoT Hub.
-The IoT Edge hub module acts like IoT Hub, so can handle connections from other devices that have an identity with the same IoT hub. This type of gateway pattern is called *transparent* because messages can pass from downstream devices to IoT Hub as though there were not a gateway between them.
+The IoT Edge hub module acts like IoT Hub, so it can handle connections from other devices that have an identity with the same IoT hub. This type of gateway pattern is called *transparent* because messages can pass from downstream devices to IoT Hub as though there were not a gateway between them.
For devices that don't or can't connect to IoT Hub on their own, IoT Edge gateways can provide that connection. This type of gateway pattern is called *translation* because the IoT Edge device has to perform processing on incoming downstream device messages before they can be forwarded to IoT Hub. These scenarios require additional modules on the IoT Edge gateway to handle the processing steps.
In the transparent gateway pattern, devices that theoretically could connect to
For more information about how the IoT Edge hub manages communication between downstream devices and the cloud, see [Understand the Azure IoT Edge runtime and its architecture](iot-edge-runtime.md). <!-- 1.1 -->+ ::: moniker range="iotedge-2018-06" ![Diagram - Transparent gateway pattern](./media/iot-edge-as-gateway/edge-as-gateway-transparent.png)
->[!NOTE]
->In IoT Edge version 1.1 and older, IoT Edge devices cannot be downstream of an IoT Edge gateway.
+> [!NOTE]
+> In IoT Edge version 1.1 and older, IoT Edge devices cannot be downstream of an IoT Edge gateway.
>
->Beginning with version 1.2 of IoT Edge, transparent gateways can handle connections from downstream IoT Edge devices. For more information, switch to the [IoT Edge 1.2](?view=iotedge-2020-11&preserve-view=true) version of this article.
+> Beginning with version 1.2 of IoT Edge, transparent gateways can handle connections from downstream IoT Edge devices. For more information, switch to the [IoT Edge 1.2](?view=iotedge-2020-11&preserve-view=true) version of this article.
::: moniker-end <!-- 1.2 -->+ ::: moniker range=">=iotedge-2020-11" Beginning with version 1.2 of IoT Edge, transparent gateways can handle connections from downstream IoT Edge devices.
The parent/child relationship is established at three points in the gateway conf
All devices in a transparent gateway scenario need cloud identities so they can authenticate to IoT Hub. When you create or update a device identity, you can set the device's parent or child devices. This configuration authorizes the parent gateway device to handle authentication for its child devices.
->[!NOTE]
->Setting the parent device in IoT Hub used to be an optional step for downstream devices that use symmetric key authentication. However, starting with version 1.1.0 every downstream device must be assigned to a parent device.
+> [!NOTE]
+> Setting the parent device in IoT Hub used to be an optional step for downstream devices that use symmetric key authentication. However, starting with version 1.1.0 every downstream device must be assigned to a parent device.
>
->You can configure the IoT Edge hub to go back to the previous behavior by setting the environment variable **AuthenticationMode** to the value **CloudAndScope**.
+> You can configure the IoT Edge hub to go back to the previous behavior by setting the environment variable **AuthenticationMode** to the value **CloudAndScope**.
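As an illustration of how the parent/child relationship can be set from the command line, the following sketch assumes the Azure CLI with the azure-iot extension and uses placeholder hub and device names:

```bash
# Create a cloud identity for the downstream (child) device
az iot hub device-identity create --hub-name contoso-hub --device-id downstream-device-01

# Authorize the IoT Edge gateway device to act as its parent
az iot hub device-identity parent set --hub-name contoso-hub --device-id downstream-device-01 --parent-device-id edge-gateway-01
```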
Child devices can only have one parent. By default, a parent can have up to 100 children. You can change this limit by setting the **MaxConnectedClients** environment variable in the parent device's edgeHub module. <!-- 1.2.0 -->+ ::: moniker range=">=iotedge-2020-11" IoT Edge devices can be both parents and children in transparent gateway relationships. A hierarchy of multiple IoT Edge devices reporting to each other can be created. The top node of a gateway hierarchy can have up to five generations of children. For example, an IoT Edge device can have five layers of IoT Edge devices linked as children below it. But the IoT Edge device in the fifth generation cannot have any children, IoT Edge or otherwise. ::: moniker-end
A child device needs to be able to find its parent device on the local network.
On downstream IoT devices, use the **gatewayHostname** parameter in the connection string to point to the parent device. <!-- 1.2.0 -->+ ::: moniker range=">=iotedge-2020-11" On downstream IoT Edge devices, use the **parent_hostname** parameter in the config file to point to the parent device. ::: moniker-end
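For example, a downstream IoT device's connection string might look like the following sketch, where the hub name, device ID, key, and gateway hostname are all placeholders and the gateway hostname must match the name used in the gateway's certificates:

```bash
# Downstream device connection string that routes through the IoT Edge gateway
CONNECTION_STRING="HostName=contoso-hub.azure-devices.net;DeviceId=downstream-device-01;SharedAccessKey=<device-key>;GatewayHostName=edge-gateway.contoso.com"
```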
On downstream IoT Edge devices, use the **parent_hostname** parameter in the con
Parent and child devices also need to authenticate their connections to each other. Each device needs a copy of a shared root CA certificate which the child devices use to verify that they are connecting to the proper gateway. <!-- 1.2.0 -->+ ::: moniker range=">=iotedge-2020-11" When multiple IoT Edge gateways connect to each other in a gateway hierarchy, all the devices in the hierarchy should use a single certificate chain. ::: moniker-end
All IoT Hub primitives that work with IoT Edge's messaging pipeline also support
Use the following table to see how different IoT Hub capabilities are supported for devices compared to devices behind gateways. <!-- 1.1 -->+ ::: moniker range="iotedge-2018-06" | Capability | IoT device | IoT behind a gateway |
Use the following table to see how different IoT Hub capabilities are supported
::: moniker-end <!-- 1.2.0 -->+ ::: moniker range=">=iotedge-2020-11" | Capability | IoT device | IoT behind a gateway | IoT Edge device | IoT Edge behind a gateway |
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine comes with the most useful data-science tools p
| Dlib | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
| Docker | <span class='green-check'>&#9989;</span> <br/> (Windows containers only) | <span class='green-check'>&#9989;</span> | |
| Nccl | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
-| Rattle | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
+| Rattle | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
| ONNX Runtime | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
The Data Science Virtual Machine comes with the most useful data-science tools p
| [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
| [Azure CLI](/cli/azure) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
| [AzCopy](../../storage/common/storage-use-azcopy-v10.md) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span> | [AzCopy on the DSVM](./dsvm-tools-ingestion.md#azcopy) |
-| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
+| [Blob FUSE driver](https://github.com/Azure/azure-storage-fuse) | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span></br> | [blobfuse on the DSVM](./dsvm-tools-ingestion.md#blobfuse) |
| [Azure Cosmos DB Data Migration Tool](../../cosmos-db/import-data.md) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | [Cosmos DB on the DSVM](./dsvm-tools-ingestion.md#azure-cosmos-db-data-migration-tool) |
| Unix/Linux command-line tools | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | |
| Apache Spark 3.1 (standalone) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | |
The Data Science Virtual Machine comes with the most useful data-science tools p
| &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
**Ubuntu 18.04 DSVM and Windows Server 2019 DSVM** has the following Jupyter Kernels:-</br>
-* Python 3.8 - default</br>
-* Python 3.8 - PyTorch</br>
-* Python 3.8 - TensorFlow</br>
-* Python 3.6 - AzureML - TensorFlow</br>
-* Python 3.6 - AzureML - PyTorch</br>
-* Python 3.6 - AzureML – AutoML</br>
+* Python3.8-default</br>
+* Python3.8-Tensorflow-Pytorch</br>
+* Python3.8-AzureML</br>
* R</br>
* Python 3.7 - Spark (local)</br>
-* Julia 1.2.0</br>
+* Julia 1.6.0</br>
* R Spark – HDInsight</br>
* Scala Spark – HDInsight</br>
* Python 3 Spark – HDInsight</br>
**Ubuntu 18.04 DSVM and Windows Server 2019 DSVM** has the following conda environments:-</br>
-* py38_default</br>
-* py38_tensorflow</br>
-* py38_pytorch</br>
-* azureml_py36_tensorflow</br>
-* azureml_py36_pytorch</br>
-* azureml_py36_automl</br>
-
+* Python3.8-default</br>
+* Python3.8-Tensorflow-Pytorch</br>
+* Python3.8-AzureML</br>
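To see which conda environments are actually present on a given DSVM image and switch between them, you can list and activate them from a terminal. The environment name below is a placeholder; use one reported by the list command:

```bash
# List the conda environments installed on the DSVM
conda env list

# Activate one of the listed environments (name is illustrative)
conda activate azureml_py38
```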
## Use your preferred editor or IDE
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
Once you've configured ACR without admin user as described earlier, you can acce
## Create workspace with user-assigned managed identity
-When creating workspace, you can specify a user-assigned managed identity that will be used to access the associated resources: ACR, KeyVault, Storage, and App Insights.
+When creating a workspace, you can bring your own [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md) that will be used to access the associated resources: ACR, KeyVault, Storage, and App Insights.
-First [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-cli.md), and take note of the ARM resource ID of the managed identity.
+> [!IMPORTANT]
+> When creating a workspace with a user-assigned managed identity, you must create the associated resources yourself, and grant the managed identity roles on those resources. Use the [role assignment ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) to make the assignments.
-Then, use Azure CLI or Python SDK to create the workspace. When using the CLI, specify the ID using the `--primary-user-assigned-identity` parameter. When using the SDK, use `primary_user_assigned_identity`. The following are examples of using the Azure CLI and Python to create a new workspace using these parameters:
+Use Azure CLI or Python SDK to create the workspace. When using the CLI, specify the ID using the `--primary-user-assigned-identity` parameter. When using the SDK, use `primary_user_assigned_identity`. The following are examples of using the Azure CLI and Python to create a new workspace using these parameters:
__Azure CLI__
ws = Workspace.create(name="workspace name",
You can also use [an ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/) to create a workspace with user-assigned managed identity.
-> [!IMPORTANT]
-> If you bring your own associated resources, instead of having Azure Machine Learning service create them, you must grant the managed identity roles on those resources. Use the [role assignment ARM template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices/machine-learning-dependencies-role-assignment) to make the assignments.
- For a workspace with [customer-managed keys for encryption](concept-data-encryption.md), you can pass in a user-assigned managed identity to authenticate from storage to Key Vault. Use argument __user-assigned-identity-for-cmk-encryption__ (CLI) or __user_assigned_identity_for_cmk_encryption__ (SDK) to pass in the managed identity. This managed identity can be the same or different as the workspace primary user assigned managed identity.
-If you have an existing workspace, you can update it from system-assigned to user-assigned managed identity using ```az ml workspace update``` CLI command, or ```Workspace.update``` Python SDK method.
-- ## Next steps * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md)
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Previously updated : 04/04/2022 Last updated : 04/06/2022
In this tutorial, you accomplish the following tasks:
> [!TIP] > If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md).
-> [!IMPORTANT]
-> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot use Azure Container Instances inside the VNet for deploying models. For more information, see [Secure the inference environment](how-to-secure-inferencing-vnet.md).
- ## Prerequisites * Familiarity with Azure Virtual Networks and IP networking. If you are not familiar, try the [Fundamentals of computer networking](/learn/modules/network-fundamentals/) module. * While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2.
+## Limitations
+
+The steps in this article put Azure Container Registry behind the VNet. In this configuration, you can't deploy models to Azure Container Instances inside the VNet. For more information, see [Secure the inference environment](how-to-secure-inferencing-vnet.md).
## Create a virtual network To create a virtual network, use the following steps:
When Azure Container Registry is behind the virtual network, Azure Machine Learn
## Use the workspace
+> [!IMPORTANT]
+> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment](how-to-secure-inferencing-vnet.md).
+ At this point, you can use studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [run a Python script](tutorial-1st-experiment-hello-world.md). ## Stop compute instance and jump box
marketplace Isv Csp Reseller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-csp-reseller.md
Previously updated : 02/10/2022 Last updated : 04/06/2022 # ISV to CSP partner private offers
Private offers can be created for all transactable marketplace offer types. This
## Private offers dashboard
-The **Private offers** dashboard in the left-nav menu of Partner Center is your centralized location to create and manage private offers. This dashboard has two tabs:
+Create and manage private offers from the **Private offers** dashboard in Partner Center's left-nav menu. This dashboard has two tabs:
-- **Customers**: *Coming soon*
-- **CSP partners**: Opens the CSP partners private offer dashboard, which lets you:
+- **Customers**: Create a private offer for a customer in Azure Marketplace. See [ISV to customer private offers](isv-customer.md).
+- **CSP partners**: Create a private offer for a CSP partner in Azure Marketplace. This opens the **CSP Partner** private offer dashboard, which lets you:
  - Create new private offers
  - View the status of all your private offers
  - Clone existing private offers
  - Withdraw private offers
  - Delete private offers
-## Create a new Private Offer for CSP partners
+## Create a private offer for a CSP partner
1. Sign in to Partner Center.
2. Select **Private offers** from the left-nav menu to open the dashboard.
The **Private offers** dashboard in the left-nav menu of Partner Center is your
4. Select **+ New Private offer**.
5. Enter a private offer name. This is a descriptive name that you will use to refer to your private offer within Partner Center. This name will not be visible to CSP partners.
-### Offer Setup
+### Offer setup
The offer setup page lets you define private offer terms, notification contact, pricing, and CSP partners.
The offer setup page lets you define private offer terms, notification contact,
   - To have your private offer start in an upcoming month, select **Specific month** and make a selection. The start date for this option will always be the first of the month.
   - Choose the month for your private offer's **End date**. This will always be the last date of the month.
-2. Provide up to five emails as **Notification Contacts** to receive email updates on the status of your private offer. These emails are sent when your private offer moves to **Live**, **Ended** or is **Withdrawn**.
+2. Provide up to five emails as **Notification Contacts** to receive email updates on the status of your private offer. These emails are sent when your private offer moves to **Live**, **Ended**, or is **Withdrawn**.
-3. Configure a **Pricing** margin percentage for up to 10 offers/plans in a private offer. The margin the CSP partner receives will be a percentage off your plan's list price in the marketplace.
+3. Configure the percentage-based margins or absolute **Pricing** for up to ten offers/plans in a private offer.
- - Select **+ Add Offers/plans** to choose the offers/plans you want to provide a private offer for.
- - Choose to provide a private offer at an offer level (all current and future plans under that offer will have a private price associated with it) or at a plan level (only the plan you selected will have a private price associated with it).
- - Choose up to 10 offers/plans and then **Add**.
- - Enter the margin percentage to provide to your CSP partners. The margin you provide will be calculated as a percentage off your plan's list price in the marketplace.
+ - *Percentage-based* margin can be given at an offer level so it applies to all plans within the offer, or it can be given only for a specific plan. The margin the CSP partner receives will be a percentage off your plan's list price in the marketplace.
+ - *Absolute price* can be used to specify a price point higher, lower, or equal to the publicly listed plan price; it can only be applied at a plan level and does not apply to Virtual Machine offer types. You can only customize the price based on the same pricing model, billing term, and dimensions of the public offer; you cannot change to a new pricing model or billing term or add dimensions.<br><br>
+
+ 1. Select **+ Add Offers/plans** to choose the offers/plans you want to provide a private offer for.
+ 1. Choose to provide a custom price or margin at either an offer level (all current and future plans under that offer will have a margin associated to it) or at a plan level (only the plan you selected will have a private price associated with it).
+ 1. Choose up to ten offers/plans and select **Add**.
+ 1. Enter the margin percentage or configure the absolute price for each item in the pricing table.
> [!NOTE] > Only offers/plans that are transactable in Microsoft AppSource or Azure Marketplace appear in the selection menu.
The offer setup page lets you define private offer terms, notification contact,
3. Enter the customer's tenant ID. You can add up to 25 customers for the CSP partner, who will need to provide the customer tenant IDs.
4. Click **Add**.
-### Review and Submit
-
-This page is where you can review all the information you've provided.
+### Review and submit
-Once submitted, private offers cannot be modified. Ensure your information is accurate.
+This page is where you can review all the information you've provided. Once submitted, private offers cannot be modified. Ensure your information is accurate.
When you're ready, select **Submit**. You will be returned to the dashboard where you can view the status of your private offer. The notification contact(s) will receive an email once the private offer is live.
To view the status of your private offer:
The status of the private offer will be one of the following:
-- **Draft**: You have started the process of creating a private offer but have not yet submitted it.
-- **In Progress**: You have submitted a private offer and it is currently being published in our systems.
-- **Live**: Your private offer is discoverable and transactable by CSP partners.
-- **Ended**: Your private offer has expired or passed its end date.
+- **Draft** – You have started the process of creating a private offer but have not yet submitted it.
+- **In Progress** – You have submitted a private offer and it is currently being published in our systems.
+- **Live** – Your private offer is discoverable and transactable by CSP partners.
+- **Ended** – Your private offer has expired or passed its end date.
## Clone a private offer
While your private offer publish is in progress, you can view additional details
The additional details will be one of the following:
-- **CSP partner authorization in progress**: We are currently authorizing the given CSP partner to be able to sell your offer.
-- **Private offer publish in progress**: We are currently publishing the given CSP partner's private price.
-- **Live**: The private offer is now Live for this CSP partner.
+- **CSP partner authorization in progress** – We are currently authorizing the given CSP partner to be able to sell your offer.
+- **Private offer publish in progress** – We are currently publishing the given CSP partner's private price.
+- **Live** – The private offer is now Live for this CSP partner.
## Reporting on private offers
marketplace Isv Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md
+
+ Title: Configure ISV to customer private offers in Microsoft Partner Center for Azure Marketplace
+description: Configure ISV to customer private offers in Microsoft Partner Center for Azure Marketplace.
+++++ Last updated : 04/06/2022++
+# ISV to customer private offers
+
+Private offers allow publishers and customers to transact one or more products in Azure Marketplace by creating time-bound pricing with customized terms. This article explains the requirements and steps for a publisher to create a private offer for a customer in Azure Marketplace. Private offers aren't yet available in Microsoft AppSource.
+
+This is what the private offer experience looks like from the publisher's perspective:
++
+This is what the private offer experience looks like from the customer's perspective:
++
+## Benefits of private offers
+
+Private offers provide new deal-making capabilities to the marketplace that can't be achieved with private plans.
+
+- **Time-bound discount** – Specify a start/end date for the discounted price. When the private offer ends, customers fall back to the publicly listed price.
+- **Custom terms and contract upload** – Extend unique terms to each customer privately. By accepting your offer, the customer is accepting your terms. Attaching a PDF of your contract to the private offer is easy; no more plain text or amending the Microsoft Standard Contract.
+- **Send by email** – Rather than coaching customers on where to find their offer in the Azure portal, email customers a link directly to their private offer. Save time by sending this email to anyone in the customer's company responsible for accepting the offer.
+- **Deals expire** – Add urgency to the sales process by specifying the date by which the customer must accept the offer or it expires.
+- **Faster arrival** – Private offers are available for purchase within 15 minutes (private plans take up to 48 hours to arrive).
+- **Bundle discounts** – Select multiple products/plans to receive a discount; customers can accept the private offer for all of them at once.
+- **Target a company** – Private offers are sent to an organization, not a tenant.
+
+## Private offer prerequisites
+
+Creating a private offer for a customer has these prerequisites:
+
+- You've created a [commercial marketplace account](create-account.md) in Partner Center.
+- Your account is enrolled in the commercial marketplace program.
+- The offer you want to sell privately has been published to the marketplace and is publicly transactable.
+
+## Supported offer types
+
+Private Offers can be created for all transactable marketplace offer types: SaaS, Azure Virtual Machines, and Azure Applications.
+
+> [!NOTE]
+> Discounts are applied on all custom meter dimensions your offer may use. They are only applied on the software charges set by you, not on the associated Azure infrastructure hardware charges.
+
+## Private offers dashboard
+
+Create and manage private offers from the **Private Offers** dashboard in Partner Center's left-nav menu. This dashboard has two tabs:
+
+- **Customers** – Create a private offer for a customer in Azure Marketplace. This opens the Customers private offer dashboard, which lets you:
+
+ - Create new private offers
+ - View the status of all your private offers
+ - Clone existing private offers
+ - Withdraw private offers
+ - Delete private offers
+
+- **CSP Partners** – Create a private offer for a CSP partner. See [ISV to CSP partner private offers](isv-csp-reseller.md).
+
+ The **Customers** tab looks like this:
+
+ :::image type="content" source="media/isv-customer/customer-tab.png" alt-text="Shows the private offers customer tab in Partner Center.":::
+
+## Create a private offer for a customer
+
+1. Sign in to [Partner Center](https://partner.microsoft.com/dashboard/commercial-marketplace/overview).
+2. Select the **Marketplace offers** workspace.
+3. Select **Private Offers** from the left-nav menu.
+4. Select the **Customers** tab.
+5. Select **+ New Private Offer**.
+6. Enter a private offer name. This is a descriptive name for use within Partner Center and will be visible to your customer in the Azure portal.
+
+### Offer setup
+
+Use this page to define private offer terms, notification contacts, and pricing for your customer.
+
+- **Customer Information** – Specify the billing account for the customer receiving this private offer. This will only be available to the configured customer billing account, and the customer will need to be an owner, contributor, or signatory on the billing account to accept the offer.
+
+ > [!NOTE]
+  > Customers can find their billing account in the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization must have access to the billing account to see the ID in the Azure portal. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
+
+ :::image type="content" source="media/isv-customer/customer-properties.png" alt-text="Shows the offer Properties tab in Partner Center.":::
+
+- **Private offer terms** – Specify the duration, accept-by date, and terms:
+
+ - **Start date** – Choose **Accepted date** if you want the private offer to start as soon as the customer accepts it. If a private offer is extended to an existing customer of a Pay-as-you-go product, this will make the private price applicable for the entire month. To have your private offer start in an upcoming month, select **Specific month** and choose one. The start date for this option will always be the first day of the selected month.
+ - **End date** – Choose the month for your private offer's **End date**. This will always be the last day of the selected month.
+ - **Accept by** – Choose the expiration date for your private offer. Your customer must accept the private offer prior to this date.
+ - **Terms and conditions** – Optionally, upload a PDF with terms and conditions your customer must accept as part of the private offer.
+
+ > [!NOTE]
+ > Your terms and conditions must adhere to Microsoft supported billing models, offer types, and the [Microsoft Publisher Agreement](https://aka.ms/PrivateOfferPublisherAgreement).
+
+- **Notification Contacts** – Provide up to five emails in your organization as **Notification Contacts** to receive email updates on the status of your private offer. These emails are sent when your offer status changes to **Pending acceptance**, **Accepted**, or **Expired**. You must also provide a **Prepared by** email address, which will be displayed to the customer in the private offer listing in the Azure portal.
+
+- **Pricing** – Configure the percentage-based discount or absolute price for up to 10 offers/plans in a private offer. For a percentage-based discount, the customer will receive this discount off your plan's list price in the marketplace.
+
+ - Select **+ Add Offers/plans** to choose the offers/plans you want to provide a private offer for.
+ - Choose to provide a custom price or discount at either an offer level (all current and future plans under that offer will have a discount associated to it) or at a plan level (only the plan you selected will have a private price associated with it).
+ - Choose up to 10 offers/plans and select **Add**.
+ - Enter the discount percentage or configure the absolute price for each item in the pricing table.
+
+ > [!NOTE]
+ > Only public offers/plans that are transactable in Microsoft Azure Marketplace appear in the selection menu.
+
+### Review and submit
+
+Use this page to review the information you've provided. Once submitted, a private offer is locked for edits. You can still withdraw a private offer while it's pending acceptance by the customer.
+
+When you're ready, select **Submit**. You'll be returned to the dashboard where you can view the offer's status. The notification contact(s) will be emailed once the offer is ready to be shared with your customer.
+
+> [!NOTE]
+> Microsoft will not send an email to your customer. You can copy the private offer link and share it with your customer for acceptance. Your customer will also be able to see the private offer under the **Private Offer Management** blade in the Azure portal.
+
+## Clone a private offer
+
+You can clone an existing offer and update its customer information to send it to different customers so you don't have to start from scratch. Or, update the offer/plan pricing to send additional discounts to the same customer.
+
+1. Select **Private Offers** from the left-nav menu.
+2. Select the **Customers** tab.
+3. Check the box of the private offer to clone.
+4. Select **Clone**.
+5. Enter a new private offer name.
+6. Select **Clone**.
+7. Edit the details on the **Offer setup** page as needed.
+8. **Submit** the new private offer.
+
+## Withdraw a private offer
+
+Withdrawing a private offer means your customer will no longer be able to access it. A private offer can only be withdrawn if your customer hasn't accepted it.
+
+To withdraw a private offer:
+
+1. Select **Private Offers** from the left-nav menu.
+2. Select the **Customers** tab.
+3. Check the box of the private offer to withdraw.
+4. Select **Withdraw**.
+5. Select **Request withdraw**.
+6. Your offer status will be updated to **Draft** and can now be edited, if desired.
+
+Once you withdraw a private offer, your customer will no longer be able to access it in the commercial marketplace.
+
+## Delete a private offer
+
+To delete a private offer in **Draft** status:
+
+1. Select **Private Offers** from the left-nav menu.
+2. Select the **Customers** tab.
+3. Check the box of the private offer to delete.
+4. Select **Delete**.
+5. Select **Confirm**.
+
+This action will permanently delete your private offer. You can only delete private offers in **Draft** status.
+
+## View private offer status
+
+To view the status of a private offer:
+
+1. Select **Private Offers** from the left-nav menu.
+2. Select the **Customers** tab.
+3. Check the **Status** column.
+
+The status of the private offer will be one of the following:
+
+- **Draft** – You've started the process of creating a private offer but haven't submitted it yet.
+- **In Progress** – A private offer you submitted is currently being published; this can take up to 15 minutes.
+- **Pending acceptance** – Your private offer is pending customer acceptance. Ensure you've sent the private offer link to your customer.
+- **Accepted** – Your private offer was accepted by your customer. Once accepted, the private offer can't be changed.
+- **Expired** – Your private offer expired before the customer accepted it. You can withdraw the private offer to make changes and submit it again.
+- **Ended** – Your private offer has passed its end date.
+
+## Reporting on private offers
+
+The payout amount and agency fee that Microsoft charges are based on the private price after the percentage-based discount or absolute price was applied to the products in your private offer.
+
+## Next steps
+
+- [Frequently Asked Questions](isv-customer-faq.yml) about configuring ISV to customer private offers
mysql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-supported-versions.md
In Azure Database for MySQL service, gateway nodes listens on port 3308 for v5.7
| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
|:-|:-|:|
|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html) (Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.21](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html)|
+|MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-32.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
Read the version support policy for retired versions in [version support policy documentation.](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql)
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
Azure Database for MySQL currently supports the following major and minor versio
| Version | [Single Server](overview.md) <br/> Current minor version |[Flexible Server](./flexible-server/overview.md) <br/> Current minor version |
|:-|:-|:|
|MySQL Version 5.6 | [5.6.47](https://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-47.html)(Retired) | Not supported|
-|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html)|
-|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.21](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html)|
+|MySQL Version 5.7 | [5.7.29](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.37](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html)|
+|MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.28](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html)|
> [!NOTE] > In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to specific major version say v5.7 or v8.0, you can do so by changing the port in your server connection string as explained in our documentation [here.](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version)
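For example, assuming the mysql client is installed and your client IP is allowed by the server's firewall rules, you could check the actual engine version with a query like this (the server and user names are placeholders):

```bash
# Ask the server instance for the MySQL engine version it is actually running
mysql -h mydemoserver.mysql.database.azure.com -u myadmin -p -e "SELECT VERSION();"
```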
mysql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-connect-tls-ssl.md
Following are the different configurations of SSL and TLS settings you can have
| Scenario | Server parameter settings | Description |
||--||
|Disable SSL enforcement | require_secure_transport = OFF |If your legacy application doesn't support encrypted connections to MySQL server, you can disable enforcement of encrypted connections to your flexible server by setting require_secure_transport=OFF.|
-|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the tls version (v1.0 or v1.1) supported by your application|
+|Enforce SSL with TLS version < 1.2 | require_secure_transport = ON and tls_version = TLSV1 or TLSV1.1| If your legacy application supports encrypted connections but requires TLS version < 1.2, you can enable encrypted connections but configure your flexible server to allow connections with the TLS version (v1.0 or v1.1) supported by your application. Supported only with Azure Database for MySQL – Flexible Server version v5.7|
|Enforce SSL with TLS version = 1.2 (Default configuration)|require_secure_transport = ON and tls_version = TLSV1.2| This is the recommended and default configuration for flexible server.|
-|Enforce SSL with TLS version = 1.3(Supported with MySQL v8.0 and above)| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new applications development|
+|Enforce SSL with TLS version = 1.3| require_secure_transport = ON and tls_version = TLSV1.3| This is useful and recommended for new application development. Supported only with Azure Database for MySQL – Flexible Server version v8.0|
> [!Note]
-> Changes to SSL Cipher on flexible server is not supported. FIPS cipher suites is enforced by default when tls_version is set to TLS version 1.2 . For TLS versions other than version 1.2, SSL Cipher is set to default settings which comes with MySQL community installation.
+> * Changes to the SSL cipher on flexible server aren't supported. FIPS cipher suites are enforced by default when tls_version is set to TLS version 1.2. For TLS versions other than 1.2, the SSL cipher is set to the default settings that come with the MySQL community installation.
+> * Starting with the MySQL open-source community releases 8.0.26 and 5.7.35, the TLSv1 and TLSv1.1 protocols are deprecated. These protocols, released in 1996 and 2006 respectively to encrypt data in motion, are considered weak, outdated, and vulnerable to security threats. For more information, see [Removal of Support for the TLSv1 and TLSv1.1 Protocols](https://dev.mysql.com/doc/refman/8.0/en/encrypted-connection-protocols-ciphers.html#encrypted-connection-deprecated-protocols). Azure Database for MySQL – Flexible Server will also stop supporting TLS versions once the community stops support for the protocol, to align with modern security standards.
-In this article, you will learn how to:
+In this article, you'll learn how to:
* Configure your flexible server * With SSL disabled
- * With SSL enforced with TLS version < 1.2
+ * With SSL enforced with TLS version
* Connect to your flexible server using mysql command-line * With encrypted connections disabled * With encrypted connections enabled
In this article, you will learn how to:
## Disable SSL enforcement on your flexible server
-If your client application doesn't support encrypted connections, you will need to disable encrypted connections enforcement on your flexible server. To disable encrypted connections enforcement, you will need to set require_secure_transport server parameter to OFF as shown in the screenshot and save the server parameter configuration for it to take effect. require_secure_transport is a **dynamic server parameter** which takes effect immediately and doesn't require server restart to take effect.
+If your client application doesn't support encrypted connections, you'll need to disable encrypted connections enforcement on your flexible server. To disable encrypted connections enforcement, you'll need to set require_secure_transport server parameter to OFF as shown in the screenshot and save the server parameter configuration for it to take effect. require_secure_transport is a **dynamic server parameter** which takes effect immediately and doesn't require server restart to take effect.
> :::image type="content" source="./media/how-to-connect-tls-ssl/disable-ssl.png" alt-text="Screenshot showing how to disable SSL with Azure Database for MySQL flexible server.":::
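If you prefer the Azure CLI over the portal, a sketch like the following can set the same parameter (the resource group and server names are placeholders):

```bash
# require_secure_transport is dynamic, so no server restart is needed
az mysql flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name require_secure_transport \
  --value OFF
```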
The following example shows how to connect to your server using the mysql comman
mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=DISABLED ```
-It is important to note that setting require_secure_transport to OFF doesn't mean encrypted connections will not supported on server side. If you set require_secure_transport to OFF on flexible server but if the client connects with encrypted connection, it will still be accepted. The following connection using mysql client to a flexible server configured with require_secure_transport=OFF will also work as shown below.
+It's important to note that setting require_secure_transport to OFF doesn't mean encrypted connections won't be supported on the server side. If you set require_secure_transport to OFF on your flexible server but the client connects with an encrypted connection, the connection is still accepted. The following connection using the mysql client to a flexible server configured with require_secure_transport=OFF will also work, as shown below.
```bash mysql.exe -h mydemoserver.mysql.database.azure.com -u myadmin -p --ssl-mode=REQUIRED
mysql> show global variables like '%require_secure_transport%';
In summary, require_secure_transport=OFF setting relaxes the enforcement of encrypted connections on flexible server and allows unencrypted connections to the server from client in addition to the encrypted connections.
-## Enforce SSL with TLS version < 1.2
+## Enforce SSL with TLS version
-If your application supports connections to MySQL server with SSL, but supports TLS version < 1.2 you will require to set the TLS versions server parameter on your flexible server. To set TLS versions which you want your flexible server to support, you will need to set tls_version server parameter to TLSV1, TLSV1.1, or TLSV1 and TLSV1.1 as shown in the screenshot and save the server parameter configuration for it to take effect. tls_version is a **static server parameter** which will require a server restart for the parameter to take effect.
+To set TLS versions on your flexible server, you'll need to set the *tls_version* server parameter. The default setting for the TLS protocol is TLSv1.2. If your application supports connections to MySQL server with SSL but requires a protocol other than TLSv1.2, you'll need to set the TLS versions in the [server parameter](./how-to-configure-server-parameters-portal.md). *tls_version* is a **static server parameter**, so a server restart is required for the parameter to take effect. The following are the supported protocols for the available versions of Azure Database for MySQL – Flexible Server:
-> :::image type="content" source="./media/how-to-connect-tls-ssl/tls-version.png" alt-text="Screenshot showing how to set tls version for a Azure Database for MySQL flexible server.":::
+| Flexible Server version | Supported Values of tls_version | Default Setting |
+||--||
+|MySQL 5.7 |TLSv1, TLSv1.1, TLSv1.2 | TLSv1.2|
+|MySQL 8.0 | TLSv1.2, TLSv1.3 | TLSv1.2|
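As an example, the following Azure CLI sketch sets the parameter and restarts the server so the static setting takes effect (the resource group and server names are placeholders):

```bash
# Set the allowed TLS versions; tls_version is static, so a restart is required
az mysql flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name tls_version \
  --value TLSv1.2

# Restart the server for the change to take effect
az mysql flexible-server restart --resource-group myresourcegroup --name mydemoserver
```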
## Connect using mysql command-line client with TLS/SSL ### Download the public SSL certificate
-To use encrypted connections with your client applications,you will need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) which is also available in Azure portal Networking blade as shown in the screenshot below.
+To use encrypted connections with your client applications, you'll need to download the [public SSL certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), which is also available in the Azure portal Networking blade as shown in the screenshot below.
> :::image type="content" source="./media/how-to-connect-tls-ssl/download-ssl.png" alt-text="Screenshot showing how to download public SSL certificate from Azure portal."::: Save the certificate file to your preferred location. For example, this tutorial uses `c:\ssl` or `\var\www\html\bin` on your local environment or the client environment where your application is hosted. This will allow applications to connect securely to the database over SSL.
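On a Linux client, for instance, you might fetch the certificate into that folder with a command along these lines (the target path is just an illustration):

```bash
# Download the public root CA certificate used for TLS connections
wget https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem -O /var/www/html/bin/DigiCertGlobalRootCA.crt.pem
```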
-If you created your flexible server with *Private access (VNet Integration)*, you will need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
+If you created your flexible server with *Private access (VNet Integration)*, you'll need to connect to your server from a resource within the same VNet as your server. You can create a virtual machine and add it to the VNet created with your flexible server.
If you created your flexible server with *Public access (allowed IP addresses)*, you can add your local IP address to the list of firewall rules on your server.
mysql -h mydemoserver.mysql.database.azure.com -u mydemouser -p --ssl-mode=REQUI
-If you try to connect to your server with unencrypted connections, you will see error stating connections using insecure transport are prohibited similar to one below:
+If you try to connect to your server with unencrypted connections, you'll see an error stating that connections using insecure transport are prohibited, similar to the one below:
```output ERROR 3159 (HY000): Connections using insecure transport are prohibited while --require_secure_transport=ON.
ERROR 3159 (HY000): Connections using insecure transport are prohibited while --
## Verify the TLS/SSL connection
-Execute the mysql **status** command to verify that you have connected to your MySQL server using TLS/SSL:
+Execute the mysql **status** command to verify that you've connected to your MySQL server using TLS/SSL:
```dos mysql> status
mysql> status
Confirm the connection is encrypted by reviewing the output, which should show: **SSL: Cipher in use is**. This cipher suite shows an example and based on the client, you can see a different cipher suite.
+**How to identify the TLS protocols configured on your server?**
+
+You can run the command `SHOW GLOBAL VARIABLES LIKE 'tls_version';` and check the value to see which protocols are configured.
+
+```sql
+mysql> SHOW GLOBAL VARIABLES LIKE 'tls_version';
+```
+**How to find which TLS protocol is being used by my clients to connect to the server?**
+
+You can run the following command and look at the tls_version value for the session to identify which TLS version is used to connect:
+```sql
+SELECT sbt.variable_value AS tls_version, t2.variable_value AS cipher,
+processlist_user AS user, processlist_host AS host
+FROM performance_schema.status_by_thread AS sbt
+JOIN performance_schema.threads AS t ON t.thread_id = sbt.thread_id
+JOIN performance_schema.status_by_thread AS t2 ON t2.thread_id = t.thread_id
+WHERE sbt.variable_name = 'Ssl_version' and t2.variable_name = 'Ssl_cipher' ORDER BY tls_version;
+```
+ ## Connect to your flexible server with encrypted connections using various application frameworks Connection strings that are pre-defined in the "Connection Strings" page available for your server in the Azure portal include the required parameters for common languages to connect to your database server using TLS/SSL. The TLS/SSL parameter varies based on the connector. For example, "useSSL=true", "sslmode=required", or "ssl_verify_cert=true" and other variations.
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
Last updated 10/12/2021
This article summarizes new releases and features in Azure Database for MySQL - Flexible Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## April 2022
+
+- **Minor version upgrade for Azure Database for MySQL - Flexible Server to 8.0.28**
+  Azure Database for MySQL - Flexible Server 8.0 is now running on minor version 8.0.28*. To learn more about the changes in this minor version, visit [Changes in MySQL 8.0.28 (2022-01-18, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-28.html).
+
+- **Minor version upgrade for Azure Database for MySQL - Flexible Server to 5.7.37**
+  Azure Database for MySQL - Flexible Server 5.7 is now running on minor version 5.7.37*. To learn more about the changes in this minor version, visit [Changes in MySQL 5.7.37 (2022-01-18, General Availability)](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-37.html).
+
+\* Some regions are still running older minor versions of Azure Database for MySQL; these will be patched by the end of April 2022.
+
+- **Deprecation of TLSv1 or TLSv1.1 protocols with Azure Database for MySQL - Flexible Server (8.0.28)**
+
+  Starting with version 8.0.28, the MySQL community edition supports only the TLSv1.2 and TLSv1.3 protocols. Azure Database for MySQL – Flexible Server will also stop supporting the TLSv1 and TLSv1.1 protocols, to align with modern security standards. You'll no longer be able to configure TLSv1 or TLSv1.1 from the server parameter blade, either for newly created resources or for resources created previously. The default will be TLSv1.2. Resources created before the upgrade will still support communication through TLSv1 or TLSv1.1 through 1 May 2022.
+ ## March 2022 This release of Azure Database for MySQL - Flexible Server includes the following updates.
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
While the logs that Suricata produces contain valuable information about what's
#### Install Elasticsearch
-1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have Java installed, refer to documentation on the [Azure-suppored JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
+1. The Elastic Stack from version 5.0 and above requires Java 8. Run the command `java -version` to check your version. If you do not have Java installed, refer to documentation on the [Azure-supported JDKs](/azure/developer/java/fundamentals/java-support-on-azure).
1. Download the correct binary package for your system:
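   For example, on an Ubuntu VM the Debian package could be downloaded and installed along these lines; the version number is only illustrative, so substitute the 5.x release you want:

   ```bash
   # Download and install an Elasticsearch 5.x Debian package (version shown is an example)
   wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.0.deb
   sudo dpkg -i elasticsearch-5.2.0.deb

   # Start the Elasticsearch service
   sudo systemctl start elasticsearch.service
   ```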
object-anchors Get Started Model Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/quickstarts/get-started-model-conversion.md
You'll learn how to:
To complete this quickstart, make sure you have:
-* A Windows machine with <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a>.
+* A Windows machine with <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2022</a>.
* <a href="https://git-scm.com" target="_blank">Git for Windows</a>.
-* The <a href="https://dotnet.microsoft.com/download/dotnet-core/3.1">.NET Core 3.1 SDK</a>.
+* The <a href="https://dotnet.microsoft.com/download/dotnet/6.0">.NET 6.0 SDK</a>.
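As a quick check that the command-line prerequisites are installed, you can run the following from a terminal or command prompt:

```bash
# Verify Git and the .NET SDK are available on the PATH
git --version
dotnet --version   # should report a 6.0.x SDK
```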
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
# Deploy a Java application with Red Hat JBoss Enterprise Application Platform on an Azure Red Hat OpenShift 4 cluster
-This guide demonstrates how to deploy a Microsoft SQL Server database driven Jakarta EE application, running on Red Hat JBoss Enterprise Application Platform (JBoss EAP) to an Azure Red Hat OpenShift (ARO) 4 cluster by using [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts).
+This article shows you how to deploy a Red Hat JBoss Enterprise Application Platform (JBoss EAP) app to an Azure Red Hat OpenShift (ARO) 4 cluster. The application is a Jakarta EE application that uses a Microsoft SQL Server database. The app is deployed by using [JBoss EAP Helm Charts](https://jbossas.github.io/eap-charts).
-The guide takes a traditional Jakarta EE application and walks you through the process of migrating it to a container orchestrator such as Azure Red Hat OpenShift. First, it describes how you can package your application as a [Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/the-bootable-jar_default) to run it locally, connecting the application to a docker Microsoft SQL Server Container. Finally, it shows you how you can deploy the Microsoft SQL Server on OpenShift and how to deploy three replicas of the JBoss EAP application by using Helm Charts.
+The guide takes a traditional Jakarta EE application and walks you through the process of migrating it to a container orchestrator such as Azure Red Hat OpenShift. First, it describes how you can package your application as a [Bootable JAR](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/the-bootable-jar_default) to run it locally. Finally, it shows you how you can deploy three replicas of the JBoss EAP application on OpenShift by using Helm Charts.
The application is a stateful application that stores information in an HTTP Session. It makes use of the JBoss EAP clustering capabilities and uses the following Jakarta EE 8 and MicroProfile 4.0 technologies:
The application is a stateful application that stores information in an HTTP Ses
* MicroProfile Health > [!IMPORTANT]
-> This article uses a Microsoft SQL Server docker image running on a Linux container on Red Hat OpenShift. Before choosing to run a SQL Server container for production use cases, please review the [support policy for SQL Server Containers](https://support.microsoft.com/help/4047326/support-policy-for-microsoft-sql-server) to ensure that you are running on a supported configuration.
+> This article assumes you have a Microsoft SQL Server instance that is accessible from your ARO cluster. Please review the [support policy for SQL Server Containers](https://support.microsoft.com/help/4047326/support-policy-for-microsoft-sql-server) to ensure that you are running on a supported configuration.
> [!IMPORTANT] > This article deploys an application by using JBoss EAP Helm Charts. At the time of writing, this feature is still offered as a [Technology Preview](https://access.redhat.com/articles/6290611). Before choosing to deploy applications with JBoss EAP Helm Charts on production environments, ensure that this feature is a supported feature for your JBoss EAP/XP product version.
The application is a stateful application that stores information in an HTTP Ses
1. Prepare a local machine with a Unix-like operating system that is supported by the various products installed. 1. Install a Java SE implementation (for example, [Oracle JDK 11](https://www.oracle.com/java/technologies/downloads/#java11)). 1. Install [Maven](https://maven.apache.org/download.cgi) 3.6.3 or higher.
-1. Install [Docker](https://docs.docker.com/get-docker/) for your OS.
1. Install [Azure CLI](/cli/azure/install-azure-cli) 2.29.2 or later. 1. Clone the code for this demo application (todo-list) to your local system. The demo application is at [GitHub](https://github.com/Azure-Samples/jboss-on-aro-jakartaee). 1. Follow the instructions in [Create an Azure Red Hat OpenShift 4 cluster](./tutorial-create-cluster.md).
The application is a stateful application that stores information in an HTTP Ses
## Prepare the application
-At this stage, you have cloned the `Todo-list` demo application and your local repository is on the `main` branch. The demo application is a simple Jakarta EE 8 application that creates, reads, updates, and deletes records on a Microsoft SQL Server. This application can be deployed as it is on a JBoss EAP server installed in your local machine. You just need to configure the server with the required database driver and data source. You also need a database server available in your local environment.
+At this stage, you have cloned the `Todo-list` demo application and your local repository is on the `main` branch. The demo application is a simple Jakarta EE 8 application that creates, reads, updates, and deletes records on a Microsoft SQL Server. This application can be deployed as it is on a JBoss EAP server installed in your local machine. You just need to configure the server with the required database driver and data source. You also need a database server accessible from your local environment.
However, when you are targeting OpenShift, you might want to trim the capabilities of your JBoss EAP server, for example, to reduce the security exposure of the provisioned server and to reduce the overall footprint. You might also want to include some MicroProfile specs to make your application more suitable for running in an OpenShift environment. When using JBoss EAP, one way to accomplish this is by packaging your application and your server in a single deployment unit known as a Bootable JAR. Let's do that by adding the required changes to our demo application.
$
Let's do a quick review about what we have changed: -- We have added the `wildfly-jar-maven` plugin to provision the server and the application in a single executable JAR file. The OpenShift deployment unit will be now our server with our application.-- On the maven plugin, we have specified a set of Galleon layers. This configuration allows us to trim the server capabilities to only what we need. For complete documentation on Gallean, see [the WildFly documentation](https://docs.wildfly.org/galleon/).-- Our application uses Jakarta Faces with Ajax requests, which means there will be information stored in the HTTP Session. We don't want to lose such information if a pod is removed. We could save this information on the client and send it back on each request. However, there are cases where you may decide not to distribute certain information to the clients. For this demo, we have chosen to replicate the session across all pod replicas. To do it, we have added `<distributable />` to the `web.xml` that together with the server clustering capabilities will make the HTTP Session distributable across all pods.-- We have added two MicroProfile Health Checks that allow identifying when the application is live and ready to receive requests.
+* We have added the `wildfly-jar-maven` plugin to provision the server and the application in a single executable JAR file. The OpenShift deployment unit is our server with our application.
+* On the maven plugin, we have specified a set of Galleon layers. This configuration allows us to trim the server capabilities to only what we need. For complete documentation on Galleon, see [the WildFly documentation](https://docs.wildfly.org/galleon/).
+* Our application uses Jakarta Faces with Ajax requests, which means there will be information stored in the HTTP Session. We don't want to lose such information if a pod is removed. We could save this information on the client and send it back on each request. However, there are cases where you may decide not to distribute certain information to the clients. For this demo, we have chosen to replicate the session across all pod replicas. To do this, we have added `<distributable />` to the `web.xml`. That, together with the server clustering capabilities, makes the HTTP Session distributable across all pods.
+* We have added two MicroProfile Health Checks that identify when the application is live and ready to receive requests.
## Run the application locally
-Before deploying the application on OpenShift, we are going to verify it locally geg. For the database, we are going to use a containerized Microsoft SQL Server running on Docker.
+Before deploying the application on OpenShift, we are going to run it locally to verify how it works. The following steps assume you have a Microsoft SQL Server running and available from your local environment. This database must be created using the following information:
-### Run the Microsoft SQL database on Docker
+* Database name: `todos_db`
+* SA password: `Passw0rd!`
-Follow the next steps to get the database server running on Docker and configured for the demo application:
+To create the database, follow the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal), but use the following substitutions.
-1. Start a Docker container running the Microsoft SQL Server. For more information, see [Run SQL Server container images with Docker](/sql/linux/quickstart-install-connect-docker) quickstart.
+* For **Database name** use `todos_db`.
+* For **Password** use `Passw0rd!`.
- ```bash
- $ sudo docker run \
- -e 'ACCEPT_EULA=Y' \
- -e 'SA_PASSWORD=Passw0rd!' \
- -p 1433:1433 --name mssqlserver -h mssqlserver \
- -d mcr.microsoft.com/mssql/server:2019-latest
- ```
-
-1. Connect to the server and create the `todos_db` database.
-
- ```bash
- $ sudo docker exec -it mssqlserver "bash"
- mssql@mssqlserver:/$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw0rd!'
- 1> CREATE DATABASE todos_db
- 2> GO
- 1> exit
- mssql@mssqlserver:/$ exit
- ```
+On the **Additional settings** page, you don't have to choose the option to pre-populate the database with sample data, but there is no harm in doing so.
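If you prefer the Azure CLI over the portal, the following is a minimal sketch of equivalent commands. The server name, admin user, resource group, and location are placeholders you choose; only the database name (`todos_db`) and the password (`Passw0rd!`) come from the values above.

```bash
# Sketch: create a logical SQL server and the todos_db database with the Azure CLI.
# Replace the placeholders with your own values.
az sql server create \
  --name <your-sql-server-name> \
  --resource-group <your-resource-group> \
  --location <your-region> \
  --admin-user <your-admin-user> \
  --admin-password 'Passw0rd!'

az sql db create \
  --name todos_db \
  --server <your-sql-server-name> \
  --resource-group <your-resource-group>
```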
-### Run the demo application locally
+Once the database has been created with the above database name and password, obtain the value to use for `MSSQLSERVER_HOST` from the overview page of the database resource in the Azure portal. Hover the mouse over the value of the **Server name** field and select the copy icon that appears beside the value. Save this value for later use.
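As an alternative to copying the value from the portal, the following sketch reads the fully qualified server name with the Azure CLI; the server and resource group names are placeholders.

```bash
# Sketch: read the fully qualified server name to use later as MSSQLSERVER_HOST.
az sql server show \
  --name <your-sql-server-name> \
  --resource-group <your-resource-group> \
  --query fullyQualifiedDomainName -o tsv
```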
Follow the next steps to build and run the application locally.
Follow the next steps to build and run the application locally.
MSSQLSERVER_PASSWORD=Passw0rd! \ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \ MSSQLSERVER_DATABASE=todos_db \
- MSSQLSERVER_HOST=localhost \
+ MSSQLSERVER_HOST=<server name saved aside earlier> \
MSSQLSERVER_PORT=1433 \ mvn wildfly-jar:run ```
Follow the next steps to build and run the application locally.
MSSQLSERVER_PASSWORD=Passw0rd! \ MSSQLSERVER_JNDI=java:/comp/env/jdbc/mssqlds \ MSSQLSERVER_DATABASE=todos_db \
- MSSQLSERVER_HOST=localhost \
+ MSSQLSERVER_HOST=<server name saved aside earlier> \
MSSQLSERVER_PORT=1433 \ mvn wildfly-jar:run -Dwildfly.bootable.arguments="-Djboss.node.name=node2 -Djboss.socket.binding.port-offset=1000" ```
Follow the next steps to build and run the application locally.
``` 1. Press **Control-C** to stop the application.
-1. Execute the following command to stop the database server:
-
- ```bash
- docker stop mssqlserver
- ```
-
-1. If you don't plan to use the Docker database again, execute the following command to remove the database server from your Docker registry:
-
- ```bash
- docker rm mssqlserver
- ```
## Deploy to OpenShift
-Before deploying the demo application on OpenShift we will deploy the database server. The database server will be deployed by using a [DeploymentConfig OpenShift API resource](https://docs.openshift.com/container-platform/4.8/applications/deployments/what-deployments-are.html#deployments-and-deploymentconfigs_what-deployments-are). The database server deployment configuration is available as a YAML file in the application source code.
-
-To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. Since this information contains sensitive information, we will use [OpenShift Secret objects](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) to store it.
+To deploy the application, we are going to use the JBoss EAP Helm Charts already available in ARO. We also need to supply the desired configuration, for example, the database user, the database password, the driver version we want to use, and the connection information used by the data source. The following steps assume you have a Microsoft SQL database server running and exposed by an OpenShift service, and that you have stored the database user name, password, and database name in an [OpenShift Secret object](https://docs.openshift.com/container-platform/4.8/nodes/pods/nodes-pods-secrets.html#nodes-pods-secrets-about_nodes-pods-secrets) named `mssqlserver-secret`.
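For example, assuming the database values used earlier in this article and that the chart reads the keys `db-user`, `db-password`, and `db-name` (an assumption; adjust the keys to whatever your Helm Chart values file references), the secret could be created with a command similar to this sketch:

```bash
# Sketch: create the secret that supplies the database settings to the Helm Chart.
# The key names below are assumptions; the database user is a placeholder.
oc create secret generic mssqlserver-secret \
  --from-literal db-user=<your-database-user> \
  --from-literal db-password='Passw0rd!' \
  --from-literal db-name=todos_db
```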
> [!NOTE] > You can also use the [JBoss EAP Operator](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.3/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default) to deploy this example; however, notice that the JBoss EAP Operator will deploy the application as `StatefulSets`. Use the JBoss EAP Operator if your application requires one or more of the following.
$
Let's do a quick review about what we have changed: - We have added a new maven profile named `bootable-jar-openshift` that prepares the Bootable JAR with a specific configuration for running the server on the cloud, for example, it enables the JGroups subsystem to use TCP requests to discover other pods by using the KUBE_PING protocol.-- We have added a set of configuration files in the _jboss-on-aro-jakartaee/deployment_ directory. In this directory, you will find the configuration files to deploy the database server and the application.-
-### Deploy the database server on OpenShift
-
-The file to deploy the Microsoft SQL Server to OpenShift is _deployment/mssqlserver/mssqlserver.yaml_. This file is composed by three configuration objects:
-
-* A Service: To expose the SQL server port.
-* A DeploymentConfig: To deploy the SQL server image.
-* A PersistentVolumeClaim: To reclaim persistent disk space for the database. It uses the storage class named `managed-premium` which is available at your ARO cluster.
-
-This file expects the presence of an OpenShift Secret object named `mssqlserver-secret` to supply the database administrator password. In the next steps, we will use the OpenShift CLI to create this Secret, deploy the server, and create the `todos_db`:
-
-1. To create the Secret object with the information relative to the database, execute the following command on the `eap-demo` project created before at the pre-requisite steps section:
-
- ```bash
- $ oc create secret generic mssqlserver-secret \
- --from-literal db-password=Passw0rd! \
- --from-literal db-user=sa \
- --from-literal db-name=todos_db
- secret/mssqlserver-secret created
- ```
-
-1. Deploy the database server by executing the following:
-
- ```bash
- $ oc apply -f ./deployment/msqlserver/mssqlserver.yaml
- service/mssqlserver created
- deploymentconfig.apps.openshift.io/mssqlserver created
- persistentvolumeclaim/mssqlserver-pvc created
- ```
-
-1. Monitor the status of the pods and wait until the database server is running:
-
- ```bash
- $ oc get pods -w
- NAME READY STATUS RESTARTS AGE
- mssqlserver-1-deploy 0/1 Completed 0 34s
- mssqlserver-1-gw7qw 1/1 Running 0 31s
- ```
-
-1. Connect to the database pod and create the database `todos_db`:
-
- ```bash
- $ oc rsh mssqlserver-1-gw7qw
- sh-4.4$ /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P 'Passw0rd!'
- 1> CREATE DATABASE todos_db
- 2> GO
- 1> exit
- sh-4.4$ exit
- exit
- ```
+- We have added a set of configuration files in the _jboss-on-aro-jakartaee/deployment_ directory. In this directory, you will find the configuration files to deploy the application.
### Deploy the application on OpenShift
-Now that we have the database server ready, we can deploy the demo application via JBoss EAP Helm Charts. The Helm Chart application configuration file is available at _deployment/application/todo-list-helm-chart.yaml_. You could deploy this file via the command line; however, to do so you would need to have Helm Charts installed on your local machine. Instead of using the command line, the next steps explain how you can deploy this Helm Chart by using the OpenShift web console.
+We can deploy the demo application via JBoss EAP Helm Charts. The Helm Chart application configuration file is available at _deployment/application/todo-list-helm-chart.yaml_. You could deploy this file via the command line; however, to do so you would need to have Helm installed on your local machine. Instead of using the command line, the next steps explain how you can deploy this Helm Chart by using the OpenShift web console.
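If you do have Helm installed locally, a command-line deployment would look roughly like the following sketch once the Secret objects described below exist. The release name and chart reference are placeholders; only the values file path comes from the repository.

```bash
# Sketch: install the chart from the command line instead of the web console.
# <repo>/<eap-xp-chart> stands for the JBoss EAP XP chart you would otherwise pick in the console.
helm install todo-list <repo>/<eap-xp-chart> \
  -f deployment/application/todo-list-helm-chart.yaml
```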
-Before deploying the application, let's create the expected Secret object that will hold specific application configuration. The Helm Chart will get the database user, password and name from the `mssqlserver-secret` Secret created before, and the driver version, the datasource JNDI name and the cluster password from the following Secret:
+Before deploying the application, let's create the expected Secret object that will hold specific application configuration. The Helm Chart will get the database user, password and name from a secret named `mssqlserver-secret`, and the driver version, the datasource JNDI name and the cluster password from the following Secret:
1. Execute the following to create the OpenShift secret object that will hold the application configuration:
Before deploying the application, let's create the expected Secret object that w
> You decide the cluster password you want to use; any pod that wants to join your cluster needs to supply this password. Using a password prevents pods that are not under your control from joining your JBoss EAP cluster. > [!NOTE]
- > You may have noticed from the above Secret that we are not supplying the database Hostname and Port. That's not necessary. If you take a closer look at the Helm Chart application file, you will see that the database Hostname and Port are passed by using the following notations \$(MSSQLSERVER_SERVICE_HOST) and \$(MSSQLSERVER_SERVICE_PORT). This is a standard OpenShift notation that will ensure the application variables (MSSQLSERVER_HOST, MSSQLSERVER_PORT) get assigned to the values of the pod environment variables (MSSQLSERVER_SERVICE_HOST, MSSQLSERVER_SERVICE_PORT) that are available at runtime. These pod environment variables are passed by OpenShift when the pod is launched. These variables are available to any pod because we have created a Service to expose the SQL server in the previous steps.
+ > You may have noticed from the above Secret that we are not supplying the database Hostname and Port. That's not necessary. If you take a closer look at the Helm Chart application file, you will see that the database Hostname and Port are passed by using the following notations \$(MSSQLSERVER_SERVICE_HOST) and \$(MSSQLSERVER_SERVICE_PORT). This is a standard OpenShift notation that will ensure the application variables (MSSQLSERVER_HOST, MSSQLSERVER_PORT) get assigned to the values of the pod environment variables (MSSQLSERVER_SERVICE_HOST, MSSQLSERVER_SERVICE_PORT) that are available at runtime. These pod environment variables are passed by OpenShift when the pod is launched. These variables are available to any pod when you create an OpenShift service exposing the database server.
2. Open the OpenShift console and navigate to the developer view (in the **</> Developer** perspective in the left hand menu)
Before deploying the application, let's create the expected Secret object that w
:::image type="content" source="media/howto-deploy-java-enterprise-application-platform-app/console-eap-helm-charts.png" alt-text="Screenshot of OpenShift console EAP Helm Charts.":::
-5. Since our application uses MicroProfile capabilities, we are going to select select for this demo the Helm Chart for EAP XP (at the time of this writing, the exact version of the Helm Chart is **EAP Xp3 v1.0.0**). The `Xp3` stands for Expansion Pack version 3.0.0. With the JBoss Enterprise Application Platform expansion pack, developers can use Eclipse MicroProfile application programming interfaces (APIs) to build and deploy microservices-based applications.
+5. Since our application uses MicroProfile capabilities, we are going to select the Helm Chart for EAP XP for this demo (at the time of this writing, the exact version of the Helm Chart is **EAP Xp3 v1.0.0**). The `Xp3` stands for Expansion Pack version 3.0.0. With the JBoss Enterprise Application Platform expansion pack, developers can use Eclipse MicroProfile application programming interfaces (APIs) to build and deploy microservices-based applications.
6. Open the **EAP Xp** Helm Chart, and then select **Install Helm Chart**.
$ oc delete secrets/todo-list-secret
secret "todo-list-secret" deleted ```
-### Delete the database
-
-If you want to delete the database and the related objects, execute the following command:
-
-```bash
-$ oc delete all -l app=mssql2019
-replicationcontroller "mssqlserver-1" deleted
-service "mssqlserver" deleted
-deploymentconfig.apps.openshift.io "mssqlserver" deleted
-
-$ oc delete secrets/mssqlserver-secret
-secret "mssqlserver-secret" deleted
-```
- ### Delete the OpenShift project You can also delete all the configuration created for this demo by deleting the `eap-demo` project. To do so, execute the following:
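A minimal sketch of that command, assuming the project is named `eap-demo` as described above:

```bash
# Sketch: delete the demo project and everything created inside it.
oc delete project eap-demo
```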
openshift Howto Secure Openshift With Front Door Feb 22 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door-feb-22.md
- Title: Secure access to Azure Red Hat OpenShift with Front Door
-description: This article explains how to use Front Door to secure access to Azure Red Hat OpenShift applications.
---- Previously updated : 12/07/2021
-keywords: azure, openshift, red hat, front, door
-#Customer intent: I need to understand how to secure access to Azure Red Hat OpenShift applications with Azure Front Door.
--
-# Secure access to Azure Red Hat OpenShift with Front Door
-
-This article explains how to use Azure Front Door Premium to secure access to Azure Red Hat OpenShift.
-
-## Prerequisites
-
-The following prerequisites are required:
--- You have an existing Azure Red Hat OpenShift cluster. For information on creating an Azure Red Hat OpenShift Cluster, learn how to [create-an-aks-cluster](../aks/kubernetes-walkthrough-portal.md#create-an-aks-cluster).--- The cluster is configured with private ingress visibility.--- A custom domain name is used, for example:-
- `example.com`
-
-> [!NOTE]
-> The initial state doesn't have DNS configured.
-> No applications are exposed externally from the Azure Red Hat OpenShift cluster.
-
-## Create an Azure Private Link service
-
-This section explains how to create an Azure Private Link service. An Azure Private Link service is a reference to your own service that is powered by Azure Private Link.
-
-Your service, which is running behind the Azure Standard Load Balancer, can be enabled for Private Link access so that consumers to your service can access it privately from their own VNets. Your customers can create a private endpoint inside their VNet and map it to this service.
-
-For more information about the Azure Private Link service and how it's used, see [Azure Private Link service](../private-link/private-link-service-overview.md).
-
-Create an **AzurePrivateLinkSubnet**. This subnet includes a netmask that permits visibility of the subnet to the control plane and worker nodes of the Azure cluster. Don't delegate this new subnet to any services or configure any service endpoints.
-
-For example, if the virtual network is 10.10.0.0/16 and:
-
- - Existing Azure Red Hat OpenShift control plane subnet = 10.10.0.0/24
- - Existing Azure Red Hat OpenShift worker subnet = 10.10.1.0/24
- - New AzurePrivateLinkSubnet = 10.10.2.0/24
-
- Create a new Private Link at [Azure Private Link service](https://portal.azure.com/#create/Microsoft.PrivateLinkservice), as explained in the following steps:
-
-1. On the **Basics** tab, configure the following options:
- - **Project Details**
- * Select your Azure subscription.
- * Select the resource group in which your Azure Red Hat OpenShift cluster was deployed.
- - **Instance Details**
- - Enter a **Name** for your Azure Private Link service, as in the following example: *example-com-private-link*.
- - Select a **Region** for your Private Link.
-
-2. On the **Outbound Settings** tab:
- - Set the **Load Balancer** to the **-internal** load balancer of the cluster for which you're enabling external access. The choices are populated in the drop-down list.
- - Set the **Load Balancer frontend IP address** to the IP address of the Azure Red Hat OpenShift ingress controller, which typically ends in **.254**. If you're unsure, use the following command.
-
- ```azurecli
- az aro show -n <cluster-name> -g <resource-group> -o tsv --query ingressProfiles[].ip
- ```
-
- - The **Source NAT subnet** should be the **AzurePrivateLinkSubnet**, which you created.
- - No items should be changed in **Outbound Settings**.
-
-3. On the **Access Security** tab, no changes are required.
-
- - At the **Who can request access to your service?** prompt, select **Anyone with your alias**.
- - Don't add any subscriptions for auto-approval.
-
-4. On the **Tags** tab, select **Review + create**.
-
-5. Select **Create** to create the Azure Private Link service, and then wait for the process to complete.
-
-6. When your deployment is complete, select **Go to resource group** under **Next steps**.
-
-In the Azure portal, enter the Azure Private Link service that was deployed. Retain the **Alias** that was generated for the Azure Private Link service. It will be used later.
-
-## Register domain in Azure DNS
-
-This section explains how to register a domain in Azure DNS.
-
-1. Create a global [Azure DNS](https://portal.azure.com/#create/Microsoft.DnsZone) zone for example.com.
-
-2. Create a global [Azure DNS](https://portal.azure.com/#create/Microsoft.DnsZone) zone for apps.example.com.
-
-3. Note the four nameservers that are present in Azure DNS for apps.example.com.
-
-4. Create a new **NS** record set in the example.com zone that points to **app** and specify the four nameservers that were present when the **apps** zone was created.
-
-## Create a New Azure Front Door Premium service
-
-To create a new Azure Front Door Premium service:
-
-1. On [Microsoft Azure Compare offerings](https://ms.portal.azure.com/#create/Microsoft.AFDX) select **Azure Front Door**, and then select **Continue to create a Front Door**.
-
-2. On the **Create a front door profile** page in the **Subscription** > **Resource group**, select the resource group in which your Azure Red Hat OpenShift cluster was deployed to house your Azure Front Door Premium resource.
-
-3. Name your Azure Front Door Premium service appropriately. For example, in the **Name** field, enter the following name:
-
- `example-com-frontdoor`
-
-4. Select the **Premium** tier. The Premium tier is the only choice that supports Azure Private Link.
-
-5. For **Endpoint name**, choose an endpoint name that is appropriate for Azure Front Door.
-
- For each application deployed, a CNAME will be created in the Azure DNS to point to this hostname. Therefore, it's important to choose a name that is agnostic to applications. For security, the name shouldn't suggest the applications or architecture that youΓÇÖve deployed, such as **example01**.
-
- The name you choose will be prepended to the **.z01.azurefd.net** domain.
-
-6. For **Origin type**, select **Custom**.
-
-7. For **Origin Host Name**, enter the following placeholder:
-
- `changeme.com`
-
- This placeholder will be deleted later.
-
- At this stage, don't enable the Azure Private Link service, caching, or the Web Application Firewall (WAF) policy.
-
-9. Select **Review + create** to create the Azure Front Door Premium resource, and then wait for the process to complete.
-
-## Initial configuration of Azure Front Door Premium
-
-To configure Azure Front Door Premium:
-
-1. In the Azure portal, enter the Azure Front Door Premium service that was deployed.
-
-2. In the **Endpoint Manager** window, modify the endpoint by selecting **Edit endpoint**.
-
-3. Delete the default route, which was created as **default-route**.
-
-4. Close the **Endpoint Manager** window.
-
-5. In the **Origin Groups** window, delete the default origin group that was named **default-origin-group**.
-
-## Exposing an application route in Azure Red Hat OpenShift
-
-Azure Red Hat OpenShift must be configured to serve the application with the same hostname that Azure Front Door will be exposing externally (\*.apps.example.com). In our example, we'll expose the Reservations application with the following hostname:
-
-`reservations.apps.example.com`
-
-Also, create a secure route in Azure Red Hat OpenShift that exposes the hostname.
-
-## Configure Azure DNS
-
-To configure the Azure DNS:
-
-1. Enter the public **apps** DNS zone previously created.
-
-2. Create a new CNAME record set named **reservation**. This CNAME record set is an alias for our example Azure Front Door endpoint:
-
- `example01.z01.azurefd.net`
-
-## Configure Azure Front Door Premium
-
-The following steps explain how to configure Azure Front Door Premium.
-
-1. In the Azure portal, enter the Azure Front Door Premium service you created previously:
-
- `example-com-frontdoor`
-
- **In the Domains window**:
-
- 1. Because all DNS servers are hosted on Azure, leave **DNS Management** set to **Azure managed DNS**.
-
-3. Select the example domain:
-
- `apps.example.com`
-
-4. Select the CNAME in our example:
-
- `reservations.apps.example.com`
-
-5. Use the default values for **HTTPS** and **Minimum TLS version**.
-
-6. Select **Add**.
-
-7. When the **Validation stat** changes to **Pending**, select **Pending**.
-
-8. To authenticate ownership of the DNS zone, for **DNS record status**, select **Add**.
-
-9. Select **Close**.
-
-10. Continue to select **Refresh** until the **Validation state** of the domain changes to **Approved** and the **Endpoint association** changes to **Unassociated**.
-
-**In the Origin Groups window**:
-
-1. Select **Add**.
-
-2. Give your **Origin Group** an appropriate name, such as **Reservations-App**.
-
-3. Select **Add an origin**.
-
-4. Enter the name of the origin, such as **ARO-Cluster-1**.
-
-5. Choose an **Origin type** of **Custom**.
-
-6. Enter the fully qualified domain name (FQDN) hostname that was exposed in your Azure Red Hat OpenShift cluster, such as:
-
- `reservations.apps.example.com`
-
-7. Enable the **Private Link** service.
-
-8. Enter the **Alias** that was obtained from the Azure Private Link service.
-
-9. Select **Add** to return to the origin group creation window.
-
-10. Select **Add** to add the origin group and return to the Azure portal.
-
-## Grant approval in Azure Private Link
-
-To grant approval to the **example-com-private-link**, which is the **Azure Private Link** service you created previously, complete the following steps.
-
-1. On the **Private endpoint connections** tab, select the checkbox that now exists from the resource described as **do from AFD**.
-
-2. Select **Approve**, and then select **Yes** to verify the approval.
-
-## Complete Azure Front Door Premium configuration
-
-The following steps explain how to complete the configuration of Azure Front Door Premium.
-
-1. In the Azure portal, enter the Azure Front Door Premium service you previously created:
-
- `example-com-frontdoor`
-
-2. In the **Endpoint Manager** window, select **Edit endpoint** to modify the endpoint.
-
-3. Select **+Add** under **Routes**.
-
-4. Give your route an appropriate name, such as **Reservations-App-Route-Config**.
-
-5. Under **Domains**, then under **Available validated domains**, select the fully qualified domain name, for example:
-
- `reservations.apps.example.com`
--
-6. To redirect HTTP traffic to use HTTPS, leave the **Redirect** checkbox selected.
-
-7. Under **Origin group**, select **Reservations-App**, the origin group you previously created.
-
-8. You can enable caching, if appropriate.
-
-9. Select **Add** to create the route.
-After the route is configured, the **Endpoint manager** populates the **Domains** and **Origin groups** panes with the other elements created for this application.
-
-Because Azure Front Door is a global service, the application can take up to 30 minutes to deploy. During this time, you may choose to create a WAF for your application. When your application goes live, it can be accessed using the URL used in this example:
-
-`https://reservations.apps.example.com`
-
-## Next steps
-
-Create a Azure Web Application Firewall on Azure Front Door using the Azure portal:
-> [!div class="nextstepaction"]
-> [Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal](../web-application-firewall/afds/waf-front-door-create-portal.md)
openshift Howto Secure Openshift With Front Door https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-secure-openshift-with-front-door.md
keywords: azure, openshift, red hat, front, door
#Customer intent: I need to understand how to secure access to Azure Red Hat OpenShift applications with Azure Front Door.
-# Secure access to Azure Red Hat OpenShift with Azure Front Door
+# Secure access to Azure Red Hat OpenShift with Azure Front Door
This article explains how to use Azure Front Door Premium to secure access to Azure Red Hat OpenShift.
This section explains how to register a domain in Azure DNS.
To create a new Azure Front Door Premium service:
-1. On [Microsoft Azure (PREVIEW) Compare offerings](https://ms.portal.azure.com/#create/Microsoft.AFDX) select **Azure Front Door**, and then select **Continue to create a Front Door**.
+1. On [Microsoft Azure Compare offerings](https://ms.portal.azure.com/#create/Microsoft.AFDX) select **Azure Front Door**, and then select **Continue to create a Front Door**.
-2. On the **Create a front door profile** page in the **Subscription** > **Resource group**, select the resource group in which your Azure Red Hat OpenShift cluster was deployed to house your Azure Front Door Premium (PREVIEW) resource.
+2. On the **Create a front door profile** page in the **Subscription** > **Resource group**, select the resource group in which your Azure Red Hat OpenShift cluster was deployed to house your Azure Front Door Premium resource.
3. Name your Azure Front Door Premium service appropriately. For example, in the **Name** field, enter the following name:
To create a new Azure Front Door Premium service:
At this stage, don't enable the Azure Private Link service, caching, or the Web Application Firewall (WAF) policy.
-9. Select **Review + create** to create the Azure Front Door Premium (PREVIEW) resource, and then wait for the process to complete.
+9. Select **Review + create** to create the Azure Front Door Premium resource, and then wait for the process to complete.
## Initial configuration of Azure Front Door Premium
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-supported-versions.md
Previously updated : 02/17/2021 Last updated : 03/10/2022 # Supported PostgreSQL major versions
Please see [Azure Database for PostgreSQL versioning policy](concepts-version-po
Azure Database for PostgreSQL currently supports the following major versions: ## PostgreSQL version 11
-The current minor release is 11.11. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-11.html) to learn more about improvements and fixes in this minor release.
+The current minor release is 11.12. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-12.html) to learn more about improvements and fixes in this minor release.
## PostgreSQL version 10
-The current minor release is 10.16. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-16.html) to learn more about improvements and fixes in this minor release.
+The current minor release is 10.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-17.html) to learn more about improvements and fixes in this minor release.
## PostgreSQL version 9.6 (retired)
-The current minor release is 9.6.21. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/9.6/static/release-9-6-21.html) to learn more about improvements and fixes in this minor release.
+Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.6 as of November 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
## PostgreSQL version 9.5 (retired)
-Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. Please see [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you are running this major version, please upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
+Aligning with Postgres community's [versioning policy](https://www.postgresql.org/support/versioning/), Azure Database for PostgreSQL has retired PostgreSQL version 9.5 as of February 11, 2021. See [Azure Database for PostgreSQL versioning policy](concepts-version-policy.md) for more details and restrictions. If you're running this major version, upgrade to a higher version, preferably to PostgreSQL 11 at your earliest convenience.
## Managing upgrades The PostgreSQL project regularly issues minor releases to fix reported bugs. Azure Database for PostgreSQL automatically patches servers with minor releases during the service's monthly deployments.
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
Previously updated : 10/20/2020 Last updated : 04/06/2022 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
You use the bastion host to connect securely to the VM for testing the private e
1. On the **Create virtual network** pane, select the **Basics** tab, and then enter the following values:
- | Setting | Value |
- | | |
- | **Project&nbsp;details** | |
- | Subscription | Select your Azure subscription. |
- | Resource group | Select **CreatePrivateEndpointQS-rg**. |
- | **Instance&nbsp;details** | |
- | Name | Enter **myVNet**. |
- | Region | Select **West Europe**.|
+ | Setting | Value |
+ |||
+ | **Project&nbsp;details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create New**. </br> Enter **CreatePrivateEndpointQS-rg**. </br> Select **OK**. |
+ | **Instance&nbsp;details** | |
+ | Name | Enter **myVNet**. |
+ | Region | Select **West Europe**. |
1. Select the **IP Addresses** tab. 1. On the **IP Addresses** pane, enter this value:
- | Setting | Value |
- | | |
+ | Setting | Value |
+ |--||
| IPv4 address space | Enter **10.1.0.0/16**. |
-1. Under **Subnet name**, select the **default** link.
+1. Under **Subnet name**, select the **Add subnet** link.
1. On the **Edit subnet** right pane, enter these values:
- | Setting | Value |
- | | |
- | Subnet name | Enter **mySubnet**. |
+ | Setting | Value |
+ |-||
+ | Subnet name | Enter **mySubnet**. |
| Subnet address range | Enter **10.1.0.0/24**. |
-1. Select **Save**.
+1. Select **Add**.
1. Select the **Security** tab. 1. For **BastionHost**, select **Enable**, and then enter these values:
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost**. |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
- | Public IP Address | Select **Create new** and then, for **Name**, enter **myBastionIP**, and then select **OK**. |
+ | Setting | Value |
+ |-|-|
+ | Bastion name | Enter **myBastionHost**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/24**. |
+ | Public IP Address | Select **Create new** and then, for **Name**, enter **myBastionIP**, and then select **OK**. |
1. Select the **Review + create** tab.
Next, create a VM that you can use to test the private endpoint.
1. On the **Create a virtual machine** pane, select the **Basics** tab, and then enter the following values:
- | Setting | Value |
- | | |
- | **Project&nbsp;details** | |
- | Subscription | Select your Azure subscription. |
- | Resource group | Select **CreatePrivateEndpointQS-rg**. |
- | **Instance&nbsp;details** | |
- | Virtual machine name | Enter **myVM**. |
- | Region | Select **West Europe**. |
- | Availability options | Select **No infrastructure redundancy required**. |
- | Image | Select **Windows Server 2019 Datacenter - Gen1**. |
- | Azure Spot instance | Clear the checkbox. |
- | Size | Select the VM size or use the default setting. |
- | **Administrator&nbsp;account** | |
- | Authentication type | Select **Password** |
- | Username | Enter a username. |
- | Password | Enter a password. |
- | Confirm password | Reenter the password. |
+ | Setting | Value |
+ |--||
+ | **Project&nbsp;details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **CreatePrivateEndpointQS-rg**. |
+ | **Instance&nbsp;details** | |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **West Europe**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
+ | Azure Spot instance | Clear the checkbox. |
+ | Size | Select the VM size or use the default setting. |
+ | **Administrator&nbsp;account** | |
+ | Authentication type | Select **Password** |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter the password. |
1. Select the **Networking** tab. 1. On the **Networking** pane, enter the following values:
- | Setting | Value |
- | | |
- | **Network&nbsp;interface** | |
- | Virtual network | Enter **myVNet**. |
- | Subnet | Enter **mySubnet**. |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Basic**. |
- | Public inbound ports | Select **None**. |
+ |Setting | Value |
+ |-||
+ | **Network&nbsp;interface** | |
+ | Virtual network | Enter **myVNet**. |
+ | Subnet | Enter **mySubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **None**. |
1. Select **Review + create**.
Next, you create a private endpoint for the web app that you created in the "Pre
1. On the **Create a private endpoint** pane, select the **Basics** tab, and then enter the following values:
- | Setting | Value |
- | - | -- |
- | **Project&nbsp;details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreatePrivateEndpointQS-rg**. You created this resource group in an earlier section.|
- | **Instance&nbsp;details** | |
- | Name | Enter **myPrivateEndpoint**. |
- | Region | Select **West Europe**. |
+ | Setting | Value |
+ ||--|
+ | **Project&nbsp;details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePrivateEndpointQS-rg**. You created this resource group in an earlier section. |
+ | **Instance&nbsp;details** | |
+ | Name | Enter **myPrivateEndpoint**. |
+ | Region | Select **West Europe**. |
1. Select the **Resource** tab. 1. On the **Resource** pane, enter the following values:
- | Setting | Value |
- | - | -- |
- | Connection method | Select **Connect to an Azure resource in my directory**. |
- | Subscription | Select your subscription. |
- | Resource type | Select **Microsoft.Web/sites**. |
- | Resource | Select **\<your-web-app-name>**. </br> Select the name of the web app that you created in the "Prerequisites" section. |
- | Target sub-resource | Select **sites**. |
-
-1. Select the **Configuration** tab.
-
-1. On the **Configuration** pane, enter the following values:
-
- | Setting | Value |
- | - | -- |
- | **Networking** | |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **mySubnet**. |
- | **Private&nbsp;DNS&nbsp;integration** | |
- | Integrate with private DNS zone | Keep the default of **Yes**. |
- | Subscription | Select your subscription. |
- | Private DNS zones | Keep the default of **(New) privatelink.azurewebsites.net**.
+ | Setting | Value |
+ |||
+ | Connection method | Select **Connect to an Azure resource in my directory**. |
+ | Subscription | Select your subscription. |
+ | Resource type | Select **Microsoft.Web/sites**. |
+ | Resource | Select **\<your-web-app-name>**. </br> Select the name of the web app that you created in the "Prerequisites" section. |
+ | Target sub-resource | Select **sites**. |
+
+1. Select **Next** to go to the **Virtual Network** tab.
+
+1. On the **Virtual Network** pane, enter the following values:
+
+ | Setting | Value |
+ ||--|
+ | **Networking** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet**. |
+ | **Private&nbsp;DNS&nbsp;integration** | |
+ | Integrate with private DNS zone | Keep the default of **Yes**. |
+ | Subscription | Select your subscription. |
+ | Resource Group | Select Resource Group **CreatePrivateEndpointQS-rg**. |
+ | Private DNS zones | Keep the default of **(New) privatelink.azurewebsites.net**. |
-1. Select **Review + create**.
+1. Select **Next** to go to the **Review + create** tab.
1. Select **Create**.
purview Concept Guidelines Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-guidelines-pricing.md
Previously updated : 10/03/2021 Last updated : 04/06/2022
Direct costs impacting Azure Purview pricing are based on the following three di
- If the number of assets reduces in the data estate, and are then removed in the data map through subsequent incremental scans, the storage component automatically reduces and so the data map scales down
+#### Automated scanning, classification and ingestion
-#### Automated scanning & classification
+There are two major automated processes that can trigger ingestion of metadata into Azure Purview:
+1. Automatic scans using native [connectors](/azure-purview-connector-overview.md). This process includes three main steps:
+ - Metadata scan
+ - Automatic classification
+ - Ingestion of metadata into Azure Purview
+2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines. This process includes:
+ - Ingestion of metadata and lineage into Azure Purview if the Azure Purview account is connected to any Azure Data Factory or Azure Synapse pipelines.
+
+##### 1. Automatic scans using native connectors
- A **full scan** processes all assets within a selected scope of a data source whereas an **incremental scan** detects and processes assets, which have been created, modified, or deleted since the previous successful scan - All scans (full or Incremental scans) will pick up **updated, modified, or deleted** assets
Direct costs impacting Azure Purview pricing are based on the following three di
- Align your scan schedules with Self-Hosted Integration Runtime (SHIR) VMs (Virtual Machines) size to avoid extra costs linked to virtual machines
+##### 2. Automated ingestion using Azure Data Factory and/or Azure Synapse pipelines
+
+- Metadata and lineage are ingested from Azure Data Factory or Azure Synapse pipelines every time the pipelines run in the source system.
#### Advanced resource sets
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
This article explains how to find the IP address of your search service and conf
## Get IP addresses for "AzureCognitiveSearch" service tag
-If your search service workloads include skillset execution, create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment). This step explains how to get the range of IP addresses needed for this inbound rule.
+You must also create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment) so that resource availability for search services is optimized. This step explains how to get the range of IP addresses needed for this inbound rule.
An IP address range is defined for each region that supports Azure Cognitive Search. You can get this IP address range from the `AzureCognitiveSearch` service tag.
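One way to retrieve these ranges programmatically is the Azure CLI service tag discovery command, sketched below. The region name is a placeholder, and the JMESPath filter selects only the `AzureCognitiveSearch` entry.

```bash
# Sketch: list the address prefixes published for the AzureCognitiveSearch service tag.
az network list-service-tags --location <your-region> \
  --query "values[?name=='AzureCognitiveSearch'].properties.addressPrefixes" -o json
```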
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
When search and storage are in different regions, you can use the previously men
Azure Cognitive Search indexers are capable of efficiently extracting content from data sources, adding enrichments to the extracted content, optionally generating projections before writing the results to the search index.
-For optimum processing, a search service will determine an internal execution environment to set up the operation. You can't control or configure the environment, but it's important to know they exist so that you can account for them when setting up IP firewall rules.
-
-Depending on the number and types of tasks assigned, the indexer will run in one of two environments:
+For optimum processing, a search service will determine an internal execution environment to set up the operation. Depending on the number and types of tasks assigned, the indexer will run in one of two environments:
- An environment private to a specific search service. Indexers running in such environments share resources with other workloads (such as other customer-initiated indexing or querying workloads). Typically, only indexers that perform text-based indexing (for example, do not use a skillset) run in this environment. -- A multi-tenant environment hosting indexers that are resource intensive, such as those with skillsets. This environment is used to offload computationally intensive processing, leaving service-specific resources available for routine operations. This multi-tenant environment is managed and secured by Microsoft, at no extra cost to the customer.
+- A multi-tenant environment hosting indexers that are resource intensive, such as indexers with skillsets, indexers that process large documents, or indexers that process many documents. This environment is used to offload computationally intensive processing, leaving service-specific resources available for routine operations. This multi-tenant environment is managed and secured by Microsoft, at no extra cost to the customer.
For any given indexer run, Azure Cognitive Search determines the best environment in which to run the indexer. If you're using an IP firewall to control access to Azure resources, knowing about execution environments will help you set up an IP range that is inclusive of both, as discussed in the next section.
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Maximum limits on storage, workloads, and quantities of indexes and other object
<sup>3</sup> An upper limit exists for elements because having a large number of them significantly increases the storage required for your index. An element of a complex collection is defined as a member of that collection. For example, assume a [Hotel document with a Rooms complex collection](search-howto-complex-data-types.md#indexing-complex-types), each room in the Rooms collection is considered an element. During indexing, the indexing engine can safely process a maximum of 3000 elements across the document as a whole. [This limit](search-api-migration.md#upgrade-to-2019-05-06) was introduced in `api-version=2019-05-06` and applies to complex collections only, and not to string collections or to complex fields.
-You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across equivlaent service tiers in any region.
+You might find some variation in maximum limits if your service happens to be provisioned on a more powerful cluster. The limits here represent the common denominator. Indexes built to the above specifications will be portable across equivalent service tiers in any region.
<a name="document-limits"></a>
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
Shared private link resources that have failed Azure Resource Manager deployment
| Target resource not found | Existence of the target resource specified in `privateLinkResourceId` is checked only during the commencement of the Azure Resource Manager deployment. If the target resource is no longer available, then the deployment will fail. | You should ensure that the target resource is present in the specified subscription and resource group and isn't moved or deleted. | | Transient/other errors | The Azure Resource Manager deployment can fail if there is an infrastructure outage or because of other unexpected reasons. This should be rare and usually indicates a transient state. | Retry creating this resource at a later time. If the problem persists, reach out to Azure Support. |
+## Issues approving the backing private endpoint
+
+A private endpoint to the target Azure resource is created as specified in the shared private link creation request. This is one of the final steps in the asynchronous Azure Resource Manager deployment operation, but Azure Cognitive Search still needs to link the private endpoint's private IP address as part of its network configuration. Once this link is done, the `provisioningState` of the shared private link resource moves to the terminal success state `Succeeded`. You should only approve or deny (or, in general, modify the configuration of) the backing private endpoint after the state has transitioned to `Succeeded`. Modifying the private endpoint in any way before this point can result in an incomplete deployment operation and can cause the shared private link resource to end up (either immediately, or usually within a few hours) in a `Failed` state.
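One way to check that state before approving or modifying the backing private endpoint is sketched below; the names are placeholders, and the command assumes the `az search shared-private-link-resource` command group is available in your Azure CLI version.

```bash
# Sketch: show the shared private link resource and inspect its provisioning state.
az search shared-private-link-resource show \
  --name <shared-private-link-name> \
  --service-name <search-service-name> \
  --resource-group <resource-group>
# Proceed with approval only once the provisioningState in the output reads "Succeeded".
```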
+ ## Resource stalled in an "Updating" or "Incomplete" state Typically, a shared private link resource should go to a terminal state (`Succeeded` or `Failed`) in a few minutes after the request has been accepted by the search RP.
-In rare circumstances, Azure Cognitive Search can fail to correctly mark the state of the shared private link resource to a terminal state (`Succeeded` or `Failed`). This usually occurs due to an unexpected or catastrophic failure in the search RP. Shared private link resources are automatically transitioned to a `Failed` state if it has been "stuck" in a non-terminal state for more than 8 hours.
+In rare circumstances, Azure Cognitive Search can fail to correctly mark the state of the shared private link resource to a terminal state (`Succeeded` or `Failed`). This usually occurs due to an unexpected or catastrophic failure in the search RP. Shared private link resources are automatically transitioned to a `Failed` state if they have been "stuck" in a non-terminal state for more than a few hours.
-If you observe that the shared private link resource has not transitioned to a terminal state, wait for 8 hours to ensure that it becomes `Failed` before you can delete it and re-create it. Alternatively, instead of waiting you can try to create another shared private link resource with a different name (keeping all other parameters the same).
+If you observe that the shared private link resource has not transitioned to a terminal state, wait for a few hours to ensure that it becomes `Failed` before you can delete it and re-create it. Alternatively, instead of waiting you can try to create another shared private link resource with a different name (keeping all other parameters the same).
## Updating a shared private link resource
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
Having already set up [data collection from your CEF sources](connect-common-eve
1. You must run the following command on those machines to disable the synchronization of the agent with the Syslog configuration in Microsoft Sentinel. This ensures that the configuration change you made in the previous step does not get overwritten.
- ```c
- sudo su omsagent -c 'python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable'
+ ```bash
+ sudo -u omsagent python /opt/microsoft/omsconfig/Scripts/OMS_MetaConfigHelper.py --disable
``` ## Configure your device's logging settings
service-fabric Service Fabric Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-scenarios.md
Consider using the Service Fabric platform for the following types of applicatio
* **Data gathering, processing, and IoT**: Service Fabric handles large scale and has low latency through its stateful services. It can help process data on millions of devices where the data for the device and the computation are colocated.
- Customers who have built IoT services by using Service Fabric include [Honeywell](https://customers.microsoft.com/story/honeywell-manufacturing-hololens), [PCL Construction](https://customers.microsoft.com/story/pcl-construction-professional-services-azure), [Crestron](https://customers.microsoft.com/story/crestron-partner-professional-services-azure), [BMW](https://customers.microsoft.com/story/bmw-enables-driver-mobility-via-azure-service-fabric/),
-[Schneider Electric](https://customers.microsoft.com/story/schneider-electric-powers-engergy-solutions-on-azure-service-fabric), and
-[Mesh Systems](https://customers.microsoft.com/story/mesh-systems-lights-up-the-market-with-iot-based-azure-solutions).
+ Customers who have built IoT services by using Service Fabric include [PCL Construction](https://customers.microsoft.com/story/pcl-construction-professional-services-azure), [Citrix](https://customers.microsoft.com/story/citrix), [ASOS](https://customers.microsoft.com/story/asos-retail-and-consumer-goods-azure),
+[Oman Data Park](https://customers.microsoft.com/story/821095-oman-data-park-partner-professional-services-azure),
+[Kohler](https://customers.microsoft.com/story/kohler-konnect-azure-iot), and
+[Dover Fueling Systems](https://customers.microsoft.com/story/775087-microsoft-country-corner-dover-fueling-solutions-oil-and-gas-azure).
* **Gaming and session-based interactive applications**: Service Fabric is useful if your application requires low-latency reads and writes, such as in online gaming or instant messaging. Service Fabric enables you to build these interactive, stateful applications without having to create a separate store or cache. Visit [Azure gaming solutions](https://azure.microsoft.com/solutions/gaming/) for design guidance on [using Service Fabric in gaming services](/gaming/azure/reference-architectures/multiplayer-synchronous-sf).
- Customers who have built gaming services include [Next Games](https://customers.microsoft.com/story/next-games-media-telecommunications-azure) and [Digamore](https://customers.microsoft.com/story/digamore-entertainment-scores-with-a-new-gaming-platform-based-on-azure-service-fabric/). Customers who have built interactive sessions include [Honeywell with Hololens](https://customers.microsoft.com/story/honeywell-manufacturing-hololens).
+ Customers who have built gaming services include [Next Games](https://customers.microsoft.com/story/next-games-media-telecommunications-azure).
+ Customers who have built interactive sessions include [Honeywell with Hololens](https://customers.microsoft.com/story/honeywell-manufacturing-hololens).
* **Data analytics and workflow processing**: Applications that must reliably process events or streams of data benefit from the optimized reads and writes in Service Fabric. Service Fabric also supports application processing pipelines, where results must be reliable and passed on to the next processing stage without any loss. These pipelines include transactional and financial systems, where data consistency and computation guarantees are essential.
- Customers who have built business workflow services include [Zeiss Group](https://customers.microsoft.com/story/zeiss-group-focuses-on-azure-service-fabric-for-key-integration-platform), [Quorum Business Solutions](https://customers.microsoft.com/en-us/story/quorum-business-solutions-expand-energy-managemant-solutions-using-azure-service-fabric), and [Société General](https://customers.microsoft.com/en-us/story/societe-generale-speeds-real-time-market-quotes-using-azure-service-fabric).
+ Customers who have built business workflow services include [Zeiss Group](https://customers.microsoft.com/story/1366745613299736251-zeiss-group-focuses-on-azure-service-fabric-for-key-integration-platform) and
+ [PCL Construction](https://customers.microsoft.com/story/pcl-construction-professional-services-azure).
* **Computation on data**: Service Fabric enables you to build stateful applications that do intensive data computation. Service Fabric allows the colocation of processing (computation) and data in applications.
Consider using the Service Fabric platform for the following types of applicatio
For example, consider an application that performs near real-time recommendation selections for customers, with a round-trip time requirement of less than 100 milliseconds. The latency and performance characteristics of Service Fabric services provide a responsive experience to the user, compared with the standard implementation model of having to fetch the necessary data from remote storage. The system is more responsive because the computation of recommendation selection is colocated with the data and rules.
- Customers who have built computation services include [Solidsoft Reply](https://customers.microsoft.com/story/solidsoft-reply-platform-powers-e-verification-of-pharmaceuticals) and [Infosupport](https://customers.microsoft.com/story/service-fabric-customer-profile-info-support-and-fudura).
+ Customers who have built computation services include [ASOS](https://customers.microsoft.com/story/asos-retail-and-consumer-goods-azure) and [CCC](https://customers.microsoft.com/story/862085-ccc-information-services-partner-professional-services-azure-service-fabric).
* **Highly available services**: Service Fabric provides fast failover by creating multiple secondary service replicas. If a node, process, or individual service goes down due to hardware or other failure, one of the secondary replicas is promoted to a primary replica with minimal loss of service.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas
Germany | Germany Central, Germany Northeast
China | China East, China North, China North2, China East2
Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.
+Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers.<br/><br/> To use restricted regions as your primary or recovery region, please get yourselves allowlisted by raising a request [here](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process).
>[!NOTE] >
-> - To protect your VMs from or to any of the Restricted Regions, please get yourselves allowlisted by raising a request [here](https://docs.microsoft.com/troubleshoot/azure/general/region-access-request-process).
> - For **Brazil South**, you can replicate and fail over to these regions: Brazil Southeast, South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US.
> - Brazil South can only be used as a source region from which VMs can replicate using Site Recovery. It can't act as a target region. Note that if you fail over from Brazil South as a source region to a target, failback to Brazil South from the target region is supported. Brazil Southeast can only be used as a target region.
> - If the region in which you want to create a vault doesn't show, make sure your subscription has access to create resources in that region.
site-recovery Deploy Vmware Azure Replication Appliance Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-preview.md
You deploy an on-premises replication appliance when you use [Azure Site Recover
- The replication appliance coordinates communications between on-premises VMware and Azure. It also manages data replication.
- [Learn more](vmware-azure-architecture-preview.md) about the Azure Site Recovery replication appliance components and processes.
-## Hardware requirements
+## Pre-requisites
+
+### Hardware requirements
**Component** | **Requirement** |
CPU cores | 8
RAM | 32 GB
Number of disks | 3, including the OS disk - 80 GB, data disk 1 - 620 GB, data disk 2 - 620 GB
-## Software requirements
+### Software requirements
**Component** | **Requirement** |
Group policies | Don't enable these group policies: <br> - Prevent access to the
IIS | - No pre-existing default website <br> - No pre-existing website/application listening on port 443 <br>- Enable [anonymous authentication](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731244(v=ws.10)) <br> - Enable [FastCGI](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753077(v=ws.10)) setting
FIPS (Federal Information Processing Standards) | Do not enable FIPS mode|
-## Network requirements
+### Network requirements
|**Component** | **Requirement**|
| | |
FIPS (Federal Information Processing Standards) | Do not enable FIPS mode|
|NIC type | VMXNET3 (if the appliance is a VMware VM)|
-### Allow URLs
+#### Allow URLs
Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity:
Ensure the following URLs are allowed and reachable from the Azure Site Recovery
> [!NOTE] > Private links are not supported with the preview release.
-## Folder exclusions from Antivirus program
+### Folder exclusions from Antivirus program
-### If Antivirus Software is active on appliance
+#### If Antivirus Software is active on appliance
Exclude the following folders from the antivirus software to ensure smooth replication and to avoid connectivity issues.
C:\Program Files\Microsoft Azure VMware Discovery Service <br>
C:\Program Files\Microsoft On-Premise to Azure Replication agent <br> E:\ <br>
-### If Antivirus software is active on Source machine
+#### If Antivirus software is active on Source machine
If the source machine has antivirus software active, exclude the agent installation folder, C:\ProgramData\ASR\agent, to ensure smooth replication.
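If the antivirus product on the source machine is Microsoft Defender Antivirus, a minimal sketch of adding this exclusion from an elevated PowerShell session is shown below; other antivirus products have their own exclusion mechanisms, and the folder path is the one named above.

```powershell
# Sketch: exclude the Site Recovery agent folder from Microsoft Defender scans
# (run from an elevated PowerShell session on the source machine)
Add-MpPreference -ExclusionPath "C:\ProgramData\ASR\agent"

# Optionally confirm the exclusion was recorded
(Get-MpPreference).ExclusionPath
```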
+## Sizing and capacity
+An appliance that uses an in-built process server to protect the workload can handle up to 200 virtual machines, based on the following configurations:
+
+ |CPU | Memory | Cache disk size | Data change rate | Protected machines |
+ ||-|--||-|
+ |16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) | 32 GB | 1 TB | >1 TB to 2 TB | Use to replicate 151 to 200 machines.|
+
+- You can perform discovery of all the machines in a vCenter server, using any of the replication appliances in the vault.
+
+- You can [switch a protected machine](switch-replication-appliance-preview.md) between different appliances in the same vault, provided the selected appliance is healthy.
+
+For detailed information about how to use multiple appliances and fail over a replication appliance, see [this article](switch-replication-appliance-preview.md).
++ ## Prepare Azure account To create and register the Azure Site Recovery replication appliance, you need an Azure account with:
To create and register the Azure Site Recovery replication appliance, you need a
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner for the required permissions.
-## Prerequisites
+## Required permissions
**Here are the required key vault permissions**:
In case of any organizational restrictions, you can manually set up the Site Rec
4. After saving connectivity details, Select **Continue** to proceed to registration with Microsoft Azure.
-5. Ensure the [prerequisites](#prerequisites) are met, proceed with registration.
+5. Ensure the [prerequisites](#pre-requisites) are met, then proceed with registration.
![Register appliance](./media/deploy-vmware-azure-replication-appliance-preview/app-setup-register.png)
You will also be able to see a tab for **Discovered items** that lists all of th
![Replication appliance preview](./media/deploy-vmware-azure-replication-appliance-preview/discovered-items.png)
-## Sizing and capacity
-An appliance that uses an inbuilt process server to protect the workload can handle up to 200 virtual machines, based on the following configurations:
-
- |CPU | Memory | Cache disk size | Data change rate | Protected machines |
- ||-|--||-|
- |16 vCPUs (2 sockets * 8 cores @ 2.5 GHz) | 32 GB | 1 TB | >1 TB to 2 TB | Use to replicate 151 to 200 machines.|
--- You can perform discovery of all the machines in a vCenter server, using any of the replication appliances in the vault.--- You can [switch a protected machine](switch-replication-appliance-preview.md), between different appliances in the same vault, given the selected appliance is healthy.-
-For detailed information about how to use multiple appliances and failover a replication appliance, see [this article](switch-replication-appliance-preview.md)
## Next steps Set up disaster recovery of [VMware VMs](vmware-azure-set-up-replication-tutorial-preview.md) to Azure.
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Bl
> > To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
-## Client support
+## Known unsupported clients
-### Known supported clients
+The following clients are known to be incompatible with SFTP for Azure Blob Storage (preview). See [Supported algorithms](secure-file-transfer-protocol-support.md#supported-algorithms) for more information.
-- OpenSSH 7.4+-- WinSCP 5.17.10+-- PuTTY 0.74+-- FileZilla 3.53.0+-- SSH.NET 2020.0.0+-- libssh 1.8.2+-- Cyberduck 7.8.2+-- Maverick Legacy 1.7.15+-
-### Known unsupported clients
--- SSH.NET 2016.1.0-- libssh2 1.7.0
+- Axway
+- Five9
+- Kemp
+- Moveit
+- Mule
- paramiko 1.16.0-- AsyncSSH 2.1.0-- SSH Go
+- Salesforce
+- SSH.NET 2016.1.0
+- Workday
+- XFB.Gateway
> [!NOTE]
-> The client support lists above are not exhaustive and may change over time.
+> The unsupported client list above is not exhaustive and may change over time.
+
+## Unsupported operations
+
+| Category | Unsupported operations |
+|||
+| ACLs | <li>`chgrp` - change group<li>`chmod` - change permissions/mode<li>`chown` - change owner<li>`put/get -p` - preserving permissions |
+| Resume operations |<li>`reget`, `get -a` - resume download<li>`reput`, `put -a` - resume upload |
+| Random writes and appends | <li>Operations that include both READ and WRITE flags. For example: [SSH.NET create API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20SftpFileStream-,Create,-(string%20path))<li>Operations that include APPEND flag. For example: [SSH.NET append API](https://github.com/sshnet/SSH.NET/blob/develop/src/Renci.SshNet/SftpClient.cs#:~:text=public%20void-,AppendAllLines,-(string%20path%2C%20IEnumerable%3Cstring%3E%20contents)). |
+| Links |<li>`symlink` - creating symbolic links<li>`ln` - creating hard links<li>Reading links not supported |
+| Capacity Information | `df` - usage info for filesystem |
+| Extensions | Unsupported extensions include but are not limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com |
+| SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of the key exchange will fail. |
+| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) are not allowed from other protocols on blobs that are created by using SFTP. Full overwrites are allowed.|
## Authentication and authorization
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
You can use many different SFTP clients to securely connect and then transfer fi
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize Microsoft Security Development Lifecycle (SDL) approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations).
-SFTP clients commonly found to not support algorithms listed above include Apache SFTP server, Axway, Moveit, Five9, Workday, Mule, Kemp, Salesforce, XFB.
+### Known supported clients
+
+The following clients have compatible algorithm support with SFTP for Azure Blob Storage (preview). See [Limitations and known issues with SSH File Transfer Protocol (SFTP) support for Azure Blob Storage](secure-file-transfer-protocol-known-issues.md) if you are having trouble connecting.
+
+- AsyncSSH 2.1.0+
+- Cyberduck 7.8.2+
+- edtFTPjPRO 7.0.0+
+- FileZilla 3.53.0+
+- libssh 0.9.5+
+- Maverick Legacy 1.7.15+
+- OpenSSH 7.4+
+- paramiko 2.8.1+
+- PuTTY 0.74+
+- QualysML 12.3.41.1+
+- RebexSSH 5.0.7119.0+
+- ssh2js 0.1.20+
+- sshj 0.27.0+
+- SSH.NET 2020.0.0+
+- WinSCP 5.10+
+
+> [!NOTE]
+> The supported client list above is not exhaustive and may change over time.
## Connecting with SFTP
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
To change your routing preference to Internet routing:
2. Navigate to your storage account in the portal.
-3. Under **Settings**, choose **Networking**.
-
- > [!div class="mx-imgBorder"]
- > ![Networking menu option](./media/configure-network-routing-preference/networking-option.png)
+3. Under **Security + networking**, choose **Networking**.
4. In the **Firewalls and virtual networks** tab, under **Network Routing**, change the **Routing preference** setting to **Internet routing**.
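If you prefer to script this change, a minimal Azure PowerShell sketch is shown below; the resource group and account names are placeholders, and it assumes a version of the Az.Storage module that exposes the `-RoutingChoice` parameter. If the cmdlet version in use also exposes `-PublishMicrosoftEndpoint` and `-PublishInternetEndpoint`, those switches control the route-specific endpoints discussed later.

```azurepowershell-interactive
# Sketch: switch the storage account's routing preference to Internet routing
# (resource group and account names are placeholders)
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -RoutingChoice "InternetRouting"
```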
This preference affects only the route-specific endpoint. This preference doesn'
1. Navigate to your storage account in the portal.
-2. Under **Settings**, choose **Networking**.
+2. Under **Security + networking**, choose **Networking**.
3. In the **Firewalls and virtual networks** tab, under **Publish route-specific endpoints**, choose the routing preference of your route-specific endpoint, and then click **Save**.
If you configured a route-specific endpoint, you can find the endpoint in the pr
### [Portal](#tab/azure-portal)
-1. Under **Settings**, choose **Properties**.
-
- > [!div class="mx-imgBorder"]
- > ![properties menu option](./media/configure-network-routing-preference/properties.png)
+1. Under **Settings**, choose **Endpoints**.
2. The **Microsoft network routing** endpoint is shown for each service that supports routing preferences. This image shows the endpoint for the blob and file services.
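A hedged PowerShell alternative for inspecting the endpoints is sketched below; the names are placeholders, and route-specific endpoints appear only after they've been published.

```azurepowershell-interactive
# Sketch: list the account's primary endpoints, including any published route-specific endpoints
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
$account.PrimaryEndpoints | Format-List
```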
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Previously updated : 10/01/2021 Last updated : 04/01/2022
Some Azure tools offer the option to use Azure AD authorization to access Azure
| Azure IoT Hub | Supported. For more information, see [IoT Hub support for virtual networks](../../iot-hub/virtual-network-support.md). |
| Azure Cloud Shell | Azure Cloud Shell is an integrated shell in the Azure portal. Azure Cloud Shell hosts files for persistence in an Azure file share in a storage account. These files will become inaccessible if Shared Key authorization is disallowed for that storage account. For more information, see [Connect your Microsoft Azure Files storage](../../cloud-shell/overview.md#connect-your-microsoft-azure-files-storage). <br /><br /> To run commands in Azure Cloud Shell to manage storage accounts for which Shared Key access is disallowed, first make sure that you have been granted the necessary permissions to these accounts via Azure RBAC. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md). |
+## Disallow Shared Key authorization to use Azure AD Conditional Access
+
+To protect an Azure Storage account with Azure AD [Conditional Access](../../active-directory/conditional-access/overview.md) policies, you must disallow Shared Key authorization for the storage account. Follow the steps described in [Detect the type of authorization used by client applications](#detect-the-type-of-authorization-used-by-client-applications) to analyze the potential impact of this change for existing storage accounts before disallowing Shared Key authorization.
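Once that analysis is complete, a minimal Azure PowerShell sketch for disallowing Shared Key authorization on an account is shown below; the resource group and account names are placeholders.

```azurepowershell-interactive
# Sketch: disallow Shared Key authorization so only Azure AD authorization is accepted
# (resource group and account names are placeholders)
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -AllowSharedKeyAccess $false
```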
+ ## Transition Azure Files and Table storage workloads Azure Storage supports Azure AD authorization for requests to Blob and Queue storage only. If you disallow authorization with Shared Key for a storage account, requests to Azure Files or Table storage that use Shared Key authorization will fail. Because the Azure portal always uses Shared Key authorization to access file and table data, if you disallow authorization with Shared Key for the storage account, you will not be able to access file or table data in the Azure portal.
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
Share names must be all lower case letters, numbers, and single hyphens but cann
# [PowerShell](#tab/azure-powershell)
-Now that you've created a storage account, you can create your first Azure file share. Create a file share by using the [New-AzRmStorageShare](/powershell/module/az.storage/New-AzRmStorageShare) cmdlet. This example creates a share named **myshare**.
+Now that you've created a storage account, you can create your first Azure file share by using the [New-AzRmStorageShare](/powershell/module/az.storage/New-AzRmStorageShare) cmdlet. This example creates a share named **myshare** with a quota of 1024 GiB. The quota can be a maximum of 5 TiB, or 100 TiB with large file shares enabled on the storage account.
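For example, a minimal sketch of the full call, assuming placeholder resource group and storage account names, is:

```azurepowershell-interactive
# Sketch: create a 1024 GiB file share (resource group and account names are placeholders)
$shareName = "myshare"
New-AzRmStorageShare `
    -ResourceGroupName "myResourceGroup" `
    -StorageAccountName "mystorageaccount" `
    -Name $shareName `
    -QuotaGiB 1024
```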
```azurepowershell-interactive $shareName = "myshare"
New-AzRmStorageShare `
# [Azure CLI](#tab/azure-cli)
-Now that you've created a storage account, you can create your first Azure file share. Create file shares by using the [az storage share-rm create](/cli/azure/storage/share-rm#az-storage-share-rm-create) command. This example creates an Azure file share named **myshare**:
+Now that you've created a storage account, you can create your first Azure file share by using the [az storage share-rm create](/cli/azure/storage/share-rm#az-storage-share-rm-create) command. This example creates a share named **myshare** with a quota of 1024 GiB. The quota can be a maximum of 5 TiB, or 100 TiB with large file shares enabled on the storage account.
```azurecli-interactive shareName="myshare"
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
The following example demonstrates how to add a message to your queue.
$queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("This is message 1") # Add a new message to the queue
-$queue.CloudQueue.AddMessageAsync($QueueMessage)
+$queue.CloudQueue.AddMessageAsync($queueMessage)
# Add two more messages to the queue $queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("This is message 2")
-$queue.CloudQueue.AddMessageAsync($QueueMessage)
+$queue.CloudQueue.AddMessageAsync($queueMessage)
$queueMessage = [Microsoft.Azure.Storage.Queue.CloudQueueMessage]::new("This is message 3")
-$queue.CloudQueue.AddMessageAsync($QueueMessage)
+$queue.CloudQueue.AddMessageAsync($queueMessage)
``` If you use the [Azure Storage Explorer](https://storageexplorer.com), you can connect to your Azure account and view the queues in the storage account, and drill down into a queue to view the messages on the queue.
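To check the messages from PowerShell itself, a hedged sketch of reading one message back from the same `$queue` object is shown below; the three-argument `GetMessageAsync` overload and the ten-second visibility timeout are illustrative choices.

```powershell
# Sketch: read a single message from the queue and display its contents
$invisibleTimeout = [System.TimeSpan]::FromSeconds(10)
$retrievedMessage = $queue.CloudQueue.GetMessageAsync($invisibleTimeout, $null, $null).Result
$retrievedMessage.AsString
```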
synapse-analytics Restore Sql Pool From Deleted Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool-from-deleted-workspace.md
+
+ Title: Restore a dedicated SQL pool from a dropped workspace
+description: How-to guide for restoring a dedicated SQL pool from a dropped workspace.
++++ Last updated : 03/29/2022++++
+# Restore a dedicated SQL pool from a deleted workspace
+
+In this article, you learn how to restore a dedicated SQL pool in Azure Synapse Analytics after an accidental drop of a workspace using PowerShell.
+
+> [!NOTE]
+> This guidance is for Synapse workspace dedicated SQL pools only. For a standalone dedicated SQL pool (formerly SQL DW), follow the guidance in [Restore SQL pool from deleted server](../sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md).
+
+## Before you begin
++
+## Restore the SQL pool from the dropped workspace
+
+1. Open PowerShell.
+2. Connect to your Azure account.
+3. Set the context to the subscription that contains the workspace that was dropped.
+4. Specify the approximate datetime the workspace was dropped.
+5. Construct the resource ID for the database you wish to recover from the dropped workspace.
+6. Restore the database from the dropped workspace.
+7. Verify the status of the recovered database as 'online'.
++
+```powershell
+$SubscriptionID="<YourSubscriptionID>"
+$ResourceGroupName="<YourResourceGroupName>"
+$WorkspaceName="<YourWorkspaceNameWithoutURLSuffixSeeNote>" # Without sql.azuresynapse.net
+$DatabaseName="<YourDatabaseName>"
+$TargetResourceGroupName="<YourTargetResourceGroupName>"
+$TargetWorkspaceName="<YourtargetServerNameWithoutURLSuffixSeeNote>"
+$TargetDatabaseName="<YourDatabaseName>"
+
+Connect-AzAccount
+Set-AzContext -SubscriptionID $SubscriptionID
+
+# Define the approximate point in time the workspace was dropped as DroppedDateTime "yyyy-MM-ddThh:mm:ssZ" (ex. 2022-01-01T16:15:00Z)
+$PointInTime="<DroppedDateTime>"
+$DroppedDateTime = Get-Date -Date $PointInTime
++
+# Construct the resource ID of the SQL pool you wish to recover. The format requires the Microsoft.Sql resource provider.
+$SourceDatabaseID = "/subscriptions/"+$SubscriptionID+"/resourceGroups/"+$ResourceGroupName+"/providers/Microsoft.Sql/servers/"+$WorkspaceName+"/databases/"+$DatabaseName
+
+# Restore to the target workspace with the source SQL pool.
+$RestoredDatabase = Restore-AzSynapseSqlPool -FromDroppedSqlPool -DeletionDate $DroppedDateTime -TargetSqlPoolName $TargetDatabaseName -ResourceGroupName $TargetResourceGroupName -WorkspaceName $TargetWorkspaceName -ResourceId $SourceDatabaseID
+
+# Verify the status of restored database
+$RestoredDatabase.status
+```
+
+## Troubleshooting
+If "An unexpected error occurred while processing the request." message is received, the original database may not have any recovery points available due to the original workspace being short lived. Typically this is when the workspace existed for less than one hour.
+
+## Next Steps
+- [Create a restore point](sqlpool-create-restore-point.md)
+- [Restore a SQL pool](restore-sql-pool.md)
synapse-analytics Sql Data Warehouse Restore From Deleted Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-restore-from-deleted-server.md
+
+ Title: Restore a dedicated SQL pool (formerly SQL DW) from a deleted server
+description: How-to guide for restoring a dedicated SQL pool from a deleted server.
++++ Last updated : 04/01/2022++++
+# Restore a dedicated SQL pool (formerly SQL DW) from a deleted server
+
+In this article, you learn how to restore a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics after an accidental drop of a server using PowerShell.
+
+> [!NOTE]
+> This guidance is for standalone dedicated SQL pools (formerly SQL DW) only. For Synapse workspace dedicated SQL pools, follow the guidance in [Restore SQL pool from deleted workspace](../backuprestore/restore-sql-pool-from-deleted-workspace.md).
+
+## Before you begin
++
+## Restore the SQL pool from the deleted server
+
+1. Open PowerShell.
+2. Connect to your Azure account.
+3. Set the context to the subscription that contains the server that was dropped.
+4. Specify the approximate datetime the server was dropped.
+5. Construct the resource ID for the database you wish to recover from the dropped server.
+6. Restore the database from the dropped server.
+7. Verify the status of the recovered database as 'online'.
++
+```powershell
+$SubscriptionID="<YourSubscriptionID>"
+$ResourceGroupName="<YourResourceGroupName>"
+$ServerName="<YourServerNameWithoutURLSuffixSeeNote>" # Without database.windows.net
+$DatabaseName="<YourDatabaseName>"
+$TargetServerName="<YourtargetServerNameWithoutURLSuffixSeeNote>"
+$TargetDatabaseName="<YourDatabaseName>"
+
+Connect-AzAccount
+Set-AzContext -SubscriptionId $SubscriptionID
+
+# Define the approximate point in time the server was dropped as DroppedDateTime "yyyy-MM-ddThh:mm:ssZ" (ex. 2022-01-01T16:15:00Z)
+$PointInTime="<DroppedDateTime>"
+$DroppedDateTime = Get-Date -Date $PointInTime
+
+# Construct the resource ID of the database you wish to recover. The format requires the Microsoft.Sql resource provider and includes the approximate date time the server was dropped.
+$SourceDatabaseID = "/subscriptions/"+$SubscriptionID+"/resourceGroups/"+$ResourceGroupName+"/providers/Microsoft.Sql/servers/"+$ServerName+"/restorableDroppedDatabases/"+$DatabaseName+","+$DroppedDateTime.ToUniversalTime().ToFileTimeUtc().ToString()
+
+# Restore to the target server with the source database.
+$RestoredDatabase = Restore-AzSqlDatabase -FromDeletedDatabaseBackup -DeletionDate $DroppedDateTime -ResourceGroupName $ResourceGroupName -ServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName -ResourceId $SourceDatabaseID
+
+# Verify the status of restored database
+$RestoredDatabase.status
+```
+
+## Troubleshooting
+If "An unexpected error occurred while processing the request." message is received, the original database may not have any recovery points available due to the original server being short lived. Typically this is when the server existed for less than one hour.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You'll need to enter the following identity parameters when deploying session ho
You have a choice of operating systems that you can use for session hosts to provide virtual desktops and remote apps. You can use different operating systems with different host pools to provide flexibility to your users. Supported dates are inline with the [Microsoft Lifecycle Policy](/lifecycle/). We support the following 64-bit versions of these operating systems:
-|Operating system |Applicable license|
+|Operating system |User access rights|
|||
-|<ul><li>Windows 11 Enterprise multi-session</li><li>Windows 11 Enterprise</li><li>Windows 10 Enterprise multi-session, version 1909 and later</li><li>Windows 10 Enterprise, version 1909 and later</li><li>Windows 7 Enterprise</li></ul>|<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>|
-|<ul><li>Windows Server 2022</li><li>Windows Server 2019</li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li></ul>|<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses</li></ul>|
+|<ul><li>Windows 11 Enterprise multi-session</li><li>Windows 11 Enterprise</li><li>Windows 10 Enterprise multi-session, version 1909 and later</li><li>Windows 10 Enterprise, version 1909 and later</li><li>Windows 7 Enterprise</li></ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) instead of license entitlement.|
+|<ul><li>Windows Server 2022</li><li>Windows Server 2019</li><li>Windows Server 2016</li><li>Windows Server 2012 R2</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.|
> [!NOTE] > Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table. In addition, Windows 7 doesn't support any VHD or VHDX-based profile solutions hosted on managed Azure Storage due to a sector size limitation.
virtual-desktop Troubleshoot Powershell 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-powershell-2019.md
Title: Azure Virtual Desktop (classic) PowerShell - Azure
description: How to troubleshoot issues with PowerShell when you set up a Azure Virtual Desktop (classic) tenant environment. Previously updated : 03/30/2020 Last updated : 04/05/2022
Get-RdsDiagnosticActivities -Deployment -username <username>
> [!NOTE] > New-RdsRoleAssignment cannot give permissions to a user that doesn't exist in the Azure Active Directory (AD).
+## Error: SessionHostPool could not be deleted
+
+This error usually happens when you run the following command to try to remove a host pool.
+
+```powershell
+Remove-RdsHostPool -TenantName <TenantName> -Name <HostPoolName>
+```
+
+**Cause:** The command fails if you run it before deleting the host pool's leaf objects, such as its session hosts.
+
+**Fix:** Run the following command to delete the session hosts first.
+
+```powershell
+Get-RdsSessionHost -TenantName <TenantName> -HostPoolName <HostPoolName> | Remove-RdsSessionHost -Force
+```
+
+Using the -Force parameter lets you delete the session hosts even if they have assigned users.
+ ## Next steps - For an overview on troubleshooting Azure Virtual Desktop and the escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview-2019.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md
If your application demand increases, the load on the VM instances in your scale
1. Open the Azure portal and select **Resource groups** from the menu on the left-hand side of the dashboard. 2. Select the resource group that contains your scale set, then choose your scale set from the list of resources.
-3. Choose **Scaling** from the menu on the left-hand side of the scale set window. Select the button to **Enable autoscale**:
+3. Choose **Scaling** from the menu on the left-hand side of the scale set window. Select **Custom autoscale**:
- ![Enable autoscale in the Azure portal](media/virtual-machine-scale-sets-autoscale-portal/enable-autoscale.png)
+ :::image type="content" source="media/virtual-machine-scale-sets-autoscale-portal/enable-autoscale.png" alt-text="Enable autoscale in the Azure portal":::
-4. Enter a name for your settings, such as *autoscale*, then select the option to **Add a rule**.
+4. Select the option to **Add a rule**.
+ :::image type="content" source="media/virtual-machine-scale-sets-autoscale-portal/add-autoscale-rule.png" alt-text="Add autoscale rule in the Azure portal":::
5. Let's create a rule that increases the number of VM instances in a scale set when the average CPU load is greater than 70% over a 10-minute period. When the rule triggers, the number of VM instances is increased by 20%. In scale sets with a small number of VM instances, you could set the **Operation** to *Increase count by* and then specify *1* or *2* for the *Instance count*. In scale sets with a large number of VM instances, an increase of 10% or 20% VM instances may be more appropriate.
If your application demand increases, the load on the VM instances in your scale
The following examples show a rule created in the Azure portal that matches these settings:
- ![Create an autoscale rule to increase the number of VM instances](media/virtual-machine-scale-sets-autoscale-portal/rule-increase.png)
+
+ :::image type="content" source="media/virtual-machine-scale-sets-autoscale-portal/rule-increase.png" alt-text="Create an autoscale rule to increase the number of VM instances":::
> [!NOTE] > Tasks running inside the instance will abruptly stop and the instance will scale down once it completes the cooling period.
Your autoscale profile must define a minimum, maximum, and default number of VM
## Monitor number of instances in a scale set To see the number and status of VM instances, select **Instances** from the menu on the left-hand side of the scale set window. The status indicates if the VM instance is *Creating* as the scale set automatically scales out, or is *Deleting* as the scale automatically scales in.
-![View a list of scale set VM instances](media/virtual-machine-scale-sets-autoscale-portal/view-instances.png)
- ## Autoscale based on a schedule The previous examples automatically scaled a scale set in or out with basic host metrics such as CPU usage. You can also create autoscale rules based on schedules. These schedule-based rules allow you to automatically scale out the number of VM instances ahead of an anticipated increase in application demand, such as core work hours, and then automatically scale in the number of instances at a time that you anticipate less demand, such as the weekend.
-1. Choose **Scaling** from the menu on the left-hand side of the scale set window. To delete the existing autoscale rules created in the previous examples, choose the trash can icon.
-
- ![Delete the existing autoscale rules](media/virtual-machine-scale-sets-autoscale-portal/delete-rules.png)
+1. Choose **Scaling** from the menu on the left-hand side of the scale set window.
2. Choose to **Add a scale condition**. Select the pencil icon next to rule name, and provide a name such as *Scale out during each work day*.
- ![Rename the default autoscale rule](media/virtual-machine-scale-sets-autoscale-portal/rename-rule.png)
+ :::image type="content" source="media/virtual-machine-scale-sets-autoscale-portal/rename-rule.png" alt-text="Rename the default autoscale rule":::
3. Select the radio button to **Scale to a specific instance count**. 4. To scale up the number of instances, enter *10* as the instance count.
The previous examples automatically scaled a scale set in or out with basic host
8. Choose to **Add a scale condition** again. Repeat the process to create a schedule named *Scale in during the evening* that scales to *3* instances, repeats every weekday, and starts at *18:00*. 9. To apply your schedule-based autoscale rules, select **Save**.
- ![Create autoscale rules that scale on a schedule](media/virtual-machine-scale-sets-autoscale-portal/schedule-autoscale.PNG)
+ :::image type="content" source="media/virtual-machine-scale-sets-autoscale-portal/schedule-autoscale.png" alt-text="Create autoscale rules that scale on a schedule":::
To see how your autoscale rules are applied, select **Run history** across the top of the **Scaling** window. The graph and events list shows when the autoscale rules trigger and the number of VM instances in your scale set increases or decreases.
virtual-machines Ebdsv5 Ebsv5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md
Last updated 04/05/2022
-# Ebv5-series
+# Ebdsv5 and Ebsv5 series
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). These V
| Size | vCPU | Memory: GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps | Max uncached storage throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps | Max NICs | Network bandwidth |
| | | | | | | | | |
-| Standard_E2bds_v5 | 2 | 16 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
-| Standard_E4bds_v5 | 4 | 32 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
-| Standard_E8bds_v5 | 8 | 64 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
-| Standard_E16bds_v5 | 16 | 128 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500
-| Standard_E32bds_v5 | 32 | 256 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
-| Standard_E48bds_v5 | 48 | 384 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
-| Standard_E64bds_v5 | 64 | 512 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
+| Standard_E2bs_v5 | 2 | 16 | 4 | 9000/125 | 5500/156 | 10000/1200 | 2 | 10000 |
+| Standard_E4bs_v5 | 4 | 32 | 8 | 19000/250 | 11000/350 | 20000/1200 | 2 | 10000 |
+| Standard_E8bs_v5 | 8 | 64 | 16 | 38000/500 | 22000/625 | 40000/1200 | 4 | 10000 |
+| Standard_E16bs_v5 | 16 | 128 | 32 | 75000/1000 | 44000/1250 | 64000/2000 | 8 | 12500
+| Standard_E32bs_v5 | 32 | 256 | 32 | 150000/1250 | 88000/2500 | 120000/4000 | 8 | 16000 |
+| Standard_E48bs_v5 | 48 | 384 | 32 | 225000/2000 | 120000/4000 | 120000/4000 | 8 | 16000 |
+| Standard_E64bs_v5 | 64 | 512 | 32 | 300000/4000 | 120000/4000 | 120000/4000 | 8 | 20000 |
> [!NOTE] > Accelerated networking is required and turned on by default on all Ebsv5 VMs.
Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/cal
## Next steps -- Use the Azure [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
+- Use the Azure [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
virtual-machines Os Disk Swap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/os-disk-swap.md
If you have an existing VM, but you want to swap the disk for a backup disk or another OS disk, you can use the Azure CLI to swap the OS disks. You don't have to delete and recreate the VM. You can even use a managed disk in another resource group, as long as it isn't already in use.
-The VM does need to be stopped\deallocated, then the resource ID of the managed disk can be replaced with the resource ID of a different managed disk.
+The VM does not need to be stopped\deallocated. The resource ID of the managed disk can be replaced with the resource ID of a different managed disk.
Make sure that the VM size and storage type are compatible with the disk you want to attach. For example, if the disk you want to use is in Premium Storage, then the VM needs to be capable of Premium Storage (like a DS-series size).
az disk list \
```
-Use [az vm stop](/cli/azure/vm) to stop\deallocate the VM before swapping the disks.
+(Optional) Use [az vm stop](/cli/azure/vm) to stop\deallocate the VM before swapping the disks.
```azurecli-interactive az vm stop \
virtual-machines Sizes Previous Gen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-previous-gen.md
Premium Storage caching: Supported
<sup>1</sup> The maximum disk throughput (IOPS or MBps) possible with a GS series VM may be limited by the number, size and striping of the attached disk(s). For details, see [Design for high performance](premium-storage-performance.md).
-<sup>2</sup> Instance is isolated to hardware dedicated to a single customer.
+<sup>2</sup> Isolation feature retired on 2/28/2022. For information, see the [retirement announcement](https://azure.microsoft.com/updates/the-g5-and-gs5-azure-vms-will-no-longer-be-hardwareisolated-on-28-february-2022/).
<sup>3</sup> Constrained core sizes available.
Premium Storage caching: Not Supported
| Standard_G4 | 16 | 224 | 3072 | 48000/750/375 | 64/64x500 | 8/16000 | | Standard_G5&nbsp;<sup>1</sup> | 32 | 448 | 6144 | 96000/1500/750| 64/64x500 | 8/20000 |
-<sup>1</sup> Instance is isolated to hardware dedicated to a single customer.
+<sup>1</sup> Isolation feature retired on 2/28/2022. For information, see the [retirement announcement](https://azure.microsoft.com/updates/the-g5-and-gs5-azure-vms-will-no-longer-be-hardwareisolated-on-28-february-2022/).
<br> ### NV-series
The ND-series virtual machines are a new addition to the GPU family designed for
## Next steps
-Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Os Disk Swap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/os-disk-swap.md
If you have an existing VM, but you want to swap the disk for a backup disk or another OS disk, you can use Azure PowerShell to swap the OS disks. You don't have to delete and recreate the VM. You can even use a managed disk in another resource group, as long as it isn't already in use.
-
-
-The VM does need to be stopped\deallocated, then the resource ID of the managed disk can be replaced with the resource ID of a different managed disk.
+The VM does not need to be stopped\deallocated. The resource ID of the managed disk can be replaced with the resource ID of a different managed disk.
Make sure that the VM size and storage type are compatible with the disk you want to attach. For example, if the disk you want to use is in Premium Storage, then the VM needs to be capable of Premium Storage (like a DS-series size). Both disks must also be the same size. Also ensure that you're not mixing an un-encrypted VM with an encrypted OS disk; this is not supported. If the VM doesn't use Azure Disk Encryption, then the OS disk being swapped in shouldn't be using Azure Disk Encryption. If disks are using Disk Encryption Sets, both disks should belong to the same Disk Encryption Set.
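For orientation, a minimal sketch of the full swap with Azure PowerShell is shown below; the resource group, VM, and disk names are placeholders, and the steps that follow walk through the same flow in more detail.

```azurepowershell-interactive
# Sketch: point the VM at a different managed OS disk (all names are placeholders)
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
$disk = Get-AzDisk -ResourceGroupName "myResourceGroup" -DiskName "myBackupOSDisk"

# Swap in the new OS disk and push the change to the VM
Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm
```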
When you have the name of the disk that you would like to use, set that as the O
# Get the VM $vm = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
-# Make sure the VM is stopped\deallocated
+# (Optional) Stop/deallocate the VM
Stop-AzVM -ResourceGroupName myResourceGroup -Name $vm.Name -Force # Get the new disk that you want to swap in
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules.
-You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.
+You can use service tags to define network access controls on [network security groups](./network-security-groups-overview.md#security-rules), [Azure Firewall](../firewall/service-tags.md), and [user-defined routes](./virtual-networks-udr-overview.md#service-tags-for-user-defined-routes-preview). Use service tags in place of specific IP addresses when you create security rules and routes. By specifying the service tag name, such as **ApiManagement**, in the appropriate *source* or *destination* field of a security rule, you can allow or deny the traffic for the corresponding service. By specifying the service tag name in the address prefix of a route, you can route traffic intended for any of the prefixes encapsulated by the service tag to a desired next hop type.
> [!NOTE]
-> As of March 2021, you can also use Service Tags in place of explicit IP ranges in [user defined routes](./virtual-networks-udr-overview.md). This feature is currently in Public Preview and will move to GA in March 2022.
+> As of March 2022, using service tags in place of explicit address prefixes in [user defined routes](./virtual-networks-udr-overview.md#user-defined) is out of preview and generally available.
You can use service tags to achieve network isolation and protect your Azure resources from the general Internet while accessing Azure services that have public endpoints. Create inbound/outbound network security group rules to deny traffic to/from **Internet** and allow traffic to/from **AzureCloud** or other [available service tags](#available-service-tags) of specific Azure services.
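As one illustration, a hedged Azure PowerShell sketch of such rules is shown below; the rule names, priorities, resource group, and NSG name are placeholders rather than values from this article.

```azurepowershell-interactive
# Sketch: outbound rule allowing traffic to the Storage service tag (names and priorities are placeholders)
$allowStorage = New-AzNetworkSecurityRuleConfig -Name "Allow-Storage-All" `
    -Access Allow -Protocol * -Direction Outbound -Priority 100 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix Storage -DestinationPortRange *

# Sketch: lower-priority rule denying all other Internet-bound traffic
$denyInternet = New-AzNetworkSecurityRuleConfig -Name "Deny-Internet-All" `
    -Access Deny -Protocol * -Direction Outbound -Priority 200 `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix Internet -DestinationPortRange *

New-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Location "eastus" `
    -Name "myNsg" -SecurityRules $allowStorage, $denyInternet
```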
virtual-wan How To Forced Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-forced-tunnel.md
+
+ Title: 'Configure forced tunneling for Virtual WAN Point-to-site VPN'
+
+description: Learn to configure forced tunneling for P2S VPN in Virtual WAN.
++++ Last updated : 3/25/2022+++
+# Configure forced tunneling for Virtual WAN Point-to-site VPN
+
+Forced tunneling allows you to send **all** traffic (including Internet-bound traffic) from remote users to Azure. In Virtual WAN, forced tunneling for Point-to-site VPN remote users signifies that the 0.0.0.0/0 default route is advertised to remote VPN users.
+
+## Creating a Virtual WAN hub
+
+The steps in this article assume that you've already deployed a virtual WAN with one or more hubs.
+
+To create a new virtual WAN and a new hub, use the steps in the following articles:
+
+* [Create a virtual WAN](virtual-wan-site-to-site-portal.md#openvwan)
+* [Create a virtual hub](virtual-wan-site-to-site-portal.md#hub)
+
+## Setting up Point-to-site VPN
+
+The steps in this article also assume that you already deployed a Point-to-site VPN gateway in the Virtual WAN hub, and that you have created Point-to-site VPN profiles to assign to the gateway.
+
+To create the Point-to-site VPN gateway and related profiles, see [Create a Point-to-site VPN gateway](virtual-wan-point-to-site-portal.md).
+
+## Advertising default route to clients
+
+There are a couple of ways to configure forced-tunneling and advertise the default route (0.0.0.0/0) to your remote user VPN clients connected to Virtual WAN.
+
+* You can specify a static 0.0.0.0/0 route in the defaultRouteTable with next hop Virtual Network Connection. This will force all internet-bound traffic to be sent to a Network Virtual Appliance deployed in that spoke Virtual Network (see the sketch after this list). For more detailed instructions, consider the alternate workflow described in [Route through NVAs](scenario-route-through-nvas-custom.md).
+* You can use Azure Firewall Manager to configure Virtual WAN to send all internet-bound traffic via Azure Firewall deployed in the Virtual WAN hub. For configuration steps and a tutorial, see the Azure Firewall Manager documentation [Securing virtual hubs](../firewall-manager/secure-cloud-network.md). Alternatively, this can also be configured via using an Internet Traffic Routing Policy. For more information, see [Routing Intent and Routing Policies](how-to-routing-policies.md).
+* You can use Firewall Manager to send internet traffic via a third-party security provider. For more information on this capability, see [Trusted security providers](../firewall-manager/deploy-trusted-security-partner.md).
+* You can configure one of your branches (Site-to-site VPN, ExpressRoute circuit) to advertise the 0.0.0.0/0 route to Virtual WAN.
+
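For the first option, a hedged Azure PowerShell sketch of adding the static 0.0.0.0/0 route to the hub's defaultRouteTable is shown below; it assumes the Az.Network virtual hub routing cmdlets, and all resource names, the subscription ID, and the virtual network connection ID are placeholders.

```azurepowershell-interactive
# Sketch: add a 0.0.0.0/0 static route to the defaultRouteTable, with next hop set to the
# spoke virtual network connection that hosts the NVA (all names and IDs are placeholders)
$vnetConnectionId = "/subscriptions/<subscription-id>/resourceGroups/sampleRG/providers/Microsoft.Network/virtualHubs/sampleHub/hubVirtualNetworkConnections/spokeNvaConnection"

$defaultRoute = New-AzVHubRoute -Name "default-to-nva" `
    -Destination @("0.0.0.0/0") -DestinationType "CIDR" `
    -NextHop $vnetConnectionId -NextHopType "ResourceId"

Update-AzVHubRouteTable -ResourceGroupName "sampleRG" -VirtualHubName "sampleHub" `
    -Name "defaultRouteTable" -Route @($defaultRoute)
```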
+After configuring one of the above four methods, make sure the EnableInternetSecurity flag is turned on for your Point-to-site VPN gateway. This flag must be set to true for your clients to be properly configured for forced-tunneling.
+
+To turn on the EnableInternetSecurity flag, use the following PowerShell command, substituting the appropriate values for your environment.
+
+```azurepowershell-interactive
+Update-AzP2sVpnGateway -ResourceGroupName "sampleRG" -Name "p2sgwsamplename" -EnableInternetSecurityFlag
+```
+
+## Downloading the Point-to-site VPN profile
+
+To download the Point-to-site VPN profile, see [global and hub profiles](global-hub-profile.md). The information in the zip-file downloaded from Azure portal is critical to properly configuring your clients.
+
+## Configuring forced-tunneling for Azure VPN clients (OpenVPN)
+
+The steps to configure forced-tunneling are different, depending on the operating system of the end user device.
+
+## Windows clients
+
+> [!NOTE]
+> For Windows clients, forced tunneling with the Azure VPN client is only available with software version 2:1900:39.0 or newer.
+
+1. Validate that the version of your Azure VPN client is compatible with forced tunneling. To do this, select the three dots at the bottom of the Azure VPN client, and then select Help. Alternatively, the keyboard shortcut to navigate to Help is Ctrl-H. The version number can be found towards the top of the screen. Ensure your version number is **2:1900:39.0** or later.
+
+    :::image type="content" source="./media/virtual-wan-forced-tunnel/vpn-client-version.png" alt-text="Screenshot showing the Azure VPN client version on the Help screen." lightbox="./media/virtual-wan-forced-tunnel/vpn-client-version.png":::
+
+1. Open the zip-file downloaded from the previous section. You should see a folder titled **AzureVPN**. Open the folder and open **azurevpnconfig.xml** in your favorite XML editing tool.
+
+1. In **azurevpnconfig.xml**, there's a field called **version**. If the number between the version tags is **1**, change the **version** number to **2**.
+
+ ```xml
+ <version>2</version>
+ ```
+
+1. Import the profile into the Azure VPN client. For more information on how to import a profile, see [Azure VPN client import instructions](openvpn-azure-ad-client.md).
+
+1. Connect to the newly added connection. You are now force-tunneling all traffic to Azure Virtual WAN.
+
+## MacOS clients
+
+Once a macOS client learns the default route from Azure, forced tunneling is automatically configured on the client device. There are no extra steps to take. For instructions on how to use the macOS Azure VPN client to connect to the Virtual WAN Point-to-site VPN gateway, see the [macOS Configuration Guide](openvpn-azure-ad-client-mac.md).
+
+## Configuring forced-tunneling for IKEv2 clients
+
+For IKEv2 clients, you **cannot** directly use the executable profiles downloaded from Azure portal. To properly configure the client, you'll need to run a PowerShell script or distribute the VPN profile via Intune.
+
+Based on the authentication method configured on your Point-to-site VPN gateway, use a different EAP Configuration file. Sample EAP Configuration files are provided below.
+
+### IKEv2 with user certificate authentication
+
+To use user certificates to authenticate remote users, use the sample PowerShell script below. To properly import the contents of the VpnSettings and EAP XML files into PowerShell, navigate to the appropriate directory before running the **Get-Content** PowerShell command.
+
+```azurepowershell-interactive
+# specify the name of the VPN Connection to be installed on the client
+$vpnConnectionName = "SampleConnectionName"
+
+# get the VPN Server FQDN from the profile downloaded from Azure Portal
+$downloadedXML = [xml] (Get-Content VpnSettings.xml)
+$vpnserverFQDN = $downloadedXML.VpnProfile.VpnServer
+
+# use the appropriate EAP XML file based on the authentication method specified on the Point-to-site VPN gateway
+$EAPXML = [xml] (Get-Content EapXML.xml)
+
+# create the VPN Connection
+Add-VpnConnection -Name $vpnConnectionName -ServerAddress $vpnserverFQDN -TunnelType Ikev2 -AuthenticationMethod Eap -EapConfigXmlStream $EAPXML
+
+# enable forced tunneling
+Set-VpnConnection -Name $vpnConnectionName -SplitTunneling $false
+```
+
+The following example shows an EAP XML file for user-certificate based authentication. Replace the *IssuerHash* field with the Thumbprint of the Root Certificate to ensure your client device selects the correct certificate to present to the VPN server for authentication.
+
+```xml
+<EapHostConfig xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <EapMethod>
+ <Type xmlns="http://www.microsoft.com/provisioning/EapCommon">13</Type>
+ <VendorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorId>
+ <VendorType xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorType>
+ <AuthorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</AuthorId>
+ </EapMethod>
+ <Config xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <Eap xmlns="http://www.microsoft.com/provisioning/BaseEapConnectionPropertiesV1">
+ <Type>13</Type>
+ <EapType xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV1">
+ <CredentialsSource>
+ <CertificateStore>
+ <SimpleCertSelection>true</SimpleCertSelection>
+ </CertificateStore>
+ </CredentialsSource>
+ <ServerValidation>
+ <DisableUserPromptForServerValidation>false</DisableUserPromptForServerValidation>
+ <ServerNames></ServerNames>
+ </ServerValidation>
+ <DifferentUsername>false</DifferentUsername>
+ <PerformServerValidation xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">false</PerformServerValidation>
+ <AcceptServerName xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">false</AcceptServerName>
+ <TLSExtensions xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">
+ <FilteringInfo xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV3">
+ <CAHashList Enabled="true">
+ <IssuerHash> REPLACE THIS WITH ROOT CERTIFICATE THUMBPRINT </IssuerHash>
+ </CAHashList>
+ </FilteringInfo>
+ </TLSExtensions>
+ </EapType>
+ </Eap>
+ </Config>
+</EapHostConfig>
+```
+
+### IKEv2 with machine certificate authentication
+
+To use machine certificates to authenticate remote users, use the sample PowerShell script below. To properly import the contents of the VpnSettings and EAP XML files into PowerShell, navigate to the appropriate directory before running the **Get-Content** PowerShell command.
+
+```azurepowershell-interactive
+# specify the name of the VPN Connection to be installed on the client
+$vpnConnectionName = "UserCertVPNConnection"
+
+# get the VPN Server FQDN from the profile downloaded from Azure portal
+$downloadedXML = [xml] (Get-Content VpnSettings.xml)
+$vpnserverFQDN = $downloadedXML.VpnProfile.VpnServer
+
+# create the VPN Connection
+Add-VpnConnection -Name $vpnConnectionName -ServerAddress $vpnserverFQDN -TunnelType Ikev2 -AuthenticationMethod MachineCertificate
+
+# enable forced tunneling
+Set-VpnConnection -Name $vpnConnectionName -SplitTunneling $false
+```
+
+### IKEv2 with RADIUS server authentication with username and password (EAP-MSCHAPv2)
+
+To use username and password-based RADIUS authentication (EAP-MSCHAPv2) to authenticate remote users, use the sample PowerShell script below. To properly import the contents of the VpnSettings and EAP XML files into PowerShell, navigate to the appropriate directory before running the **Get-Content** PowerShell command.
+
+```azurepowershell-interactive
+# specify the name of the VPN Connection to be installed on the client
+$vpnConnectionName = "SampleConnectionName"
+
+# get the VPN Server FQDN from the profile downloaded from Azure portal
+$downloadedXML = [xml] (Get-Content VpnSettings.xml)
+$vpnserverFQDN = $downloadedXML.VpnProfile.VpnServer
+
+# use the appropriate EAP XML file based on the authentication method specified on the Point-to-site VPN gateway
+$EAPXML = [xml] (Get-Content EapXML.xml)
+
+# create the VPN Connection
+Add-VpnConnection -Name $vpnConnectionName -ServerAddress $vpnserverFQDN -TunnelType Ikev2 -AuthenticationMethod Eap -EapConfigXmlStream $EAPXML
+
+# enable forced tunneling
+Set-VpnConnection -Name $vpnConnectionName -SplitTunneling $false
+```
+
+An example EAP XML file is the following.
+
+```xml
+<EapHostConfig xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <EapMethod>
+ <Type xmlns="http://www.microsoft.com/provisioning/EapCommon">26</Type>
+ <VendorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorId>
+ <VendorType xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorType>
+ <AuthorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</AuthorId>
+ </EapMethod>
+ <Config xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <Eap xmlns="http://www.microsoft.com/provisioning/BaseEapConnectionPropertiesV1">
+ <Type>26</Type>
+ <EapType xmlns="http://www.microsoft.com/provisioning/MsChapV2ConnectionPropertiesV1">
+ <UseWinLogonCredentials>false</UseWinLogonCredentials>
+ </EapType>
+ </Eap>
+ </Config>
+</EapHostConfig>
+```
+
+### IKEv2 with RADIUS server authentication with user certificates (EAP-TLS)
+
+To use certificate-based RADIUS authentication (EAP-TLS) to authenticate remote users, use the sample PowerShell script below. To properly import the contents of the VpnSettings and EAP XML files into PowerShell, navigate to the appropriate directory before running the **Get-Content** PowerShell command.
+
+```azurepowershell-interactive
+# specify the name of the VPN Connection to be installed on the client
+$vpnConnectionName = "SampleConnectionName"
+
+# get the VPN Server FQDN from the profile downloaded from Azure portal
+$downloadedXML = [xml] (Get-Content VpnSettings.xml)
+$vpnserverFQDN = $downloadedXML.VpnProfile.VpnServer
+
+# use the appropriate EAP XML file based on the authentication method specified on the Point-to-site VPN gateway
+$EAPXML = [xml] (Get-Content EapXML.xml)
+
+# create the VPN Connection
+Add-VpnConnection -Name $vpnConnectionName -ServerAddress $vpnserverFQDN -TunnelType Ikev2 -AuthenticationMethod Eap -EapConfigXmlStream $EAPXML
+
+# enable forced tunneling
+Set-VpnConnection -Name $vpnConnectionName -SplitTunneling $false
+```
+
+Below is a sample EAP XML file. Change the *TrustedRootCA* field to the thumbprint of your Certificate Authority's certificate and the *IssuerHash* field to the thumbprint of the Root Certificate. A PowerShell sketch for finding these thumbprints follows the sample.
+
+```xml
+<EapHostConfig xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <EapMethod>
+ <Type xmlns="http://www.microsoft.com/provisioning/EapCommon">13</Type>
+ <VendorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorId>
+ <VendorType xmlns="http://www.microsoft.com/provisioning/EapCommon">0</VendorType>
+ <AuthorId xmlns="http://www.microsoft.com/provisioning/EapCommon">0</AuthorId>
+ </EapMethod>
+ <Config xmlns="http://www.microsoft.com/provisioning/EapHostConfig">
+ <Eap xmlns="http://www.microsoft.com/provisioning/BaseEapConnectionPropertiesV1">
+ <Type>13</Type>
+ <EapType xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV1">
+ <CredentialsSource>
+ <CertificateStore>
+ <SimpleCertSelection>false</SimpleCertSelection>
+ </CertificateStore>
+ </CredentialsSource>
+ <ServerValidation>
+ <DisableUserPromptForServerValidation>false</DisableUserPromptForServerValidation>
+ <ServerNames></ServerNames>
+ <TrustedRootCA> CERTIFICATE AUTHORITY THUMBPRINT </TrustedRootCA>
+ </ServerValidation>
+ <DifferentUsername>true</DifferentUsername>
+ <PerformServerValidation xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">true</PerformServerValidation>
+ <AcceptServerName xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">true</AcceptServerName>
+ <TLSExtensions xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV2">
+ <FilteringInfo xmlns="http://www.microsoft.com/provisioning/EapTlsConnectionPropertiesV3">
+ <CAHashList Enabled="true">
+            <IssuerHash> ROOT CERTIFICATE THUMBPRINT </IssuerHash>
+ </CAHashList>
+ </FilteringInfo>
+ </TLSExtensions>
+ </EapType>
+ </Eap>
+ </Config>
+</EapHostConfig>
+```
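+
+The thumbprints referenced above can be read from the local certificate stores with PowerShell. This is a minimal sketch; "Contoso" is a placeholder for the subject name of your own root or issuing certificate.
+
+```azurepowershell-interactive
+# list thumbprints of trusted root certificates whose subject contains the placeholder name
+Get-ChildItem -Path Cert:\LocalMachine\Root |
+    Where-Object { $_.Subject -like "*Contoso*" } |
+    Select-Object Thumbprint, Subject
+```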
+
+## Next steps
+
+For more information about Virtual WAN, see the [FAQ](virtual-wan-faq.md).
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Route-based gateways implement the route-based VPNs. Route-based VPNs use "route
Yes, traffic selectors can be defined through the *trafficSelectorPolicies* attribute on a connection by using the [New-AzIpsecTrafficSelectorPolicy](/powershell/module/az.network/new-azipsectrafficselectorpolicy) PowerShell command. For the specified traffic selector to take effect, ensure the [Use Policy Based Traffic Selectors](vpn-gateway-connect-multiple-policybased-rm-ps.md#enablepolicybased) option is enabled.
+Custom-configured traffic selectors are proposed only when an Azure VPN gateway initiates the connection. A VPN gateway accepts any traffic selectors proposed by a remote gateway (on-premises VPN device). This behavior is consistent across all connection modes (Default, InitiatorOnly, and ResponderOnly).
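+
+As a minimal sketch (the resource group, connection name, and address ranges below are placeholders, and parameter availability should be confirmed against the linked cmdlet reference), a custom traffic selector policy can be created and applied like this:
+
+```azurepowershell-interactive
+# define a custom traffic selector policy (one local and one remote address range)
+$ts = New-AzIpsecTrafficSelectorPolicy -LocalAddressRange "10.10.0.0/16" -RemoteAddressRange "192.168.0.0/16"
+
+# apply the policy to an existing connection and enable policy-based traffic selectors
+$conn = Get-AzVirtualNetworkGatewayConnection -Name "MyConnection" -ResourceGroupName "MyResourceGroup"
+Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $conn -TrafficSelectorPolicy $ts -UsePolicyBasedTrafficSelectors $true
+```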
+
### Can I update my policy-based VPN gateway to route-based?
No. A gateway type cannot be changed from policy-based to route-based, or from route-based to policy-based. To change a gateway type, the gateway must be deleted and recreated. This process takes about 60 minutes. When you create the new gateway, you cannot retain the IP address of the original gateway.
Transit traffic via Azure VPN gateway is possible using the classic deployment m
### Does Azure generate the same IPsec/IKE pre-shared key for all my VPN connections for the same virtual network?
-No, Azure by default generates different pre-shared keys for different VPN connections. However, you can use the Set VPN Gateway Key REST API or PowerShell cmdlet to set the key value you prefer. The key MUST be printable ASCII characters.
+No. By default, Azure generates different pre-shared keys for different VPN connections. However, you can use the Set VPN Gateway Key REST API or PowerShell cmdlet to set the key value you prefer. The key MUST contain only printable ASCII characters, excluding space, hyphen (-), and tilde (~).
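+
+A minimal sketch of setting your own key value with PowerShell (the connection and resource group names are placeholders):
+
+```azurepowershell-interactive
+# set a custom pre-shared key on an existing connection
+Set-AzVirtualNetworkGatewayConnectionSharedKey -Name "MyConnection" -ResourceGroupName "MyResourceGroup" -Value "MyCustomPreSharedKey123"
+```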
### Do I get more bandwidth with more Site-to-Site VPNs than for a single virtual network?