Updates from: 03/30/2021 03:08:08
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/synchronization.md
Previously updated : 07/06/2020 Last updated : 03/26/2021
The following table illustrates how specific attributes for user objects in Azur
|:--- |:--- |
| accountEnabled | userAccountControl (sets or clears the ACCOUNT_DISABLED bit) |
| city | l |
+| company |companyName |
| country | co |
| department | department |
| displayName | displayName |
-| employeedId |employeeId |
+| employeeId |employeeId |
| facsimileTelephoneNumber | facsimileTelephoneNumber |
| givenName | givenName |
| jobTitle | title |
active-directory-domain-services Tutorial Configure Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-domain-services/tutorial-configure-ldaps.md
Previously updated : 03/04/2021 Last updated : 03/23/2021 #Customer intent: As an identity administrator, I want to secure access to an Azure Active Directory Domain Services managed domain using secure lightweight directory access protocol (LDAPS)
If you added a DNS entry to the local hosts file of your computer to test connec
1. Browse to and open the file *C:\Windows\System32\drivers\etc\hosts*
1. Delete the line for the record you added, such as `168.62.205.103 ldaps.aaddscontoso.com`
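If you'd rather script the cleanup, the following is a minimal sketch; it assumes the tutorial's example record name and an elevated PowerShell session:

```powershell
# A sketch only: remove the test record from the local hosts file.
# 'ldaps.aaddscontoso.com' matches the tutorial's example entry.
$hostsPath = "$env:SystemRoot\System32\drivers\etc\hosts"
(Get-Content $hostsPath) |
    Where-Object { $_ -notmatch 'ldaps\.aaddscontoso\.com' } |
    Set-Content $hostsPath
```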
+## Troubleshooting
+
+If you see an error stating that LDP.exe cannot connect, try working through the different aspects of establishing the connection:
+
+1. Configuring the domain controller
+1. Configuring the client
+1. Networking
+1. Establishing the TLS session
+
+For the certificate subject name match, the DC uses the Azure AD DS domain name (not the Azure AD domain name) to search its certificate store for the certificate. Spelling mistakes, for example, prevent the DC from selecting the right certificate.
+
+The client attempts to establish the TLS connection using the name you provided. The traffic needs to get all the way through. The DC sends the public key of the server authentication certificate. The certificate needs to have the right usage, the name signed in the subject name must be compatible for the client to trust that the server is the DNS name you're connecting to (that is, a wildcard will work, with no spelling mistakes), and the client must trust the issuer. You can check for any problems in that chain in the System log in Event Viewer, filtering on events where the source is Schannel. Once those pieces are in place, they form a session key.
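For example, a quick sketch (not part of the original tutorial) to pull recent Schannel events from the System log:

```powershell
# List recent Schannel events to spot TLS handshake failures on the client or the DC.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Schannel' } -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize
```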
+
+For more information, see [TLS Handshake](https://docs.microsoft.com/windows/win32/secauthn/tls-handshake-protocol).
+
## Next steps

In this tutorial, you learned how to:
active-directory Concept Mfa Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-data-residency.md
Previously updated : 01/14/2021 Last updated : 03/16/2021
Personal data is user-level information that's associated with a specific person
* Blocked users
* Bypassed users
* Microsoft Authenticator device token change requests
-* Multifactor authentication activity reports
+* Multifactor Authentication activity reports, which store multifactor authentication activity from the on-premises Multifactor Authentication components: NPS extension, AD FS adapter, and MFA Server.
* Microsoft Authenticator activations

This information is retained for 90 days.
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 03/15/2021 Last updated : 03/16/2021
Service settings can be accessed from the Azure portal by browsing to **Azure Ac
## Trusted IPs
-The _Trusted IPs_ feature of Azure AD Multi-Factor Authentication bypasses multi-factor authentication prompts for users who sign in from a defined IP address range. You can set trusted IP ranges for your on-premises environments to when users are in one of those locations, there's no Azure AD Multi-Factor Authentication prompt.
+The _Trusted IPs_ feature of Azure AD Multi-Factor Authentication bypasses multi-factor authentication prompts for users who sign in from a defined IP address range. You can set trusted IP ranges for your on-premises environments so that when users are in one of those locations, there's no Azure AD Multi-Factor Authentication prompt. The _Trusted IPs_ feature requires the Azure AD Premium P1 edition.
> [!NOTE]
> The trusted IPs can include private IP ranges only when you use MFA Server. For cloud-based Azure AD Multi-Factor Authentication, you can only use public IP address ranges.
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Token expiration and refresh is a standard mechanism in the industry. When a client application like Outlook connects to a service like Exchange Online, the API requests are authorized using OAuth 2.0 access tokens. By default, those access tokens are valid for one hour; when they expire, the client is redirected back to Azure AD to refresh them. That refresh period provides an opportunity to reevaluate policies for user access. For example: we might choose not to refresh the token because of a Conditional Access policy, or because the user has been disabled in the directory.
-Customers have expressed concerns about the lag between when conditions change for the user, like network location or credential theft, and when policies can be enforced related to that change. We have experimented with the "blunt object" approach of reduced token lifetimes but found they can degrade user experiences and reliability without eliminating risks.
+Customers have expressed concerns about the lag between when conditions change for the user, like network location or credential theft, and when policies can be enforced related to that change. We have experimented with the "blunt object" approach of reduced token lifetimes but found they can degrade user experiences and reliability without eliminating risks.
-Timely response to policy violations or security issues really requires a "conversation" between the token issuer, like Azure AD, and the relying party, like Exchange Online. This two-way conversation gives us two important capabilities. The relying party can notice when things have changed, like a client coming from a new location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user due to account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE). The goal is for response to be near real time, but in some cases latency of up to 15 minutes may be observed due to event propagation time.
+Timely response to policy violations or security issues really requires a "conversation" between the token issuer, like Azure AD, and the relying party, like Exchange Online. This two-way conversation gives us two important capabilities. The relying party can notice when things have changed, like a client coming from a new location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user due to account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE). The goal is for response to be near real time, but in some cases latency of up to 15 minutes may be observed due to event propagation time.
The initial implementation of continuous access evaluation focuses on Exchange, Teams, and SharePoint Online.
For an explanation of the office update channels, see [Overview of update channe
Policy changes made by administrators could take up to one day to take effect. Some optimization has been done to reduce the delay to two hours. However, this optimization does not cover all scenarios yet.
-If there is an emergency and you need to have your updated policies to be applied to certain users immediately, you should use this [PowerShell command](/powershell/module/azuread/revoke-azureaduserallrefreshtoken?view=azureadps-2.0) or "Revoke Session" in the user profile page to revoke the users' session, which will make sure that the updated policies will be applied immediately.
+If there is an emergency and you need your updated policies applied to certain users immediately, you should use this [PowerShell command](/powershell/module/azuread/revoke-azureaduserallrefreshtoken) or "Revoke Session" on the user profile page to revoke the users' sessions, which makes sure that the updated policies are applied immediately.
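As a sketch, revoking a single user's refresh tokens with the AzureAD PowerShell module looks like the following (the user principal name is a placeholder):

```powershell
# A sketch only: force reevaluation by revoking all refresh tokens for one user.
# Updated Conditional Access policies then apply at the next sign-in.
Connect-AzureAD
Revoke-AzureADUserAllRefreshToken -ObjectId "user@contoso.com"
```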
### Coauthoring in Office apps
-When multiple users are collaborating on the same document at the same time, the user's access to the document may not be immediately revoked by CAE based on user revocation or policy change events. In this case, the user loses access completely after, closing the document, closing Word, Excel, or PowerPoint, or after a period of 10 hours.
+When multiple users are collaborating on the same document at the same time, the user's access to the document may not be immediately revoked by CAE based on user revocation or policy change events. In this case, the user loses access completely after closing the document, closing Word, Excel, or PowerPoint, or after a period of 10 hours.
-To reduce this time a SharePoint Administrator can optionally reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business, by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions will be reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command "Set-SPOTenant –IPAddressWACTokenLifetime"
+To reduce this time, a SharePoint administrator can optionally reduce the maximum lifetime of coauthoring sessions for documents stored in SharePoint Online and OneDrive for Business by [configuring a network location policy in SharePoint Online](/sharepoint/control-access-based-on-network-location). Once this configuration is changed, the maximum lifetime of coauthoring sessions will be reduced to 15 minutes, and can be adjusted further using the SharePoint Online PowerShell command `Set-SPOTenant -IPAddressWACTokenLifetime`.
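A sketch of that adjustment with the SharePoint Online Management Shell (the admin URL is a placeholder; the value is in minutes):

```powershell
# A sketch only: shorten the coauthoring session lifetime after the network
# location policy has been enabled for the tenant.
Connect-SPOService -Url "https://contoso-admin.sharepoint.com"
Set-SPOTenant -IPAddressWACTokenLifetime 15
```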
### Enable after a user is disabled
active-directory Quickstart Register App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-register-app.md
Follow these steps to create the app registration:
When registration finishes, the Azure portal displays the app registration's **Overview** pane. You see the **Application (client) ID**. Also called the *client ID*, this value uniquely identifies your application in the Microsoft identity platform.

> [!IMPORTANT]
-> New app registrations are hidden to users by default. When you are ready for users to see the app on their [My Apps page](../user-help/my-apps-portal-end-user-access.md) you can enable it. To enable the app, in the Azure portal navigate to **Azure Active Director** > **Enterprise applications** and select the app. Then on the **Properties** page toggle **Visible to users?** to Yes.
+> New app registrations are hidden to users by default. When you are ready for users to see the app on their [My Apps page](../user-help/my-apps-portal-end-user-access.md) you can enable it. To enable the app, in the Azure portal navigate to **Azure Active Directory** > **Enterprise applications** and select the app. Then on the **Properties** page toggle **Visible to users?** to Yes.
Your application's code, or more typically an authentication library used in your application, also uses the client ID. The ID is used as part of validating the security tokens it receives from the identity platform.
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-saml-tokens.md
Previously updated : 09/09/2020 Last updated : 03/29/2021
The Microsoft identity platform emits several types of security tokens in the pr
> |Authentication Method | `amr` |Identifies how the subject of the token was authenticated. | `<AuthnContextClassRef>`<br>`http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod/password`<br>`</AuthnContextClassRef>` |
> |First Name | `given_name` |Provides the first or "given" name of the user, as set on the Azure AD user object. | `<Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname">`<br>`<AttributeValue>Frank<AttributeValue>` |
> |Groups | `groups` |Provides object IDs that represent the subject's group memberships. These values are unique (see Object ID) and can be safely used for managing access, such as enforcing authorization to access a resource. The groups included in the groups claim are configured on a per-application basis, through the "groupMembershipClaims" property of the application manifest. A value of null will exclude all groups, a value of "SecurityGroup" will include only Active Directory Security Group memberships, and a value of "All" will include both Security Groups and Microsoft 365 Distribution Lists. <br><br> **Notes**: <br> If the number of groups the user is in goes over a limit (150 for SAML, 200 for JWT), an overage claim will be added to the claim sources, pointing at the Graph endpoint containing the list of groups for the user. | `<Attribute Name="http://schemas.microsoft.com/ws/2008/06/identity/claims/groups">`<br>`<AttributeValue>07dd8a60-bf6d-4e17-8844-230b77145381</AttributeValue>` |
-> | Groups Overage Indicator | `groups:src1` | For token requests that are not length-limited but still too large for the token, a link to the full groups list for the user will be included. For SAML this is added as a new claim in place of the `groups` claim. | `<Attribute Name=" http://schemas.microsoft.com/claims/groups.link">`<br>`<AttributeValue>https://graph.windows.net/{tenantID}/users/{userID}/getMemberObjects<AttributeValue>` |
+> | Groups Overage Indicator | `groups:src1` | For token requests that are not length-limited but still too large for the token, a link to the full groups list for the user will be included. For SAML this is added as a new claim in place of the `groups` claim. <br><br> **Notes**: <br> The Azure AD Graph API is being replaced by the Microsoft Graph API. To learn more about the equivalent endpoint, see [user: getMemberObjects](https://docs.microsoft.com/graph/api/user-getmemberobjects). | `<Attribute Name=" http://schemas.microsoft.com/claims/groups.link">`<br>`<AttributeValue>https://graph.windows.net/{tenantID}/users/{userID}/getMemberObjects<AttributeValue>` |
> |Identity Provider | `idp` |Records the identity provider that authenticated the subject of the token. This value is identical to the value of the Issuer claim unless the user account is in a different tenant than the issuer. | `<Attribute Name=" http://schemas.microsoft.com/identity/claims/identityprovider">`<br>`<AttributeValue>https://sts.windows.net/cbb1a5ac-f33b-45fa-9bf5-f37db0fed422/<AttributeValue>` |
> |IssuedAt | `iat` |Stores the time at which the token was issued. It is often used to measure token freshness. | `<Assertion ID="_d5ec7a9b-8d8f-4b44-8c94-9812612142be" IssueInstant="2014-01-06T20:20:23.085Z" Version="2.0" xmlns="urn:oasis:names:tc:SAML:2.0:assertion">` |
> |Issuer | `iss` |Identifies the security token service (STS) that constructs and returns the token. In the tokens that Azure AD returns, the issuer is sts.windows.net. The GUID in the Issuer claim value is the tenant ID of the Azure AD directory. The tenant ID is an immutable and reliable identifier of the directory. | `<Issuer>https://sts.windows.net/cbb1a5ac-f33b-45fa-9bf5-f37db0fed422/</Issuer>` |
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 02/23/2021 Last updated : 03/29/2021
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this can cover multiple resources, allowing your app to get consent for multiple web APIs you want to call. |
| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. Can be one of the following:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, and `consent`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> |
+| `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> |
| `login_hint` | optional | Can be used to pre-fill the username/email address field of the sign-in page for the user, if you know their username ahead of time. Often apps will use this parameter during re-authentication, having already extracted the username from a previous sign-in using the `preferred_username` claim. |
| `domain_hint` | optional | If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience - for example, sending them to their federated identity provider. Often apps will use this parameter during re-authentication, by extracting the `tid` from a previous sign-in. If the `tid` claim value is `9188040d-6c67-4c5b-b112-36a304b66dad`, you should use `domain_hint=consumers`. Otherwise, use `domain_hint=organizations`. |
| `code_challenge` | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - both public and confidential clients - and required by the Microsoft identity platform for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md). |
active-directory How To Connect Pta Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-pta-quick-start.md
Ensure that the following prerequisites are in place.
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
- If your firewall or proxy lets you add DNS entries to an allowlist, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+ - Avoid all forms of inline inspection and termination on outbound TLS communications between the Azure pass-through authentication agent and the Azure endpoint.
- If you have an outgoing HTTP proxy, make sure this URL, autologon.microsoftazuread-sso.com, is on the allowed list. You should specify this URL explicitly since wildcards may not be accepted.
- Your Authentication Agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- For certificate validation, unblock the following URLs: **crl3.digicert.com:80**, **crl4.digicert.com:80**, **ocsp.digicert.com:80**, **www\.d-trust.net:80**, **root-c3-ca2-2009.ocsp.d-trust.net:80**, **crl.microsoft.com:80**, **oneocsp.microsoft.com:80**, and **ocsp.msocsp.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
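To spot-check the outbound requirements from the server that will run the Authentication Agent, a connectivity sketch (endpoint list abbreviated) might look like this:

```powershell
# A sketch only: confirm TCP 443 reachability to a few of the required endpoints.
'login.windows.net', 'login.microsoftonline.com', 'autologon.microsoftazuread-sso.com' |
    ForEach-Object { Test-NetConnection -ComputerName $_ -Port 443 } |
    Select-Object ComputerName, TcpTestSucceeded
```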
active-directory How To Connect Sso Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-sso-faq.md
Follow these steps on the on-premises server where you are running Azure AD Conn
After completing the wizard, Seamless SSO will be disabled on your tenant. However, you will see a message on screen that reads as follows:
- "Single sign-on is now disabled, but there are additional manual steps to perform in order to complete clean-up. Learn more"
+ "Single sign-on is now disabled, but there are additional manual steps to perform in order to complete clean-up. [Learn more](tshoot-connect-sso.md#step-3-disable-seamless-sso-for-each-active-directory-forest-where-youve-set-up-the-feature)"
To complete the clean-up process, follow steps 2 and 3 on the on-premises server where you are running Azure AD Connect.
active-directory Powershell Export Apps With Secrets Beyond Required https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md
The "Add-Member" command is responsible for creating the columns in the CSV file
| Command | Notes |
| --- | --- |
-| [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest?view=powershell-7.1) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
+| [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest?view=powershell-7.1&preserve-view=true) | Sends HTTP and HTTPS requests to a web page or web service. It parses the response and returns collections of links, images, and other significant HTML elements. |
## Next steps
active-directory Custom Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/custom-overview.md
This article describes how to understand Azure Active Directory (Azure AD) role-
Both systems contain similarly used role definitions and role assignments. However, Azure AD role permissions can't be used in Azure custom roles and vice versa.

## Understand Azure AD role-based access control
-Azure AD supports 2 types of roles definitions -
+Azure AD supports two types of role definitions:
* [Built-in roles](./permissions-reference.md)
* [Custom roles](./custom-create.md)
Built-in roles are out of box roles that have a fixed set of permissions. These
Once you've created your custom role definition (or are using a built-in role), you can assign it to a user by creating a role assignment. A role assignment grants the user the permissions in a role definition at a specified scope. This two-step process allows you to create a single role definition and assign it many times at different scopes. A scope defines the set of Azure AD resources the role member has access to. The most common scope is organization-wide (org-wide) scope. A custom role can be assigned at org-wide scope, meaning the role member has the role permissions over all resources in the organization. A custom role can also be assigned at an object scope. An example of an object scope would be a single application. The same role can be assigned to one user over all applications in the organization and then to another user with a scope of only the Contoso Expense Reports app.
-Azure AD built-in and custom roles operate on concepts similar to [Azure role-based access control (Azure RBAC)](../develop/access-tokens.md#payload-claims). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC controls access to Azure resources such as virtual machines or storage using Azure Resource Management, and Azure AD custom roles control access to Azure AD resources using Graph API. Both systems leverage the concept of role definitions and role assignments. Azure AD RBAC permissions cannot be included in Azure roles and vice versa.
-
### How Azure AD determines if a user has access to a resource

The following are the high-level steps that Azure AD uses to determine if you have access to a management resource. Use this information to troubleshoot access issues.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
Previously updated : 03/13/2021 Last updated : 03/29/2021
The [Authentication policy administrator](#authentication-policy-administrator)
>* Non-administrators like executives, legal counsel, and human resources employees who may have access to sensitive or private information.

> [!IMPORTANT]
-> This role is not currently capable of managing per-user MFA in the legacy MFA management portal. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) commandlet Azure AD Powershell module.
+This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens. The same functions can be accomplished using the [Set-MsolUser](/powershell/module/msonline/set-msoluser) cmdlet in the Azure AD PowerShell (MSOnline) module.
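For example, a sketch of enabling per-user MFA with the MSOnline module (the user principal name is a placeholder, and this assumes the documented `StrongAuthenticationRequirement` pattern):

```powershell
# A sketch only: enable per-user MFA through the MSOnline module.
Connect-MsolService
$mfa = New-Object -TypeName Microsoft.Online.Administration.StrongAuthenticationRequirement
$mfa.RelyingParty = "*"
$mfa.State = "Enabled"
Set-MsolUser -UserPrincipalName "user@contoso.com" -StrongAuthenticationRequirements @($mfa)
```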
> [!div class="mx-tableFixed"]
> | Actions | Description |
The [Authentication administrator](#authentication-administrator) and [Privilege
| Authentication policy administrator | No | No | Yes | Yes | Yes |

> [!IMPORTANT]
-> This role is not currently capable of managing MFA settings in the legacy MFA management portal.
+> This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
> [!div class="mx-tableFixed"]
> | Actions | Description |
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/planned-maintenance.md
az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCl
[az-extension-update]: /cli/azure/extension#az-extension-update
[az-feature-list]: /cli/azure/feature#az-feature-list
[az-feature-register]: /cli/azure/feature#az-feature-register
-[az-aks-install-cli]: /cli/azure/aks?view=azure-cli-latest#az-aks-install-cli&preserve-view=true
-[az-provider-register]: /cli/azure/provider?view=azure-cli-latest#az-provider-register
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-provider-register]: /cli/azure/provider#az-provider-register
[aks-upgrade]: upgrade-cluster.md
analysis-services Analysis Services Backup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-backup.md
description: This article describes how to backup and restore model metadata and
Previously updated : 07/13/2020 Last updated : 03/29/2021
analysis-services Analysis Services Bcdr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-bcdr.md
description: This article describes how Azure Analysis Services provides high av
Previously updated : 03/30/2020 Last updated : 03/29/2021
analysis-services Analysis Services Capacity Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-capacity-limits.md
description: This article describes resource and object limits for an Azure Anal
Previously updated : 05/19/2020 Last updated : 03/29/2021
analysis-services Analysis Services Datasource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-datasource.md
description: Describes data sources and connectors supported for tabular 1200 an
Previously updated : 02/08/2021 Last updated : 03/29/2021
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/analysis-services/analysis-services-overview.md
description: Learn about Azure Analysis Services, a fully managed platform as a
Previously updated : 01/20/2021 Last updated : 03/29/2021 #Customer intent: As a BI developer, I want to determine if Azure Analysis Services is the best data modeling platform for our organization.
Azure Analysis Services is supported in regions throughout the world. Supported
|Canada Central | B1, B2, S0, S1, S2, S4, D1 | 1 |
|Canada Central | S8v2, S9v2 | 1 |
|East US | B1, B2, S0, S1, S2, S4, D1 | 1 |
+|East US | S8v2, S9v2 | 1 |
|East US 2 | B1, B2, S0, S1, S2, S4, D1 | 7 |
|East US 2 | S8v2, S9v2 | 1 |
|North Central US | B1, B2, S0, S1, S2, S4, D1 | 1 |
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-ip-restrictions.md
In addition to being able to control access to your app, you can restrict access
### Restrict access to a specific Azure Front Door instance

Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you will need to further filter the incoming requests based on the unique http header that Azure Front Door sends. PowerShell example:
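A sketch only; the resource group, app name, and Front Door ID are placeholders, and the rule filters on the `X-Azure-FDID` header that Front Door sends:

```powershell
# Allow only Azure Front Door IP ranges, then narrow to one Front Door instance
# by matching its X-Azure-FDID header value.
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "ResourceGroup" -WebAppName "AppName" `
    -Name "Front Door example rule" -Priority 100 -Action Allow `
    -ServiceTag AzureFrontDoor.Backend `
    -HttpHeader @{ 'x-azure-fdid' = '00000000-0000-0000-0000-000000000000' }
```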
app-service App Service Migration Assistant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/app-service-migration-assistant.md
+
+ Title: Migrate to Azure App Service
+description: Migrate to Azure App Service using App Service Migration Assistant.
+++ Last updated : 03/29/2021++++
+# Migrate to Azure App Service
+
+Using the [App Service Migration Assistant](https://azure.microsoft.com/services/app-service/migration-assistant/), you can migrate your on-premises app onto Azure App Service. App Service Migration Assistant is designed to simplify your journey to the cloud through a free, simple, and fast solution to migrate applications from on-premises to the cloud.
+
+With Azure App Service Migration Assistant, you can quickly:
+
+- Scan your app URL to assess whether it's a good candidate for migration.
+- Download the Migration Assistant to begin your migration.
+- Use the tool to run readiness checks and a general assessment of your app's configuration settings.
+- Migrate your app or site to Azure App Service via the tool.
+
+[Watch how to migrate web apps to Azure App service](https://www.youtube.com/watch?v=9LBUmkUhmXU).
+
+Next step: [Migrate an on-premises web application to Azure App Service](https://docs.microsoft.com/learn/modules/migrate-app-service-migration-assistant/)
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/deploy-local-git.md
You may see the following common error messages when you use Git to publish to a
|`Unable to access '[siteURL]': Failed to connect to [scmAddress]`|The app isn't up and running.|Start the app in the Azure portal. Git deployment isn't available when the web app is stopped.|
|`Couldn't resolve host 'hostname'`|The address information for the 'azure' remote is incorrect.|Use the `git remote -v` command to list all remotes, along with the associated URL. Verify that the URL for the 'azure' remote is correct. If needed, remove and recreate this remote using the correct URL.|
|`No refs in common and none specified; doing nothing. Perhaps you should specify a branch such as 'main'.`|You didn't specify a branch during `git push`, or you haven't set the `push.default` value in `.gitconfig`.|Run `git push` again, specifying the main branch: `git push azure main`.|
+|`Error - Changes committed to remote repository but deployment to website failed.`|You pushed a local branch that doesn't match the app deployment branch on 'azure'.|Verify that the current branch is `master`. To change the default branch, use the `DEPLOYMENT_BRANCH` application setting.|
|`src refspec [branchname] does not match any.`|You tried to push to a branch other than main on the 'azure' remote.|Run `git push` again, specifying the main branch: `git push azure main`.|
|`RPC failed; result=22, HTTP code = 5xx.`|This error can happen if you try to push a large git repository over HTTPS.|Change the git configuration on the local machine to make the `postBuffer` bigger. For example: `git config --global http.postBuffer 524288000`.|
|`Error - Changes committed to remote repository but your web app not updated.`|You deployed a Node.js app with a _package.json_ file that specifies additional required modules.|Review the `npm ERR!` error messages before this error for more context on the failure. The following are the known causes of this error, and the corresponding `npm ERR!` messages:<br /><br />**Malformed package.json file**: `npm ERR! Couldn't read dependencies.`<br /><br />**Native module doesn't have a binary distribution for Windows**:<br />`npm ERR! \cmd "/c" "node-gyp rebuild"\ failed with 1` <br />or <br />`npm ERR! [modulename@version] preinstall: \make || gmake\ `|
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp).
In `polls/models.py`, locate the line that begins with `choice_text` and change the `max_length` parameter to 100:

```python
-# Find this lie of code and set max_length to 100 instead of 200
+# Find this line of code and set max_length to 100 instead of 200
choice_text = models.CharField(max_length=100)
```
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/delete-account.md
+
+ Title: Delete your Azure Automation account
+description: This article tells how to delete your Automation account across the different configuration scenarios.
+++ Last updated : 03/18/2021+++
+# How to delete your Azure Automation account
+
+After you enable an Azure Automation account to help automate IT or business processes, or enable its other features to support operations management of your Azure and non-Azure machines, such as Update Management, you may decide to stop using the Automation account. If you have enabled features that depend on integration with an Azure Monitor Log Analytics workspace, more steps are required to complete this action.
+
+Removing your Automation account can be done using one of the following methods based on the supported deployment models:
+
+* Delete the resource group containing the Automation account.
+* Delete the resource group containing the Automation account and linked Azure Monitor Log Analytics workspace, if:
+
+ * The account and workspace are dedicated to supporting Update Management, Change Tracking and Inventory, and/or Start/Stop VMs during off-hours.
+ * The account is dedicated to process automation and integrated with a workspace to send runbook job status and job streams.
+
+* Unlink the Log Analytics workspace from the Automation account and delete the Automation account.
+* Delete the feature from your linked workspace, unlink the account from the workspace, and then delete the Automation account.
+
+This article tells you how to completely remove your Automation account through the Azure portal, PowerShell, the Azure CLI, or the REST API.
+
+## Delete the dedicated resource group
+
+To delete your Automation account, and the Log Analytics workspace if it's linked to the account and was created in the same dedicated resource group, follow the steps outlined in the [Azure Resource Manager resource group and resource deletion](../azure-resource-manager/management/delete-resource-group.md) article.
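A sketch of that approach with Azure PowerShell (the resource group name is a placeholder; this deletes every resource in the group):

```powershell
# A sketch only: deleting the dedicated resource group removes the Automation
# account, any linked Log Analytics workspace, and all other resources in it.
Remove-AzResourceGroup -Name "MyAutomationResourceGroup" -Force
```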
+
+## Delete a standalone Automation account
+
+If your Automation account is not linked to a Log Analytics workspace, perform the following steps to delete it.
+
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to Azure at [https://portal.azure.com](https://portal.azure.com).
+
+2. In the Azure portal, navigate to **Automation Accounts**.
+
+3. Open your Automation account and select **Delete** from the menu.
+
+While the information is verified and the account is deleted, you can track the progress under **Notifications**, chosen from the menu.
+
+# [PowerShell](#tab/azure-powershell)
+
+This command removes the Automation account without prompting for validation.
+
+```powershell
+Remove-AzAutomationAccount -Name "automationAccountName" -Force -ResourceGroupName "resourceGroupName"
+```
+++
+## Delete a standalone Automation account linked to workspace
+
+If your Automation account is linked to a Log Analytics workspace to collect job streams and job logs, perform the following steps to delete the account.
+
+There are two options for unlinking the Log Analytics workspace from your Automation account. You can perform this process from the Automation account or from the linked workspace.
+
+To unlink from your Automation account, perform the following steps.
+
+1. In the Azure portal, navigate to **Automation Accounts**.
+
+2. Open your Automation account and select **Linked workspace** under **Related Resources** on the left.
+
+3. On the **Unlink workspace** page, select **Unlink workspace**, and respond to prompts.
+
+ ![Unlink workspace page](media/automation-solution-vm-management-remove/automation-unlink-workspace-blade.png)
+
+ While it attempts to unlink the Log Analytics workspace, you can track the progress under **Notifications** from the menu.
+
+To unlink from the workspace, perform the following steps.
+
+1. In the Azure portal, navigate to **Log Analytics workspaces**.
+
+2. From the workspace, select **Automation Account** under **Related Resources**.
+
+3. On the Automation Account page, select **Unlink account**, and respond to prompts.
+
+While it attempts to unlink the Automation account, you can track the progress under **Notifications** from the menu.
+
+After the Automation account is successfully unlinked from the workspace, perform the steps in the [standalone Automation account](#delete-a-standalone-automation-account) section to delete the account.
+
+## Delete a shared capability Automation account
+
+To delete your Automation account linked to a Log Analytics workspace in support of Update Management, Change Tracking and Inventory, and/or Start/Stop VMs during off-hours, perform the following steps.
+
+### Step 1. Delete the solution from the linked workspace
+
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to Azure at [https://portal.azure.com](https://portal.azure.com).
+
+2. Navigate to your Automation account, and select **Linked workspace** under **Related resources**.
+
+3. Select **Go to workspace**.
+
+4. Click **Solutions** under **General**.
+
+5. On the Solutions page, select one of the following based on the feature(s) deployed in the account:
+
+ * For Start/Stop VMs during off-hours, select **Start-Stop-VM[workspace name]**.
+ * For Update Management, select **Updates(workspace name)**.
+ * For Change Tracking and Inventory, select **ChangeTracking(workspace name)**.
+
+6. On the **Solution** page, select **Delete** from the menu. If more than one of the above listed features are deployed to the Automation account and linked workspace, you need to select and delete each one before proceeding.
+
+7. While the information is verified and the feature is deleted, you can track the progress under **Notifications**, chosen from the menu. You're returned to the Solutions page after the removal process.
+
+# [PowerShell](#tab/azure-powershell)
+
+To remove an installed solution using Azure PowerShell, use the [Remove-AzMonitorLogAnalyticsSolution](/powershell/module/az.monitoringsolutions/remove-azmonitorloganalyticssolution) cmdlet.
+
+```powershell
+Remove-AzMonitorLogAnalyticsSolution -ResourceGroupName "resourceGroupName" -Name "solutionName"
+```
+++
+### Step 2. Unlink workspace from Automation account
+
+There are two options for unlinking the Log Analytics workspace from your Automation account. You can perform this process from the Automation account or from the linked workspace.
+
+To unlink from your Automation account, perform the following steps.
+
+1. In the Azure portal, navigate to **Automation Accounts**.
+
+2. Open your Automation account and select **Linked workspace** under **Related Resources** on the left.
+
+3. On the **Unlink workspace** page, select **Unlink workspace**, and respond to prompts.
+
+ ![Unlink workspace page](media/automation-solution-vm-management-remove/automation-unlink-workspace-blade.png)
+
+ While it attempts to unlink the Log Analytics workspace, you can track the progress under **Notifications** from the menu.
+
+To unlink from the workspace, perform the following steps.
+
+1. In the Azure portal, navigate to **Log Analytics workspaces**.
+
+2. From the workspace, select **Automation Account** under **Related Resources**.
+
+3. On the Automation Account page, select **Unlink account**, and respond to prompts.
+
+While it attempts to unlink the Automation account, you can track the progress under **Notifications** from the menu.
+
+### Step 3. Delete Automation account
+
+After the Automation account is successfully unlinked from the workspace, perform the steps in the [standalone Automation account](#delete-a-standalone-automation-account) section to delete the account.
+
+## Next steps
+
+To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md).
automation Variables https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/shared-resources/variables.md
Title: Manage variables in Azure Automation
description: This article tells how to work with variables in runbooks and DSC configurations. Previously updated : 12/01/2020 Last updated : 03/28/2021

# Manage variables in Azure Automation
When you create a variable with the Azure portal, you must specify a data type f
* Boolean
* Null
-The variable isn't restricted to the specified data type. You must set the variable using Windows PowerShell if you want to specify a value of a different type. If you indicate `Not defined`, the value of the variable is set to Null. You must set the value with the [Set-AzAutomationVariable](/powershell/module/az.automation/set-azautomationvariable) cmdlet or the internal `Set-AutomationVariable` cmdlet.
+The variable isn't restricted to the specified data type. You must set the variable using Windows PowerShell if you want to specify a value of a different type. If you indicate `Not defined`, the value of the variable is set to Null. You must set the value with the [Set-AzAutomationVariable](/powershell/module/az.automation/set-azautomationvariable) cmdlet or the internal `Set-AutomationVariable` cmdlet. You use `Set-AutomationVariable` in runbooks that are intended to run in the Azure sandbox environment, or on a Windows Hybrid Runbook Worker.
You can't use the Azure portal to create or change the value for a complex variable type. However, you can provide a value of any type using Windows PowerShell. Complex types are retrieved as a [Newtonsoft.Json.Linq.JProperty](https://www.newtonsoft.com/json/help/html/N_Newtonsoft_Json_Linq.htm) for a Complex object type instead of a PSObject type [PSCustomObject](/dotnet/api/system.management.automation.pscustomobject).
The cmdlets in the following table create and manage Automation variables with P
| Cmdlet | Description |
|:--- |:--- |
-|[Get-AzAutomationVariable](/powershell/module/az.automation/get-azautomationvariable) | Retrieves the value of an existing variable. If the value is a simple type, that same type is retrieved. If it's a complex type, a `PSCustomObject` type is retrieved. <br>**Note:** You can't use this cmdlet to retrieve the value of an encrypted variable. The only way to do this is by using the internal `Get-AutomationVariable` cmdlet in a runbook or DSC configuration. See [Internal cmdlets to access variables](#internal-cmdlets-to-access-variables). |
+|[Get-AzAutomationVariable](/powershell/module/az.automation/get-azautomationvariable) | Retrieves the value of an existing variable. If the value is a simple type, that same type is retrieved. If it's a complex type, a `PSCustomObject` type is retrieved. <sup>1</sup>|
|[New-AzAutomationVariable](/powershell/module/az.automation/new-azautomationvariable) | Creates a new variable and sets its value.|
|[Remove-AzAutomationVariable](/powershell/module/az.automation/remove-azautomationvariable)| Removes an existing variable.|
|[Set-AzAutomationVariable](/powershell/module/az.automation/set-azautomationvariable)| Sets the value for an existing variable. |
+<sup>1</sup>
+You can't use this cmdlet to retrieve the value of an encrypted variable. The only way to do this is by using the internal `Get-AutomationVariable` cmdlet in a runbook or DSC configuration. For example, to see the value of an encrypted variable, you might create a runbook to get the variable and then write it to the output stream:
+
+```powershell
+$encryptvar = Get-AutomationVariable -Name TestVariable
+Write-output "The encrypted value of the variable is: $encryptvar"
+```
+
## Internal cmdlets to access variables

The internal cmdlets in the following table are used to access variables in your runbooks and DSC configurations. These cmdlets come with the global module `Orchestrator.AssetManagement.Cmdlets`. For more information, see [Internal cmdlets](modules.md#internal-cmdlets).
The internal cmdlets in the following table are used to access variables in your
|`Set-AutomationVariable`|Sets the value for an existing variable.|

> [!NOTE]
-> Avoid using variables in the `Name` parameter of `Get-AutomationVariable` in a runbook or DSC configuration. Use of the variables can complicate the discovery of dependencies between runbooks and Automation variables at design time.
-
-`Get-AutomationVariable` does not work in PowerShell, but only in a runbook or DSC configuration. For example, to see the value of an encrypted variable, you might create a runbook to get the variable and then write it to the output stream:
-
-```powershell
-$mytestencryptvar = Get-AutomationVariable -Name TestVariable
-Write-output "The encrypted value of the variable is: $mytestencryptvar"
-```
+> Avoid using variables in the `Name` parameter of the `Get-AutomationVariable` cmdlet in a runbook or DSC configuration. Use of a variable can complicate the discovery of dependencies between runbooks and Automation variables at design time.
## Python functions to access variables
Your runbook or DSC configuration uses the `New-AzAutomationVariable` cmdlet to
The following example shows how to create a string variable and then return its value.

```powershell
-New-AzAutomationVariable -ResourceGroupName "ResourceGroup01"
-–AutomationAccountName "MyAutomationAccount" –Name 'MyStringVariable' `
-–Encrypted $false –Value 'My String'
-$string = (Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
-–AutomationAccountName "MyAutomationAccount" –Name 'MyStringVariable').Value
+$rgName = "ResourceGroup01"
+$accountName = "MyAutomationAccount"
+$variableValue = "My String"
+
+New-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyStringVariable" -Encrypted $false -Value $variableValue
+$string = (Get-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyStringVariable").Value
```

The following example shows how to create a variable with a complex type and then retrieve its properties. In this case, a virtual machine object from [Get-AzVM](/powershell/module/Az.Compute/Get-AzVM) is used specifying a subset of its properties.

```powershell
-$vm = Get-AzVM -ResourceGroupName "ResourceGroup01" –Name "VM01" | Select Name, Location, Extensions
-New-AzAutomationVariable -ResourceGroupName "ResourceGroup01" –AutomationAccountName "MyAutomationAccount" –Name "MyComplexVariable" –Encrypted $false –Value $vm
+$rgName = "ResourceGroup01"
+$accountName = "MyAutomationAccount"
+
+$vm = Get-AzVM -ResourceGroupName $rgName -Name "VM01" | Select Name, Location, Tags
+New-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyComplexVariable" -Encrypted $false -Value $vm
-$vmValue = Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" `
-–AutomationAccountName "MyAutomationAccount" –Name "MyComplexVariable"
+$vmValue = Get-AzAutomationVariable -ResourceGroupName $rgName -AutomationAccountName $accountName -Name "MyComplexVariable"
-$vmName = $vmValue.Name
-$vmExtensions = $vmValue.Extensions
+$vmName = $vmValue.Value.Name
+$vmTags = $vmValue.Value.Tags
```

## Textual runbook examples

# [PowerShell](#tab/azure-powershell)
-The following example shows how to set and retrieve a variable in a textual runbook. This example assumes the creation of integer variables named `NumberOfIterations` and `NumberOfRunnings` and a string variable named `SampleMessage`.
+The following example shows how to set and retrieve a variable in a textual runbook. This example assumes the creation of integer variables named **numberOfIterations** and **numberOfRunnings** and a string variable named **sampleMessage**.
```powershell
-$NumberOfIterations = Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" –AutomationAccountName "MyAutomationAccount" -Name 'NumberOfIterations'
-$NumberOfRunnings = Get-AzAutomationVariable -ResourceGroupName "ResourceGroup01" –AutomationAccountName "MyAutomationAccount" -Name 'NumberOfRunnings'
-$SampleMessage = Get-AutomationVariable -Name 'SampleMessage'
+$rgName = "ResourceGroup01"
+$accountName = "MyAutomationAccount"
-Write-Output "Runbook has been run $NumberOfRunnings times."
+$numberOfIterations = Get-AutomationVariable -Name "numberOfIterations"
+$numberOfRunnings = Get-AutomationVariable -Name "numberOfRunnings"
+$sampleMessage = Get-AutomationVariable -Name "sampleMessage"
-for ($i = 1; $i -le $NumberOfIterations; $i++) {
- Write-Output "$i`: $SampleMessage"
+Write-Output "Runbook has been run $numberOfRunnings times."
+
+for ($i = 1; $i -le $numberOfIterations; $i++) {
+ Write-Output "$i`: $sampleMessage"
}
-Set-AzAutomationVariable -ResourceGroupName "ResourceGroup01" –AutomationAccountName "MyAutomationAccount" –Name NumberOfRunnings –Value ($NumberOfRunnings += 1)
+Set-AutomationVariable -Name numberOfRunnings -Value ($numberOfRunnings += 1)
```

# [Python 2](#tab/python2)
except AutomationAssetNotFound:
## Graphical runbook examples
-In a graphical runbook, you can add activities for the internal cmdlets `Get-AutomationVariable` or `Set-AutomationVariable`. Just right-click each variable in the Library pane of the graphical editor and select the activity that you want.
+In a graphical runbook, you can add activities for the internal cmdlets **Get-AutomationVariable** or **Set-AutomationVariable**. Just right-click each variable in the Library pane of the graphical editor and select the activity that you want.
![Add variable to canvas](../media/variables/runbook-variable-add-canvas.png)
-The following image shows example activities to update a variable with a simple value in a graphical runbook. In this example, the activity for `Get-AzVM` retrieves a single Azure virtual machine and saves the computer name to an existing Automation string variable. It doesn't matter whether the [link is a pipeline or sequence](../automation-graphical-authoring-intro.md#use-links-for-workflow) since the code only expects a single object in the output.
+The following image shows example activities to update a variable with a simple value in a graphical runbook. In this example, the activity for `Get-AzVM` retrieves a single Azure virtual machine and saves the computer name to an existing Automation string variable. It doesn't matter whether the [link is a pipeline or sequence](../automation-graphical-authoring-intro.md#use-links-for-workflow) since the code only expects a single object in the output.
![Set simple variable](../media/variables/runbook-set-simple-variable.png)

## Next steps
-* To learn more about the cmdlets used to access variables, see [Manage modules in Azure Automation](modules.md).
-* For general information about runbooks, see [Runbook execution in Azure Automation](../automation-runbook-execution.md).
-* For details of DSC configurations, see [Azure Automation State Configuration overview](../automation-dsc-overview.md).
+- To learn more about the cmdlets used to access variables, see [Manage modules in Azure Automation](modules.md).
+
+- For general information about runbooks, see [Runbook execution in Azure Automation](../automation-runbook-execution.md).
+
+- For details of DSC configurations, see [Azure Automation State Configuration overview](../automation-dsc-overview.md).
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
Title: Create an Azure App Configuration store by using Azure Resource Manager template (ARM template) description: Learn how to create an Azure App Configuration store by using Azure Resource Manager template (ARM template).--++ Last updated 10/16/2020-+
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
By convention, the `FeatureManagement` section of this JSON document is used for
## Use dependency injection to access IFeatureManager
-For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager?view=azure-dotnet-preview). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
+For some operations, such as manually checking feature flag values, you need to get an instance of [IFeatureManager](/dotnet/api/microsoft.featuremanagement.ifeaturemanager?preserve-view=true&view=azure-dotnet-preview). In ASP.NET Core MVC, you can access the feature manager `IFeatureManager` through dependency injection. In the following example, an argument of type `IFeatureManager` is added to the signature of the constructor for a controller. The runtime automatically resolves the reference and provides an instance of the interface when calling the constructor. If you're using an application template in which the controller already has one or more dependency injection arguments in the constructor, such as `ILogger`, you can just add `IFeatureManager` as an additional argument:
### [.NET 5.x](#tab/core5x)
public IActionResult Index()
} ```
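To make the pattern concrete, here is a minimal, hedged sketch of constructor injection of `IFeatureManager`. The controller and the `Beta` flag name are purely illustrative and are not taken from the article's own sample:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

public class HomeController : Controller
{
    private readonly IFeatureManager _featureManager;

    // The runtime resolves IFeatureManager and supplies it when it constructs the controller.
    public HomeController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<IActionResult> Index()
    {
        // Manually check a feature flag; "Beta" is an illustrative flag name.
        if (await _featureManager.IsEnabledAsync("Beta"))
        {
            return View("IndexBeta");
        }

        return View();
    }
}
```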
-When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered [IDisabledFeaturesHandler](/dotnet/api/microsoft.featuremanagement.mvc.idisabledfeatureshandler?view=azure-dotnet-preview) interface is called. The default `IDisabledFeaturesHandler` interface returns a 404 status code to the client with no response body.
+When an MVC controller or action is blocked because the controlling feature flag is *off*, a registered implementation of the [IDisabledFeaturesHandler](/dotnet/api/microsoft.featuremanagement.mvc.idisabledfeatureshandler?preserve-view=true&view=azure-dotnet-preview) interface is invoked. The default `IDisabledFeaturesHandler` implementation returns a 404 status code to the client with no response body. You can register a custom handler to change that behavior, as sketched below.
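As a hedged sketch (assuming the `Microsoft.FeatureManagement.AspNetCore` package; the 403 response and the handler name are illustrative choices, not the library default), a custom handler might look like this:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.FeatureManagement.Mvc;

// Illustrative handler: short-circuits the request with a 403 and a message
// instead of the library's default empty 404 response.
public class CustomDisabledFeaturesHandler : IDisabledFeaturesHandler
{
    public Task HandleDisabledFeatures(IEnumerable<string> features, ActionExecutingContext context)
    {
        context.Result = new ObjectResult($"Feature(s) disabled: {string.Join(", ", features)}")
        {
            StatusCode = 403
        };
        return Task.CompletedTask;
    }
}

// Registration sketch, typically in Startup.ConfigureServices:
// services.AddFeatureManagement()
//         .UseDisabledFeaturesHandler(new CustomDisabledFeaturesHandler());
```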
## MVC views
azure-arc Tutorial Use Gitops Connected Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md
In this tutorial, you will apply configurations using GitOps on an Azure Arc ena
> [!div class="checklist"] > * Create a configuration on an Azure Arc enabled Kubernetes cluster using an example Git repository. > * Validate that the configuration was successfully created.
-> * Apply configuration form a private Git repository.
+> * Apply configuration from a private Git repository.
> * Validate the Kubernetes configuration. ## Prerequisites
az k8s-configuration delete --name cluster-config --cluster-name AzureArcTest1 -
Advance to the next tutorial to learn how to implement CI/CD with GitOps. > [!div class="nextstepaction"]
-> [Implement CI/CD with GitOps](./tutorial-gitops-ci-cd.md)
+> [Implement CI/CD with GitOps](./tutorial-gitops-ci-cd.md)
azure-functions Durable Functions External Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-external-events.md
Orchestrator functions have the ability to wait and listen for external events.
## Wait for events
-The [WaitForExternalEvent](/dotnet/api/microsoft.azure.webjobs.durableorchestrationcontextbase.waitforexternalevent?view=azure-dotnet-legacy) (.NET), `waitForExternalEvent` (JavaScript), and `wait_for_external_event` (Python) methods of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger) allow an orchestrator function to asynchronously wait and listen for an external event. The listening orchestrator function declares the *name* of the event and the *shape of the data* it expects to receive.
+The [WaitForExternalEvent](/dotnet/api/microsoft.azure.webjobs.durableorchestrationcontextbase.waitforexternalevent?view=azure-dotnet-legacy&preserve-view=true) (.NET), `waitForExternalEvent` (JavaScript), and `wait_for_external_event` (Python) methods of the [orchestration trigger binding](durable-functions-bindings.md#orchestration-trigger) allow an orchestrator function to asynchronously wait and listen for an external event. The listening orchestrator function declares the *name* of the event and the *shape of the data* it expects to receive.
# [C#](#tab/csharp)
main = df.Orchestrator.create(orchestrator_function)
## Send events
-You can use the [RaiseEventAsync](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclientbase.raiseeventasync?view=azure-dotnet-legacy) (.NET) or `raiseEventAsync` (JavaScript) methods to send an external event to an orchestration. These methods are exposed by the [orchestration client](durable-functions-bindings.md#orchestration-client) binding. You can also use the built-in [raise event HTTP API](durable-functions-http-api.md#raise-event) to send an external event to an orchestration.
+You can use the [RaiseEventAsync](/dotnet/api/microsoft.azure.webjobs.durableorchestrationclientbase.raiseeventasync?view=azure-dotnet-legacy&preserve-view=true) (.NET) or `raiseEventAsync` (JavaScript) methods to send an external event to an orchestration. These methods are exposed by the [orchestration client](durable-functions-bindings.md#orchestration-client) binding. You can also use the built-in [raise event HTTP API](durable-functions-http-api.md#raise-event) to send an external event to an orchestration.
A raised event includes an *instance ID*, an *eventName*, and *eventData* as parameters. Orchestrator functions handle these events using the `WaitForExternalEvent` (.NET) or `waitForExternalEvent` (JavaScript) APIs. The *eventName* must match on both the sending and receiving ends in order for the event to be processed. The event data must also be JSON-serializable.
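To tie the two halves together, here is a minimal, hedged sketch that assumes Durable Functions 2.x for .NET; the `Approval` event name, the route, and the function names are illustrative only:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class ApprovalSample
{
    [FunctionName("ApprovalOrchestrator")]
    public static async Task<string> RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Waits (in a durable, replay-safe way) until an "Approval" event arrives.
        bool approved = await context.WaitForExternalEvent<bool>("Approval");
        return approved ? "Approved" : "Rejected";
    }

    [FunctionName("RaiseApproval")]
    public static async Task<IActionResult> RaiseApproval(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "approve/{instanceId}")] HttpRequest req,
        string instanceId,
        [DurableClient] IDurableOrchestrationClient client)
    {
        // Sends the "Approval" event, with a bool payload, to the waiting orchestration.
        await client.RaiseEventAsync(instanceId, "Approval", true);
        return new AcceptedResult();
    }
}
```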
azure-functions Functions Bindings Http Webhook Output https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-bindings-http-webhook-output.md
This section describes the global configuration settings available for this bind
|||| | customHeaders|none|Allows you to set custom headers in the HTTP response. The previous example adds the `X-Content-Type-Options` header to the response to avoid content type sniffing. | |dynamicThrottlesEnabled|true<sup>\*</sup>|When enabled, this setting causes the request processing pipeline to periodically check system performance counters like `connections/threads/processes/memory/cpu/etc` and if any of those counters are over a built-in high threshold (80%), requests will be rejected with a `429 "Too Busy"` response until the counter(s) return to normal levels.<br/><sup>\*</sup>The default in a Consumption plan is `true`. The default in a Dedicated plan is `false`.|
-|hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?view=aspnetcore-3.0&tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions?view=aspnetcore-3.0). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage?view=aspnetcore-3.0#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>|
+|hsts|not enabled|When `isEnabled` is set to `true`, the [HTTP Strict Transport Security (HSTS) behavior of .NET Core](/aspnet/core/security/enforcing-ssl?tabs=visual-studio#hsts) is enforced, as defined in the [`HstsOptions` class](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions). The above example also sets the [`maxAge`](/dotnet/api/microsoft.aspnetcore.httpspolicy.hstsoptions.maxage#Microsoft_AspNetCore_HttpsPolicy_HstsOptions_MaxAge) property to 10 days. Supported properties of `hsts` are: <table><tr><th>Property</th><th>Description</th></tr><tr><td>excludedHosts</td><td>A string array of host names for which the HSTS header isn't added.</td></tr><tr><td>includeSubDomains</td><td>Boolean value that indicates whether the includeSubDomain parameter of the Strict-Transport-Security header is enabled.</td></tr><tr><td>maxAge</td><td>String that defines the max-age parameter of the Strict-Transport-Security header.</td></tr><tr><td>preload</td><td>Boolean that indicates whether the preload parameter of the Strict-Transport-Security header is enabled.</td></tr></table>|
|maxConcurrentRequests|100<sup>\*</sup>|The maximum number of HTTP functions that are executed in parallel. This value allows you to control concurrency, which can help manage resource utilization. For example, you might have an HTTP function that uses a large number of system resources (memory/cpu/sockets) such that it causes issues when concurrency is too high. Or you might have a function that makes outbound requests to a third-party service, and those calls need to be rate limited. In these cases, applying a throttle here can help. <br/><sup>*</sup>The default for a Consumption plan is 100. The default for a Dedicated plan is unbounded (`-1`).| |maxOutstandingRequests|200<sup>\*</sup>|The maximum number of outstanding requests that are held at any given time. This limit includes requests that are queued but have not started executing, as well as any in progress executions. Any incoming requests over this limit are rejected with a 429 "Too Busy" response. That allows callers to employ time-based retry strategies, and also helps you to control maximum request latencies. This only controls queuing that occurs within the script host execution path. Other queues such as the ASP.NET request queue will still be in effect and unaffected by this setting. <br/><sup>\*</sup>The default for a Consumption plan is 200. The default for a Dedicated plan is unbounded (`-1`).| |routePrefix|api|The route prefix that applies to all routes. Use an empty string to remove the default prefix. |
azure-functions Functions Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-scale.md
This article provides a detailed comparison between the various hosting plans, a
The following is a summary of the benefits of the three main hosting plans for Functions:
-| | |
+| Plan | Benefits |
| | | |**[Consumption plan](consumption-plan.md)**| Scale automatically and only pay for compute resources when your functions are running.<br/><br/>On the Consumption plan, instances of the Functions host are dynamically added and removed based on the number of incoming events.<br/><br/> ✔ Default hosting plan.<br/>✔ Pay only when your functions are running.<br/>✔ Scales automatically, even during periods of high load.| |**[Premium plan](functions-premium-plan.md)**|Automatically scales based on demand using pre-warmed workers which run applications with no delay after being idle, runs on more powerful instances, and connects to virtual networks. <br/><br/>Consider the Azure Functions Premium plan in the following situations: <br/><br/>✔ Your function apps run continuously, or nearly continuously.<br/>✔ You have a high number of small executions and a high execution bill, but low GB seconds in the Consumption plan.<br/>✔ You need more CPU or memory options than what is provided by the Consumption plan.<br/>✔ Your code needs to run longer than the maximum execution time allowed on the Consumption plan.<br/>✔ You require features that aren't available on the Consumption plan, such as virtual network connectivity.|
The following is a summary of the benefits of the three main hosting plans for F
The comparison tables in this article also include the following hosting options, which provide the highest amount of control and isolation in which to run your function apps.
-| | |
+| Hosting option | Details |
| | | |**[ASE](dedicated-plan.md)** | App Service Environment (ASE) is an App Service feature that provides a fully isolated and dedicated environment for securely running App Service apps at high scale.<br/><br/>ASEs are appropriate for application workloads that require: <br/><br/>✔ Very high scale.<br/>✔ Full compute isolation and secure network access.<br/>✔ High memory usage.| | **[Kubernetes](functions-kubernetes-keda.md)** | Kubernetes provides a fully isolated and dedicated environment running on top of the Kubernetes platform.<br/><br/> Kubernetes is appropriate for application workloads that require: <br/>✔ Custom hardware requirements.<br/>✔ Isolation and secure network access.<br/>✔ Ability to run in hybrid or multi-cloud environment.<br/>✔ Run alongside existing Kubernetes applications and services.|
The following table shows supported operating system and language runtime suppor
The following table compares the scaling behaviors of the various hosting plans.
-| | Scale out | Max # instances |
+| Plan | Scale out | Max # instances |
| | | | | **[Consumption plan](consumption-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of incoming trigger events. | 200 | | **[Premium plan](functions-premium-plan.md)** | [Event driven](event-driven-scaling.md). Scale out automatically, even during periods of high load. Azure Functions infrastructure scales CPU and memory resources by adding additional instances of the Functions host, based on the number of events that its functions are triggered on. |100|
The following table compares the scaling behaviors of the various hosting plans.
## Cold start behavior
-| | |
+| Plan | Details |
| -- | -- | | **[Consumption&nbsp;plan](consumption-plan.md)** | Apps may scale to zero when idle, meaning some requests may have additional latency at startup. The consumption plan does have some optimizations to help decrease cold start time, including pulling from pre-warmed placeholder functions that already have the function host and language processes running. | | **[Premium plan](functions-premium-plan.md)** | Perpetually warm instances to avoid any cold start. |
-| **[Dedicated plan](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isnΓÇÖt really an issue. |
-| **[ASE](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isnΓÇÖt really an issue. |
+| **[Dedicated plan](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
+| **[ASE](dedicated-plan.md)** | When running in a Dedicated plan, the Functions host can run continuously, which means that cold start isn't really an issue. |
| **[Kubernetes](functions-kubernetes-keda.md)** | Depending on KEDA configuration, apps can be configured to avoid a cold start. If configured to scale to zero, then a cold start is experienced for new events. ## Service limits
The following table compares the scaling behaviors of the various hosting plans.
## Billing
-| | |
+| Plan | Details |
| | | | **[Consumption plan](consumption-plan.md)** | Pay only for the time your functions run. Billing is based on number of executions, execution time, and memory used. | | **[Premium plan](functions-premium-plan.md)** | Premium plan is based on the number of core seconds and memory used across needed and pre-warmed instances. At least one instance per plan must be kept warm at all times. This plan provides the most predictable pricing. |
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
# Azure services by FedRAMP and DoD CC SRG audit scope
-MicrosoftΓÇÖs government cloud services meet the demanding requirements of the US Federal Risk & Authorization Management Program (FedRAMP) and of the US Department of Defense, from information impact levels 2 through 6. By deploying protected services including Azure Government, Office 365 U.S. Government, and Dynamics 365 Government, federal and defense agencies can leverage a rich array of compliant services.
+Microsoft's government cloud services meet the demanding requirements of the US Federal Risk & Authorization Management Program (FedRAMP) and of the US Department of Defense, from information impact levels 2 through 6. By deploying protected services including Azure Government, Office 365 U.S. Government, and Dynamics 365 Government, federal and defense agencies can leverage a rich array of compliant services.
This article provides a detailed list of in-scope cloud services across Azure Public and Azure Government for FedRAMP and DoD CC SRG compliance offerings.
This article provides a detailed list of in-scope cloud services across Azure Pu
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: | | [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure Stack Hub](/azure-stack/operator/azure-stack-overview?view=azs-2002)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
+| [Azure Stack Hub](/azure-stack/operator/azure-stack-overview)| :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | :heavy_check_mark: |
| [Backup](https://azure.microsoft.com/services/backup/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Batch](https://azure.microsoft.com/services/batch/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | [Cloud Services](https://azure.microsoft.com/services/cloud-services/) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
azure-monitor Azure Monitor Agent Install https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/agents/azure-monitor-agent-install.md
The following prerequisites are required prior to installing the Azure Monitor a
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc enabled servers. The system identity will be enabled automatically if the agent is installed as part of the process for [creating and assigning a data collection rule using the Azure portal](#install-with-azure-portal). - The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine.
+> [!IMPORTANT]
+> The Azure Monitor agent does not currently support network proxies.
+ ## Virtual machine extension details The Azure Monitor Agent is implemented as an [Azure VM extension](../../virtual-machines/extensions/overview.md) with the details in the following table. It can be installed using any of the methods to install virtual machine extensions including those described in this article.
azure-monitor Asp Net Exceptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/asp-net-exceptions.md
Unhandled exceptions originating from controllers typically result in 500 "Inter
### Prior versions support If you use MVC 4 (and prior) of Application Insights Web SDK 2.5 (and prior), refer to the following examples to track exceptions.
-If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, then exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it is `RemoteOnly` (default), or `On`, then the exception will be cleared and not available for Application Insights to automatically collect. You can fix that by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute?view=aspnet-mvc-5.2), and applying the overridden class as shown for the different MVC versions below ([GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
+If the [CustomErrors](/previous-versions/dotnet/netframework-4.0/h0hfz6fc(v=vs.100)) configuration is `Off`, then exceptions will be available for the [HTTP Module](/previous-versions/dotnet/netframework-3.0/ms178468(v=vs.85)) to collect. However, if it is `RemoteOnly` (default), or `On`, then the exception will be cleared and not available for Application Insights to automatically collect. You can fix that by overriding the [System.Web.Mvc.HandleErrorAttribute class](/dotnet/api/system.web.mvc.handleerrorattribute), and applying the overridden class as shown for the different MVC versions below ([GitHub source](https://github.com/AppInsightsSamples/Mvc2UnhandledExceptions/blob/master/MVC2App/Controllers/AiHandleErrorAttribute.cs)):
```csharp using System;
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/data-platform-metrics.md
na Previously updated : 03/26/2019 Last updated : 02/20/2021
The following table lists the different ways that you can use Metrics in Azure M
| **Alert** | Configure a [metric alert rule](../alerts/alerts-metric.md) that sends a notification or takes [automated action](../alerts/action-groups.md) when the metric value crosses a threshold. | | **Visualize** | Pin a chart from metrics explorer to an [Azure dashboard](../app/tutorial-app-dashboards.md).<br>Create a [workbook](../visualize/workbooks-overview.md) to combine with multiple sets of data in an interactive report.<br>Export the results of a query to [Grafana](../visualize/grafana-plugin.md) to leverage its dashboarding and combine with other data sources. | | **Automate** | Use [Autoscale](../autoscale/autoscale-overview.md) to increase or decrease resources based on a metric value crossing a threshold. |
-| **Retrieve** | Access metric values from a command line using [PowerShell cmdlets](/powershell/module/az.applicationinsights)<br>Access metric values from custom application using [REST API](./rest-api-walkthrough.md).<br>Access metric values from a command line using [CLI](/cli/azure/monitor/metrics). |
+| **Retrieve** | Access metric values from a command line using [PowerShell cmdlets](/powershell/module/az.monitor).<br>Access metric values from a custom application using [REST API](./rest-api-walkthrough.md) (see the sketch after this table).<br>Access metric values from a command line using [CLI](/cli/azure/monitor/metrics). |
| **Export** | [Route Metrics to Logs](./resource-logs.md#send-to-azure-storage) to analyze data in Azure Monitor Metrics together with data in Azure Monitor Logs and to store metric values for longer than 93 days.<br>Stream Metrics to an [Event Hub](./stream-monitoring-data-event-hubs.md) to route them to external systems. | | **Archive** | [Archive](./platform-logs-overview.md) the performance or health history of your resource for compliance, auditing, or offline reporting purposes. |
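In addition to PowerShell, the REST API, and the CLI listed above, metric values can also be retrieved from a .NET application. The following is a hedged sketch that assumes the `Azure.Monitor.Query` and `Azure.Identity` client libraries and uses a placeholder resource ID; it is an illustration, not the article's own sample:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class Program
{
    static async Task Main()
    {
        // Placeholder: substitute the full Azure Resource Manager ID of your resource.
        string resourceId = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>";

        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // Query the "Percentage CPU" platform metric for the resource.
        MetricsQueryResult result = (await client.QueryResourceAsync(resourceId, new[] { "Percentage CPU" })).Value;

        foreach (MetricResult metric in result.Metrics)
        {
            foreach (MetricTimeSeriesElement series in metric.TimeSeries)
            {
                foreach (MetricValue point in series.Values)
                {
                    Console.WriteLine($"{point.TimeStamp}: avg={point.Average}");
                }
            }
        }
    }
}
```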
This metric can answer questions such as "what was the network throughput for ea
For most resources in Azure, metrics are stored for 93 days. There are some exceptions: **Guest OS metrics**-- **Classic guest OS metrics**. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) or the [Linux Diagnostic Extension (LAD)](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is 14 days.
+- **Classic guest OS metrics**. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) or the [Linux Diagnostic Extension (LAD)](../../virtual-machines/extensions/diagnostics-linux.md) and routed to an Azure storage account. Retention for these metrics is guaranteed to be at least 14 days, though no actual expiration date is written to the storage account. For performance reasons, the portal limits how much data it displays based on volume. Therefore, the actual number of days retrieved by the portal can be longer than 14 days if the volume of data being written is not very large.
- **Guest OS metrics sent to Azure Monitor Metrics**. These are performance counters collected by the [Windows Diagnostic Extension (WAD)](../agents/diagnostics-extension-overview.md) and sent to the [Azure Monitor data sink](../agents/diagnostics-extension-overview.md#data-destinations), or via the [InfluxData Telegraf Agent](https://www.influxdata.com/time-series-platform/telegraf/) on Linux machines. Retention for these metrics is 93 days. - **Guest OS metrics collected by Log Analytics agent**. These are performance counters collected by the Log Analytics agent and sent to a Log Analytics workspace. Retention for these metrics is 31 days, and can be extended up to 2 years.
azure-monitor Metrics Aggregation Explained https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/metrics-aggregation-explained.md
Previously updated : 01/12/2020 Last updated : 03/10/2021+ # Azure Monitor Metrics metrics aggregation and display explained
When you add a metric to a chart, metrics explorer automatically pre-selects its
Let's define a few terms clearly first: - **Metric value** ΓÇô A single measurement value gathered for a specific resource.
+- **Time-series database** - A database optimized for storing and retrieving data points that each contain a value and a corresponding time stamp.
- **Time period** ΓÇô A generic period of time. - **Time interval** ΓÇô The period of time between the gathering of two metric values. - **Time range** ΓÇô The time period displayed on a chart. Typical default is 24 hours. Only specific ranges are available.
Let's define a few terms clearly first:
- **Aggregation type** ΓÇô A type of statistic calculated from multiple metric values. - **Aggregate** ΓÇô The process of taking multiple input values and then using them to produce a single output value via the rules defined by the aggregation type. For example, taking an average of multiple values.
-Metrics are a series of metric values captured at a regular time interval. When you plot a chart, the values of the selected metric are separately aggregated over the time granularity (also known as time grain). You select the size of the time granularity using the [Metrics Explorer time picker panel](../essentials/metrics-getting-started.md#select-a-time-range). If you donΓÇÖt make an explicit selection, the time granularity is automatically selected based on the currently selected time range. Once selected, the metric values that were captured during each time granularity interval are aggregated and placed onto the chart - one datapoint per interval.
+## Summary of process
+
+Metrics are a series of values stored with a time-stamp. In Azure, most metrics are stored in the Azure Metrics time-series database. When you plot a chart, the values of the selected metrics are retrieved from the database and then separately aggregated based on the chosen time granularity (also known as time grain). You select the size of the time granularity using the [Metrics Explorer time picker panel](../essentials/metrics-getting-started.md#select-a-time-range). If you don't make an explicit selection, the time granularity is automatically selected based on the currently selected time range. Once selected, the metric values that were captured during each time granularity interval are aggregated and placed onto the chart - one datapoint per interval.
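To make the aggregation step concrete, the following is a purely illustrative sketch (not how the Azure Metrics database itself works) that groups raw one-minute metric values into five-minute time grains and averages each bucket, producing one datapoint per interval:

```csharp
using System;
using System.Linq;

record Sample(DateTime TimeStamp, double Value);

class Program
{
    static void Main()
    {
        // Illustrative raw values captured once a minute from 10:00 to 10:14.
        var raw = Enumerable.Range(0, 15)
            .Select(i => new Sample(new DateTime(2021, 3, 1, 10, i, 0), 40 + i))
            .ToList();

        TimeSpan grain = TimeSpan.FromMinutes(5);

        // One output datapoint per time-grain interval: the average of the values within it.
        var chartPoints = raw
            .GroupBy(s => new DateTime(s.TimeStamp.Ticks / grain.Ticks * grain.Ticks))
            .OrderBy(g => g.Key)
            .Select(g => new { Interval = g.Key, Average = g.Average(s => s.Value) });

        foreach (var p in chartPoints)
            Console.WriteLine($"{p.Interval:HH:mm} -> {p.Average:F1}");  // 10:00 -> 42.0, 10:05 -> 47.0, 10:10 -> 52.0
    }
}
```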
## Aggregation types
It is important to establish what's "normal" for your workload to know what time
## How the system collects metrics
-Data collection varies by metric. There are two types of collection periods.
+Data collection varies by metric.
### Measurement collection frequency
+There are two types of collection periods.
+ - **Regular** - The metric is gathered at a consistent time interval that does not vary. - **Activity-based** - The metric is gathered based on when a transaction of a certain type occurs. Each transaction has a metric entry and a time stamp. They are not gathered at regular intervals so there are a varying number of records over a given time period.
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/essentials/monitor-azure-resource.md
Some monitoring data is collected automatically, but you may need to perform som
- [Platform metrics](../essentials/data-platform-metrics.md) - Platform metrics are collected automatically into [Azure Monitor Metrics](../essentials/data-platform-metrics.md) with no configuration required. Create a diagnostic setting to send entries to Azure Monitor Logs or to forward them outside of Azure. - [Resource logs](./platform-logs-overview.md) - Resource logs are automatically generated by Azure resources but not collected without a diagnostic setting. Create a diagnostic setting to send entries to Azure Monitor Logs or to forward them outside of Azure.-- [Activity log](./platform-logs-overview.md) - The Activity log is collected automatically with no configuration required and can be view in the Azure portal. Create a diagnostic setting to copy them to Azure Monitor Logs or to forward them outside of Azure.
+- [Activity log](./platform-logs-overview.md) - The Activity log is collected automatically with no configuration required and can be viewed in the Azure portal. Create a diagnostic setting to copy them to Azure Monitor Logs or to forward them outside of Azure.
### Log Analytics workspace Collecting data into Azure Monitor Logs requires a Log Analytics workspace. You can start monitoring your service quickly by creating a new workspace, but there may be value in using a workspace that's collecting data from other services. See [Create a Log Analytics workspace in the Azure portal](../logs/quick-create-workspace.md) for details on creating a workspace and [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) to help determine the best workspace design for your requirements. If you use an existing workspace in your organization, then you will require appropriate permissions as described in [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md).
Use **Alerts** from a resource's menu to view alerts and manage alert rules for
## Next steps
-* See [Supported services, schemas, and categories for Azure Resource Logs](./resource-logs-schema.md) for details of resource logs for different Azure services.
+* See [Supported services, schemas, and categories for Azure Resource Logs](./resource-logs-schema.md) for details of resource logs for different Azure services.
azure-monitor Wire Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/wire-data.md
description: Wire data is consolidated network and performance data from compute
Previously updated : 05/29/2020 Last updated : 03/26/2021
-# Wire Data 2.0 (Preview) solution in Azure Monitor
+# Wire Data 2.0 (Preview) solution in Azure Monitor (Retired)
![Wire Data symbol](media/wire-data/wire-data2-symbol.png)
-Wire data is consolidated network and performance data collected from Windows-connected and Linux-connected computers with the Log Analytics agent, including those monitored by Operations Manager in your environment. Network data is combined with your other log data to help you correlate data.
-
-In addition to the Log Analytics agent, the Wire Data solution uses Microsoft Dependency Agents that you install on computers in your IT infrastructure. Dependency Agents monitor network data sent to and from your computers for network levels 2-3 in the [OSI model](https://en.wikipedia.org/wiki/OSI_model), including the various protocols and ports used. Data is then sent to Azure Monitor using agents.
- >[!NOTE]
->The Wire Data solution has been replaced with the [Service Map solution](../vm/service-map.md). Both use the Log Analytics agent and Dependency agent to collect network connection data into Azure Monitor.
->
->Existing customers using the Wire Data solution may continue to use it. We will publish guidance for a migration timeline for moving to Service Map.
+>The Wire Data solution has been replaced with [VM insights](../vm/vminsights-overview.md) and [Service Map solution](../vm/service-map.md). Both use the Log Analytics agent and Dependency agent to collect network connection data into Azure Monitor.
>
->New customers should install the [Service Map solution](../vm/service-map.md) or [VM insights](../vm/vminsights-overview.md). The Service Map data set is comparable to Wire Data. VM insights includes the Service Map data set with additional performance data and features for analysis.
--
-By default, Azure Monitor logs data for CPU, memory, disk, and network performance data from counters built into Windows and Linux, as well as other performance counters that you can specify. Network and other data collection is done in real-time for each agent, including subnets and application-level protocols being used by the computer. Wire Data looks at network data at the application level, not down at the TCP transport layer.  The solution doesn't look at individual ACKs and SYNs.  Once the handshake is completed, it is considered a live connection and marked as Connected. That connection stays live as long as both sides agree the socket is open and data can pass back and forth.  Once either side closes the connection, it is marked as Disconnected. Therefore, it only counts the bandwidth of successfully completed packets, it doesn't report on resends or failed packets.
-
-If you've used [sFlow](http://www.sflow.org/) or other software with [Cisco's NetFlow protocol](https://www.cisco.com/c/en/us/products/collateral/ios-nx-os-software/ios-netflow/prod_white_paper0900aecd80406232.html), then the statistics and data you see from wire data will be familiar to you.
-
-Some of the types of built-in Log search queries include:
--- Agents that provide wire data-- IP address of agents providing wire data-- Outbound communications by IP addresses-- Number of bytes sent by application protocols-- Number of bytes sent by an application service-- Bytes received by different protocols-- Total bytes sent and received by IP version-- Average latency for connections that were measured reliably-- Computer processes that initiated or received network traffic-- Amount of network traffic for a process-
-When you search using wire data, you can filter and group data to view information about the top agents and top protocols. Or you can view when certain computers (IP addresses/MAC addresses) communicated with each other, for how long, and how much data was sentΓÇöbasically, you view metadata about network traffic, which is search-based.
-
-However, since you're viewing metadata, it's not necessarily useful for in-depth troubleshooting. Wire data in Azure Monitor is not a full capture of network data. It is not intended for deep packet-level troubleshooting. The advantage of using the agent, compared to other collection methods, is that you don't have to install appliances, reconfigure your network switches, or perform complicated configurations. Wire data is simply agent-basedΓÇöyou install the agent on a computer and it will monitor its own network traffic. Another advantage is when you want to monitor workloads running in cloud providers or hosting service provider or Microsoft Azure, where the user doesn't own the fabric layer.
-
-## Connected sources
-
-Wire Data gets its data from the Microsoft Dependency Agent. The Dependency Agent depends on the Log Analytics agent for its connections to Azure Monitor. This means that a server must have the Log Analytics agent installed and configured with the Dependency agent. The following table describes the connected sources that the Wire Data solution supports.
-
-| **Connected source** | **Supported** | **Description** |
-| | | |
-| Windows agents | Yes | Wire Data analyzes and collects data from Windows agent computers. <br><br> In addition to the [Log Analytics agent for Windows](../agents/agent-windows.md), Windows agents require the Microsoft Dependency agent. See the [supported operating systems](../vm/vminsights-enable-overview.md#supported-operating-systems) for a complete list of operating system versions. |
-| Linux agents | Yes | Wire Data analyzes and collects data from Linux agent computers.<br><br> In addition to the [Log Analytics agent for Linux](../vm/quick-collect-linux-computer.md), Linux agents require the Microsoft Dependency agent. See the [supported operating systems](../vm/vminsights-enable-overview.md#supported-operating-systems) for a complete list of operating system versions. |
-| System Center Operations Manager management group | Yes | Wire Data analyzes and collects data from Windows and Linux agents in a connected [System Center Operations Manager management group](../agents/om-agents.md). <br><br> A direct connection from the System Center Operations Manager agent computer to Azure Monitor is required. |
-| Azure storage account | No | Wire Data collects data from agent computers, so there is no data from it to collect from Azure Storage. |
-
-On Windows, the Microsoft Monitoring Agent (MMA) is used by both System Center Operations Manager and Azure Monitor to gather and send data. Depending on the context, the agent is called the System Center Operations Manager Agent, Log Analytics agent, MMA, or Direct Agent. System Center Operations Manager and Azure Monitor provide slightly different versions of the MMA. These versions can each report to System Center Operations Manager, to Azure Monitor, or to both.
-
-On Linux, the Log Analytics agent for Linux gathers and sends data to Azure Monitor. You can use Wire Data on servers with agents directly connected to Azure Monitor, or on servers that are connecting to Azure Monitor via System Center Operations Manager management groups.
-
-The Dependency agent does not transmit any data itself, and it does not require any changes to firewalls or ports. The data in Wire Data is always transmitted by the Log Analytics agent to Azure Monitor, either directly or through the Log Analytics gateway.
-
-![agent diagram](./media/wire-data/agents.png)
-
-If you are a System Center Operations Manager user with a management group connected to Azure Monitor:
--- No additional configuration is required when your System Center Operations Manager agents can access the internet to connect to Azure Monitor.-- You need to configure the Log Analytics gateway to work with System Center Operations Manager when your System Center Operations Manager agents cannot access Azure Monitor over the internet.-
-If your Windows or Linux computers cannot directly connect to the service, you need to configure the Log Analytics agent to connect to Azure Monitor using the Log Analytics gateway. You can download the Log Analytics gateway from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=52666).
-
-## Prerequisites
--- Requires the [Insight and Analytics](https://www.microsoft.com/cloud-platform/operations-management-suite-pricing) solution offer.-- If you're using the previous version of the Wire Data solution, you must first remove it. However, all data captured through the original Wire Data solution is still available in Wire Data 2.0 and log search.-- Administrator privileges are required to install or uninstall the Dependency agent.-- The Dependency agent must be installed on a computer with a 64-bit operating system.-
-### Operating systems
-
-The following sections list the supported operating systems for the Dependency agent. Wire Data doesn't support 32-bit architectures for any operating system.
-
-#### Windows Server
--- Windows Server 2019-- Windows Server 2016 1803-- Windows Server 2016-- Windows Server 2012 R2-- Windows Server 2012-- Windows Server 2008 R2 SP1-
-#### Windows desktop
--- Windows 10 1803-- Windows 10-- Windows 8.1-- Windows 8-- Windows 7-
-#### Supported Linux operating systems
-The following sections list the supported operating systems for the Dependency agent on Linux.
--- Only default and SMP Linux kernel releases are supported.-- Nonstandard kernel releases, such as PAE and Xen, are not supported for any Linux distribution. For example, a system with the release string of "2.6.16.21-0.8-xen" is not supported.-- Custom kernels, including recompiles of standard kernels, are not supported.-
-##### Red Hat Linux 7
-
-| OS version | Kernel version |
-|:--|:--|
-| 7.4 | 3.10.0-693 |
-| 7.5 | 3.10.0-862 |
-| 7.6 | 3.10.0-957 |
-
-##### Red Hat Linux 6
-
-| OS version | Kernel version |
-|:--|:--|
-| 6.9 | 2.6.32-696 |
-| 6.10 | 2.6.32-754 |
-
-##### CentOSPlus
-| OS version | Kernel version |
-|:--|:--|
-| 6.9 | 2.6.32-696.18.7<br>2.6.32-696.30.1 |
-| 6.10 | 2.6.32-696.30.1<br>2.6.32-754.3.5 |
-
-##### Ubuntu Server
-
-| OS version | Kernel version |
-|:--|:--|
-| Ubuntu 18.04 | kernel 4.15.\*<br>4.18* |
-| Ubuntu 16.04.3 | kernel 4.15.* |
-| 16.04 | 4.4.\*<br>4.8.\*<br>4.10.\*<br>4.11.\*<br>4.13.\* |
-| 14.04 | 3.13.\*<br>4.4.\* |
-
-##### SUSE Linux 11 Enterprise Server
-
-| OS version | Kernel version
-|:--|:--|
-| 11 SP4 | 3.0.* |
-
-##### SUSE Linux 12 Enterprise Server
-
-| OS version | Kernel version
-|:--|:--|
-| 12 SP2 | 4.4.* |
-| 12 SP3 | 4.4.* |
-
-### Dependency agent downloads
-
-| File | OS | Version | SHA-256 |
-|:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.7.4 | A111B92AB6CF28EB68B696C60FE51F980BFDFF78C36A900575E17083972989E0 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.7.4 | AB58F3DB8B1C3DEE7512690E5A65F1DFC41B43831543B5C040FCCE8390F2282C |
---
-## Configuration
-
-Perform the following steps to configure the Wire Data solution for your workspaces.
-
-1. Enable the Activity Log Analytics solution from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.WireData2OMS?tab=Overview) or by using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md).
-2. Install the Dependency agent on each computer where you want to get data. The Dependency agent can monitor connections to immediate neighbors, so you might not need an agent on every computer.
-
-> [!NOTE]
-> You cannot add the previous version of the Wire Data solution to new workspaces. If you have the original Wire Data solution enabled, you can continue to use it. However, to use Wire Data 2.0, you must first remove the original version.
->
+>Support for Wire Data solution will end on **March 31, 2022**. Until the retirement date, existing customers using the Wire Data 2.0 (preview) solution may continue to use it.
+>
+>New and existing customers should install the [VM insights](../vm/vminsights-enable-overview.md) or [Service Map solution](../vm/service-map.md). The Map data set they collect is comparable to the Wire Data 2.0 (preview) data set. VM insights includes the Service Map data set along with additional performance data and features for analysis. Both offerings have [connections with Azure Sentinel](https://docs.microsoft.com/azure/sentinel/connect-data-sources#map-data-types-with-azure-sentinel-connection-options).
-### Install the Dependency agent on Windows
-
-Administrator privileges are required to install or uninstall the agent.
-The Dependency agent is installed on computers running Windows through InstallDependencyAgent-Windows.exe. If you run this executable file without any options, it starts a wizard that you can follow to install interactively.
-
-Use the following steps to install the Dependency agent on each computer running Windows:
-
-1. Install the Log Analytics agent following the steps in [Collect data from Windows computers hosted in your environment](../agents/agent-windows.md).
-2. Download the Windows Dependency agent using the link in the previous section and then run it by using the following command: `InstallDependencyAgent-Windows.exe`
-3. Follow the wizard to install the agent.
-4. If the Dependency agent fails to start, check the logs for detailed error information. For Windows agents, the log directory is %Programfiles%\Microsoft Dependency Agent\logs.
-
-#### Windows command line
-
-Use options from the following table to install from a command line. To see a list of the installation flags, run the installer by using the /? flag as follows.
-
-InstallDependencyAgent-Windows.exe /?
-
-| **Flag** | **Description** |
-| | |
-| <code>/?</code> | Get a list of the command-line options. |
-| <code>/S</code> | Perform a silent installation with no user prompts. |
+Wire data is consolidated network and performance data collected from Windows-connected and Linux-connected computers with the Log Analytics agent, including those monitored by Operations Manager in your environment. Network data is combined with your other log data to help you correlate data.
-Files for the Windows Dependency agent are placed in C:\Program Files\Microsoft Dependency agent by default.
+In addition to the Log Analytics agent, the Wire Data solution uses Microsoft Dependency Agents that you install on computers in your IT infrastructure. Dependency Agents monitor network data sent to and from your computers for network levels 2-3 in the [OSI model](https://en.wikipedia.org/wiki/OSI_model), including the various protocols and ports used. Data is then sent to Azure Monitor using agents.
-### Install the Dependency agent on Linux
+## Migrate to Azure Monitor VM insights or Service Map
-Root access is required to install or configure the agent.
+Many customers already have both Wire Data 2.0 (preview) and [VM insights](../vm/vminsights-overview.md) or [Service Map solution](../vm/service-map.md) enabled on the same VMs. In that case, the replacement offering is already enabled on your VM, and you can simply [remove the Wire Data 2.0 (preview) solution from your Log Analytics workspace](https://docs.microsoft.com/azure/azure-monitor/insights/solutions?tabs=portal#remove-a-monitoring-solution).
-The Dependency agent is installed on Linux computers through InstallDependencyAgent-Linux64.bin, a shell script with a self-extracting binary. You can run the file by using _sh_ or add execute permissions to the file itself.
+If you have VMs that only have Wire Data 2.0 (preview) enabled on them, then you can onboard the VMs to [VM insights](../vm/vminsights-enable-overview.md) or [Service Map solution](../vm/service-map.md) and then [remove the Wire Data 2.0 (preview) solution from your Log Analytics workspace](https://docs.microsoft.com/azure/azure-monitor/insights/solutions?tabs=portal#remove-a-monitoring-solution).
-Use the following steps to install the Dependency agent on each Linux computer:
+## Migrate your queries to the VMConnection table from Azure Monitor VM insights
-1. Install the Log Analytics agent following the steps in [Collect data from Linux computers hosted in your environment](../vm/quick-collect-linux-computer.md#obtain-workspace-id-and-key).
-2. Download the Linux Dependency agent using the link in the previous section and then install it as root by using the following command:
-sh InstallDependencyAgent-Linux64.bin
-3. If the Dependency agent fails to start, check the logs for detailed error information. On Linux agents, the log directory is: /var/opt/microsoft/dependency-agent/log.
+### Agents providing data
-To see a list of the installation flags, run the installation program with the `-help` flag as follows.
+#### Wire Data 2.0 query
```
-InstallDependencyAgent-Linux64.bin -help
+WireData
+| summarize AggregatedValue = sum(TotalBytes) by Computer
+| limit 500000
```
-| **Flag** | **Description** |
-| | |
-| <code>-help</code> | Get a list of the command-line options. |
-| <code>-s</code> | Perform a silent installation with no user prompts. |
-| <code>--check</code> | Check permissions and the operating system but do not install the agent. |
-
-Files for the Dependency agent are placed in the following directories:
-
-| **Files** | **Location** |
-| | |
-| Core files | /opt/microsoft/dependency-agent |
-| Log files | /var/opt/microsoft/dependency-agent/log |
-| Config files | /etc/opt/microsoft/dependency-agent/config |
-| Service executable files | /opt/microsoft/dependency-agent/bin/microsoft-dependency-agent<br><br>/opt/microsoft/dependency-agent/bin/microsoft-dependency-agent-manager |
-| Binary storage files | /var/opt/microsoft/dependency-agent/storage |
-
-### Installation script examples
-
-To easily deploy the Dependency agent on many servers at once, it helps to use a script. You can use the following script examples to download and install the Dependency agent on either Windows or Linux.
-
-#### PowerShell script for Windows
-
-```powershell
-
-Invoke-WebRequest "https://aka.ms/dependencyagentwindows" -OutFile InstallDependencyAgent-Windows.exe
-
-.\InstallDependencyAgent-Windows.exe /S
+#### VM insights and Service Map query
+```
+VMConnection
+| summarize AggregatedValue = sum(BytesReceived + BytesSent) by Computer
+| limit 500000
```
-#### Shell script for Linux
+### IP Addresses of the agents providing data
-```
-wget --content-disposition https://aka.ms/dependencyagentlinux -O InstallDependencyAgent-Linux64.bin
-```
+#### Wire Data 2.0 query
```
-sh InstallDependencyAgent-Linux64.bin -s
+WireData
+| summarize AggregatedValue = count() by LocalIP
```
-### Desired State Configuration
+#### VM insights and Service Map query
-To deploy the Dependency agent via Desired State Configuration, you can use the xPSDesiredStateConfiguration module and a bit of code like the following:
+```
+VMComputer
+| distinct Computer, tostring(Ipv4Addresses)
+```
-```powershell
-Import-DscResource -ModuleName xPSDesiredStateConfiguration
+### All Outbound communications by Remote IP Address
-$DAPackageLocalPath = "C:\InstallDependencyAgent-Windows.exe"
+#### Wire Data 2.0 query
+```
+WireData
+| where Direction == "Outbound"
+| summarize AggregatedValue = count() by RemoteIP
+```
+#### VM insights and Service Map query
-Node $NodeName
+```
+VMConnection
+| where Direction == "outbound"
+| summarize AggregatedValue = count() by RemoteIp
+```
-{
+### Bytes received by Protocol Name
- # Download and install the Dependency agent
+#### Wire Data 2.0 query
- xRemoteFile DAPackage
+```
+WireData
+| where Direction == "Inbound"
+| summarize AggregatedValue = sum(ReceivedBytes) by ProtocolName
+```
- {
+#### VM insights and Service Map query
- Uri = "https://aka.ms/dependencyagentwindows"
+```
+VMConnection
+| where Direction == "inbound"
+| summarize AggregatedValue = sum(BytesReceived) by Protocol
+```
- DestinationPath = $DAPackageLocalPath
+### Amount of Network Traffic (in Bytes) by Process
- DependsOn = "[Package]OI"
+#### Wire Data 2.0 query
- }
+```
+WireData
+| summarize AggregatedValue = sum(TotalBytes) by ProcessName
+```
- xPackage DA
+#### VM insights and Service Map query
- {
+```
+VMConnection
+| summarize sum(BytesReceived), sum(BytesSent) by ProcessName
+```
- Ensure = "Present"
+### More examples queries
- Name = "Dependency Agent"
+Refer to the [VM insights log search documentation](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-log-search) and the [VM insights alert documentation](https://docs.microsoft.com/azure/azure-monitor/vm/vminsights-alerts#sample-alert-queries) for additional example queries.
- Path = $DAPackageLocalPath
+## Uninstall Wire Data 2.0 Solution
- Arguments = '/S'
+To uninstall Wire Data 2.0, remove the solution from your Log Analytics workspace(s). This will result in the following:
- ProductId = ""
+* the Wire Data Management pack being removed from the VMs that are connected to the Workspace
+* the Wire Data data type no longer appearing in your Workspace
- InstalledCheckRegKey = "HKEY\_LOCAL\_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\DependencyAgent"
+Follow [these instructions](https://docs.microsoft.com/azure/azure-monitor/insights/solutions?tabs=portal#remove-a-monitoring-solution) to remove the Wire Data solution.
- InstalledCheckRegValueName = "DisplayName"
+>[!NOTE]
+>If you have either the Service Map or VM insights solution on your workspace then the management pack will not be removed, as these solutions also use this management pack.
- InstalledCheckRegValueData = "Dependency Agent"
+### Wire Data 2.0 Management packs
- }
+When Wire Data is activated in a Log Analytics workspace, a 300-KB management pack is sent to all the Windows servers in that workspace. If you are using System Center Operations Manager agents in a [connected management group](../agents/om-agents.md), the Dependency Monitor management pack is deployed from System Center Operations Manager. If the agents are directly connected, Azure Monitor delivers the management pack.
-}
+The management pack is named Microsoft.IntelligencePacks.ApplicationDependencyMonitor. It's written to: %Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs. The data source that the management pack uses is: %Program files%\Microsoft Monitoring Agent\Agent\Health Service State\Resources&lt;AutoGeneratedID&gt;\Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll.
-```
+## Uninstall the Dependency agent
-### Uninstall the Dependency agent
+>[!NOTE]
+>If you plan to replace Wire Data with either Service Map or VM insights, you should not remove the Dependency agent.
-Use the following sections to help you remove the Dependency agent.
+Use the following sections to help you remove the Dependency agent.
-#### Uninstall the Dependency agent on Windows
+### Uninstall the Dependency agent on Windows
An administrator can uninstall the Dependency agent for Windows through Control Panel. An administrator can also run %Programfiles%\Microsoft Dependency Agent\Uninstall.exe to uninstall the Dependency agent.
-#### Uninstall the Dependency agent on Linux
+### Uninstall the Dependency agent on Linux
To completely uninstall the Dependency agent from Linux, you must remove the agent itself and the connector, which is installed automatically with the agent. You can uninstall both by using the following single command:
To completely uninstall the Dependency agent from Linux, you must remove the age
rpm -e dependency-agent dependency-agent-connector ```
-## Management packs
-
-When Wire Data is activated in a Log Analytics workspace, a 300-KB management pack is sent to all the Windows servers in that workspace. If you are using System Center Operations Manager agents in a [connected management group](../agents/om-agents.md), the Dependency Monitor management pack is deployed from System Center Operations Manager. If the agents are directly connected, Azure Monitor delivers the management pack.
-
-The management pack is named Microsoft.IntelligencePacks.ApplicationDependencyMonitor. It's written to: %Programfiles%\Microsoft Monitoring Agent\Agent\Health Service State\Management Packs. The data source that the management pack uses is: %Program files%\Microsoft Monitoring Agent\Agent\Health Service State\Resources&lt;AutoGeneratedID&gt;\Microsoft.EnterpriseManagement.Advisor.ApplicationDependencyMonitorDataSource.dll.
-
-## Using the solution
-
-Use the following information to install and configure the solution.
--- The Wire Data solution acquires data from computers running Windows Server 2012 R2, Windows 8.1, and later operating systems.-- Microsoft .NET Framework 4.0 or later is required on computers where you want to acquire wire data from.-- Add the Wire Data solution to your Log Analytics workspace using the process described in [Add monitoring solutions from the Solutions Gallery](../insights/solutions.md). There is no further configuration required.-- If you want to view wire data for a specific solution, you need to have the solution already added to your workspace.-
-After you have agents installed and you install the solution, the Wire Data 2.0 tile appears in your workspace.
-
-![Wire Data tile](./media/wire-data/wire-data-tile.png)
-

## Using the Wire Data 2.0 solution

In the **Overview** page for your Log Analytics workspace in the Azure portal, click the **Wire Data 2.0** tile to open the Wire Data dashboard. The dashboard includes the blades in the following table. Each blade lists up to 10 items matching that blade's criteria for the specified scope and time range. You can run a log search that returns all records by clicking **See all** at the bottom of the blade or by clicking the blade header.
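As a sketch of the log search mentioned above, the following PowerShell runs a simple query against the workspace using the `Az.OperationalInsights` module; the workspace ID and the `WireData | take 10` query are illustrative placeholders.

```powershell
# Sketch: query Wire Data records in a Log Analytics workspace.
# Assumes the Az.OperationalInsights module is installed and you're signed in with Connect-AzAccount.
$workspaceId = '<log-analytics-workspace-id>'   # placeholder

$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query 'WireData | take 10'
$result.Results | Format-Table
```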
A record with a type of _WireData_ is created for each type of input data. WireD
## Next steps
+- See [Deploy VM insights](./vminsights-enable-overview.md) for requirements and methods to enable monitoring for your virtual machines.
- [Search logs](../logs/log-query-overview.md) to view detailed wire data search records.-
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/overview.md
Azure Monitor can collect data from a [variety of sources](monitor-reference.md)
- **Azure subscription monitoring data**: Data about the operation and management of an Azure subscription, as well as data about the health and operation of Azure itself.
- **Azure tenant monitoring data**: Data about the operation of tenant-level Azure services, such as Azure Active Directory.
-As soon as you create an Azure subscription and start adding resources such as virtual machines and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](data-platform.md) tell you how the resource is performing and the resources that it's consuming.
+As soon as you create an Azure subscription and start adding resources such as virtual machines and web apps, Azure Monitor starts collecting data. [Activity logs](essentials/platform-logs-overview.md) record when resources are created or modified. [Metrics](essentials/data-platform-metrics.md) tell you how the resource is performing and the resources that it's consuming.
[Enable diagnostics](essentials/platform-logs-overview.md) to extend the data you're collecting into the internal operation of the resources. [Add an agent](agents/agents-overview.md) to compute resources to collect telemetry from their guest operating systems.
azure-monitor Vminsights Health Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-enable.md
VM insights guest health allows you to view the health of a virtual machine as d
VM insights guest health has the following limitations in public preview:

- Only Azure virtual machines are currently supported. Azure Arc for servers is not currently supported.
+- Network proxies aren't currently supported.
## Supported operating systems
az deployment group create --name GuestHealthDeployment --resource-group my-reso
## Next steps

-- [Customize monitors enabled by VM insights](vminsights-health-configure.md)
+- [Customize monitors enabled by VM insights](vminsights-health-configure.md)
azure-monitor Vminsights Health Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-health-troubleshoot.md
description: Describes troubleshooting steps that you can take when you have iss
Previously updated : 09/08/2020 Last updated : 02/25/2021 # Troubleshoot VM insights guest health (preview) This article describes troubleshooting steps that you can take when you have issues with VM insights health. +
+## Upgrade available message is still displayed after upgrading guest health
+
+- Verify that the VM is running in global Azure. Azure Arc enabled servers are not yet supported.
+- Verify that the virtual machine's region and operating system version are supported as described in [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md).
+- Verify that the guest health extension installed successfully with a 0 exit code.
+- Verify that the Azure Monitor agent extension is installed successfully.
+- Verify that system-assigned managed identity is enabled for the virtual machine.
+- Verify that no user-assigned managed identities are specified for the virtual machine.
+- For Windows virtual machines, verify that the locale is *US English*. Localization is not currently supported by the Azure Monitor agent.
+- Verify that the virtual machine is not using a network proxy. The Azure Monitor agent does not currently support proxies.
+- Verify that the health extension agent started without errors. If the agent can't start, the agent's state may be corrupt. Delete the contents of the agent state folder and restart the agent.
+ - For Linux: Daemon is *vmGuestHealthAgent*. State folder is */var/opt/vmGuestHealthAgent/**
+ - For Windows: Service is *VM Guest Health agent*. State folder is _%ProgramData%\Microsoft\VMGuestHealthAgent\\*_.
+- Verify the Azure Monitor agent has network connectivity.
+ - From the virtual machine, attempt to ping _<region>.handler.control.monitor.azure.com_. For example, for a virtual machine in westeurope, attempt to reach _westeurope.handler.control.monitor.azure.com:443_ (a connectivity check sketch follows this list).
+- Verify that virtual machine has an association with a data collection rule in the same region as the Log Analytics workspace.
+ - Refer to **Create data collection rule (DCR)** in [Enable Azure Monitor for VMs guest health (preview)](vminsights-health-enable.md) to ensure the structure of the DCR is correct. Pay particular attention to the presence of the *performanceCounters* data source section set up to sample three counters, and the presence of the *inputDataSources* section in the health extension configuration to send counters to the extension.
+- Check the virtual machine for guest health extension errors.
+ - For Linux: Check logs at _/var/log/azure/Microsoft.Azure.Monitor.VirtualMachines.GuestHealthLinuxAgent/*.log_.
+ - For Windows: Check logs at _C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.VirtualMachines.GuestHealthWindowsAgent\{extension version}\*.log_.
+- Check the virtual machine for Azure Monitor agent errors.
+ - For Linux: Check logs at _/var/log/mdsd.*_.
+ - For Windows: Check logs at _C:\WindowsAzure\Resources\*{vmName}.AMADataStore_.
+
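The following is a minimal sketch of the network connectivity check from the list above. It runs on a Windows virtual machine; the region name is a placeholder, and `Test-NetConnection` is used as an alternative to `ping` because it also verifies the TCP port.

```powershell
# Sketch: verify the Azure Monitor agent can reach its regional control endpoint.
# Replace 'westeurope' with the region of your virtual machine.
$endpoint = 'westeurope.handler.control.monitor.azure.com'

# Check DNS resolution first, then TCP connectivity on port 443.
Resolve-DnsName -Name $endpoint
Test-NetConnection -ComputerName $endpoint -Port 443
```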
## Error message that no data is available

![No data](media/vminsights-health-troubleshoot/no-data.png)
This error indicates that the **Microsoft.WorkloadMonitor** resource provider wa
![Bad request](media/vminsights-health-troubleshoot/bad-request.png)
+## Health shows as "unknown" after guest health is enabled.
+
+### Verify that performance counters on Windows nodes are working correctly
+Guest health relies on the agent being able to collect performance counters from the node. The base set of performance counter libraries may become corrupted and may need to be rebuilt. Follow the instructions at [Manually rebuild performance counter library values](/troubleshoot/windows-server/performance/rebuild-performance-counter-library-values) to rebuild the performance counters.
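As a hedged sketch of that procedure, the commands below rebuild the counter registry from the backup store on both the 64-bit and 32-bit sides; run them from an elevated prompt, and treat the linked article as the authoritative set of steps.

```powershell
# Sketch: rebuild the performance counter registry settings from the backup store.
# Run from an elevated prompt; see the linked article for the full procedure.
cd C:\Windows\System32
lodctr /R

cd C:\Windows\SysWOW64
lodctr /R
```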
## Next steps

- [Get an overview of the guest health feature of VM insights](vminsights-health-overview.md)
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na ms.devlang: na Previously updated : 03/19/2021 Last updated : 03/29/2021 # Create an SMB volume for Azure NetApp Files
Access to an SMB volume is managed through permissions.
### Share permissions
-By default, a new volume has the **Everyone / Full Control** share permissions. Members of the Domain Admins group can change the share permissions by using Computer Management on the computer account that is used for the Azure NetApp Files volume.
+By default, a new volume has the **Everyone / Full Control** share permissions. Members of the Domain Admins group can change the share permissions as follows:
-![SMB mount path](../media/azure-netapp-files/smb-mount-path.png)
-![Set share permissions](../media/azure-netapp-files/set-share-permissions.png)
+1. Map the share to a drive (see the sketch after these steps).
+2. Right-click the drive, select **Properties**, then go to the **Security** tab.
+
+[ ![Set share permissions](../media/azure-netapp-files/set-share-permissions.png)](../media/azure-netapp-files/set-share-permissions.png#lightbox)
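As a sketch of step 1 above, the share can be mapped to a drive from a command prompt; the server FQDN and volume name below are placeholders taken from your volume's mount instructions.

```powershell
# Sketch: map the Azure NetApp Files SMB share to a drive letter (placeholders shown).
net use Z: \\<smb-server-fqdn>\<volume-name>

# After adjusting permissions, remove the mapping if it's no longer needed.
net use Z: /delete
```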
### NTFS file and folder permissions
azure-netapp-files Azure Netapp Files Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-faqs.md
na ms.devlang: na Previously updated : 03/25/2021 Last updated : 03/29/2021 # FAQs About Azure NetApp Files
The volume size reported by the SMB client is the maximum size the Azure NetApp
As a best practice, set the maximum tolerance for computer clock synchronization to five minutes. For more information, see [Maximum tolerance for computer clock synchronization](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/jj852172(v=ws.11)).
+### Can I manage `SMB Shares`, `Sessions`, and `Open Files` through Computer Management Console (MMC)?
+
+Management of `SMB Shares`, `Sessions`, and `Open Files` through Computer Management Console (MMC) is currently not supported.
+ ### How can I obtain the IP address of an SMB volume via the portal? Use the **JSON View** link on the volume overview pane, and look for the **startIp** identifier under **properties** -> **mountTargets**.
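If you prefer a scripted alternative to the portal's JSON view, the following is a hedged PowerShell sketch using the `Az.NetAppFiles` module; the resource names are placeholders, and the exact property names on the returned object may vary between module versions.

```powershell
# Sketch: read the mount target details (including the IP address) of an Azure NetApp Files volume.
# All resource names below are placeholders.
$volume = Get-AzNetAppFilesVolume -ResourceGroupName '<resource-group>' `
    -AccountName '<netapp-account>' -PoolName '<capacity-pool>' -Name '<volume-name>'

# The mount target details include the IP address used to access the volume.
$volume.MountTargets | Format-List
```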
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na ms.devlang: na Previously updated : 03/25/2021 Last updated : 03/29/2021 # Solution architectures using Azure NetApp Files
This section provides references to SAP on Azure solutions.
* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 1](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2078737) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 2](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2117130) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 3](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2215948)
+* [SAP Landscape sizing and volume consolidation with Azure NetApp Files](https://techcommunity.microsoft.com/t5/sap-on-microsoft/sap-landscape-sizing-and-volume-consolidation-with-anf/m-p/2145572/highlight/true#M14)
## Azure VMware Solutions
azure-netapp-files Volume Hard Quota Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/volume-hard-quota-guidelines.md
na ms.devlang: na Previously updated : 02/05/2021 Last updated : 03/29/2021 # What changing to volume hard quota means for your Azure NetApp Files service
From the beginning of the service, Azure NetApp Files has been using a capacity-pool provisioning and automatic growth mechanism. Azure NetApp Files volumes are thinly provisioned on an underlaying, customer-provisioned capacity pool of a selected tier and size. Volume sizes (quotas) are used to provide performance and capacity, and the quotas can be adjusted on-the-fly at any time. This behavior means that, currently, the volume quota is a performance lever used to control bandwidth to the volume. Currently, underlaying capacity pools automatically grow when the capacity fills up. > [!IMPORTANT]
-> The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from April 1, 2021 (updated), volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.**
+> The Azure NetApp Files behavior of volume and capacity pool provisioning will change to a *manual* and *controllable* mechanism. **Starting from April 30, 2021 (updated), volume sizes (quota) will manage bandwidth performance, as well as provisioned capacity, and underlying capacity pools will no longer grow automatically.**
## Reasons for the change to volume hard quota
You can submit bugs and feature requests by clicking **New Issue** on the [ANFCa
## Next steps * [Resize a capacity pool or a volume](azure-netapp-files-resize-capacity-pools-or-volumes.md)
-* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
+* [Metrics for Azure NetApp Files](azure-netapp-files-metrics.md)
azure-resource-manager Async Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/async-operations.md
The response body contains the status of the operation:
### Deploy resources (201 with Azure-AsyncOperation)
-This example shows how to determine the status of [deployments operation for deploying resources](/rest/api/resources/deployments/createorupdate) to Azure. The initial request is in the following format:
+This example shows how to determine the status of [deployments operation for deploying resources](/rest/api/resources/resources/deployments/createorupdate) to Azure. The initial request is in the following format:
```HTTP PUT
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/delete-resource-group.md
If you have the required access, but the delete request fails, it may be because
## Next steps * To understand Resource Manager concepts, see [Azure Resource Manager overview](overview.md).
-* For deletion commands, see [PowerShell](/powershell/module/az.resources/Remove-AzResourceGroup), [Azure CLI](/cli/azure/group#az-group-delete), and [REST API](/rest/api/resources/resourcegroups/delete).
+* For deletion commands, see [PowerShell](/powershell/module/az.resources/Remove-AzResourceGroup), [Azure CLI](/cli/azure/group#az-group-delete), and [REST API](/rest/api/resources/resources/resourcegroups/delete).
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/lock-resources.md
az lock delete --ids $lockid
### REST API
-You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks). The REST API enables you to create and delete locks, and retrieve information about existing locks.
+You can lock deployed resources with the [REST API for management locks](/rest/api/resources/managementlocks/managementlocks). The REST API enables you to create and delete locks, and retrieve information about existing locks.
To create a lock, run:
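As a hedged sketch of that create request, the following uses `Invoke-AzRestMethod` to send a PUT to the management locks API at resource group scope; the subscription ID, resource group, lock name, and api-version shown are illustrative.

```powershell
# Sketch: create a CanNotDelete lock on a resource group through the management locks REST API.
# The subscription ID, resource group, lock name, and api-version are placeholders.
$path = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>' +
        '/providers/Microsoft.Authorization/locks/<lock-name>?api-version=2016-09-01'

$body = @{ properties = @{ level = 'CanNotDelete'; notes = 'Protect from accidental deletion' } } | ConvertTo-Json

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```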
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
For illustration purposes, we have only one dependent resource.
## Validate move
-The [validate move operation](/rest/api/resources/resources/validatemoveresources) lets you test your move scenario without actually moving the resources. Use this operation to check if the move will succeed. Validation is automatically called when you send a move request. Use this operation only when you need to predetermine the results. To run this operation, you need the:
+The [validate move operation](/rest/api/resources/resources/resources/moveresources) lets you test your move scenario without actually moving the resources. Use this operation to check if the move will succeed. Validation is automatically called when you send a move request. Use this operation only when you need to predetermine the results. To run this operation, you need the:
* name of the source resource group * resource ID of the target resource group
If you get an error, see [Troubleshoot moving Azure resources to new resource gr
## Use REST API
-To move existing resources to another resource group or subscription, use the [Move resources](/rest/api/resources/Resources/MoveResources) operation.
+To move existing resources to another resource group or subscription, use the [Move resources](/rest/api/resources/resources/resources/moveresources) operation.
```HTTP POST https://management.azure.com/subscriptions/{source-subscription-id}/resourcegroups/{source-resource-group-name}/moveResources?api-version={api-version}
azure-resource-manager Resource Manager Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-manager-personal-data.md
For deployments, Resource Manager retains parameter values and status messages i
To list **deployments** in the history, use:
-* [List By Resource Group](/rest/api/resources/deployments/listbyresourcegroup)
+* [List By Resource Group](/rest/api/resources/resources/deployments/listbyresourcegroup)
* [Get-AzResourceGroupDeployment](/powershell/module/az.resources/Get-AzResourceGroupDeployment) * [az deployment group list](/cli/azure/deployment/group#az_deployment_group_list) To delete **deployments** from the history, use:
-* [Delete](/rest/api/resources/deployments/delete)
+* [Delete](/rest/api/resources/resources/deployments/delete)
* [Remove-AzResourceGroupDeployment](/powershell/module/az.resources/Remove-AzResourceGroupDeployment) * [az deployment group delete](/cli/azure/deployment/group#az_deployment_group_delete)
The name of the resource group persists until you delete the resource group. To
To list **resource groups**, use:
-* [List](/rest/api/resources/resourcegroups/list)
+* [List](/rest/api/resources/resources/resourcegroups/list)
* [Get-AzResourceGroup](/powershell/module/az.resources/Get-AzResourceGroup) * [az group list](/cli/azure/group#az-group-list) To delete **resource groups**, use:
-* [Delete](/rest/api/resources/resourcegroups/delete)
+* [Delete](/rest/api/resources/resources/resourcegroups/delete)
* [Remove-AzResourceGroup](/powershell/module/az.resources/Remove-AzResourceGroup) * [az group delete](/cli/azure/group#az-group-delete)
Tags names and values persist until you delete or modify the tag. To see if you
To list **tags**, use:
-* [List](/rest/api/resources/tags/list)
+* [List](/rest/api/resources/resources/tags/list)
* [Get-AzTag](/powershell/module/az.resources/Get-AzTag) * [az tag list](/cli/azure/tag#az-tag-list) To delete **tags**, use:
-* [Delete](/rest/api/resources/tags/delete)
+* [Delete](/rest/api/resources/resources/tags/delete)
* [Remove-AzTag](/powershell/module/az.resources/Remove-AzTag) * [az tag delete](/cli/azure/tag#az-tag-delete)
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | Entity | Scope | Length | Valid Characters | > | | | | | > | deployments | resource group | 1-64 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
-> | resourcegroups | subscription | 1-90 | Alphanumerics, underscores, parentheses, hyphens, periods, and unicode characters that match the [regex documentation](/rest/api/resources/resourcegroups/createorupdate).<br><br>Can't end with period. |
+> | resourcegroups | subscription | 1-90 | Alphanumerics, underscores, parentheses, hyphens, periods, and unicode characters that match the [regex documentation](/rest/api/resources/resources/resourcegroups/createorupdate).<br><br>Can't end with period. |
> | tagNames | resource | 1-512 | Can't use:<br>`<>%&\?/` | > | tagNames / tagValues | tag name | 1-256 | All characters. | > | templateSpecs | resource group | 1-90 | Alphanumerics, underscores, parentheses, hyphens, and periods. |
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-resources.md
The following template adds the tags from an object to either a resource group o
To work with tags through the Azure REST API, use:
-* [Tags - Create Or Update At Scope](/rest/api/resources/tags/createorupdateatscope) (PUT operation)
-* [Tags - Update At Scope](/rest/api/resources/tags/updateatscope) (PATCH operation)
-* [Tags - Get At Scope](/rest/api/resources/tags/getatscope) (GET operation)
-* [Tags - Delete At Scope](/rest/api/resources/tags/deleteatscope) (DELETE operation)
+* [Tags - Create Or Update At Scope](/rest/api/resources/resources/tags/createorupdateatscope) (PUT operation)
+* [Tags - Update At Scope](/rest/api/resources/resources/tags/updateatscope) (PATCH operation)
+* [Tags - Get At Scope](/rest/api/resources/resources/tags/getatscope) (GET operation)
+* [Tags - Delete At Scope](/rest/api/resources/resources/tags/deleteatscope) (DELETE operation)
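As a hedged sketch of the Get At Scope operation above, the following reads the tags on a subscription with `Invoke-AzRestMethod`; the subscription ID is a placeholder and the api-version is illustrative.

```powershell
# Sketch: read tags at subscription scope (Tags - Get At Scope).
# The subscription ID and api-version are placeholders for illustration.
$scope = '/subscriptions/<subscription-id>'

Invoke-AzRestMethod -Path "$scope/providers/Microsoft.Resources/tags/default?api-version=2021-04-01" -Method GET
```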
## Inherit tags
azure-resource-manager Deploy Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deploy-rest.md
You can either include your template in the request body or link to a file. When
You can target your deployment to a resource group, Azure subscription, management group, or tenant. Depending on the scope of the deployment, you use different commands. -- To deploy to a **resource group**, use [Deployments - Create](/rest/api/resources/deployments/createorupdate). The request is sent to:
+- To deploy to a **resource group**, use [Deployments - Create](/rest/api/resources/resources/deployments/createorupdate). The request is sent to:
```HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01 ``` -- To deploy to a **subscription**, use [Deployments - Create At Subscription Scope](/rest/api/resources/deployments/createorupdateatsubscriptionscope). The request is sent to:
+- To deploy to a **subscription**, use [Deployments - Create At Subscription Scope](/rest/api/resources/resources/deployments/createorupdateatsubscriptionscope). The request is sent to:
```HTTP PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
You can target your deployment to a resource group, Azure subscription, manageme
For more information about subscription level deployments, see [Create resource groups and resources at the subscription level](deploy-to-subscription.md). -- To deploy to a **management group**, use [Deployments - Create At Management Group Scope](/rest/api/resources/deployments/createorupdateatmanagementgroupscope). The request is sent to:
+- To deploy to a **management group**, use [Deployments - Create At Management Group Scope](/rest/api/resources/resources/deployments/createorupdateatmanagementgroupscope). The request is sent to:
```HTTP PUT https://management.azure.com/providers/Microsoft.Management/managementGroups/{groupId}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
You can target your deployment to a resource group, Azure subscription, manageme
For more information about management group level deployments, see [Create resources at the management group level](deploy-to-management-group.md). -- To deploy to a **tenant**, use [Deployments - Create Or Update At Tenant Scope](/rest/api/resources/deployments/createorupdateattenantscope). The request is sent to:
+- To deploy to a **tenant**, use [Deployments - Create Or Update At Tenant Scope](/rest/api/resources/resources/deployments/createorupdateattenantscope). The request is sent to:
```HTTP PUT https://management.azure.com/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
The examples in this article use resource group deployments.
1. Set [common parameters and headers](/rest/api/azure/), including authentication tokens.
-1. If you're deploying to a resource group that doesn't exist, create the resource group. Provide your subscription ID, the name of the new resource group, and location that you need for your solution. For more information, see [Create a resource group](/rest/api/resources/resourcegroups/createorupdate).
+1. If you're deploying to a resource group that doesn't exist, create the resource group. Provide your subscription ID, the name of the new resource group, and location that you need for your solution. For more information, see [Create a resource group](/rest/api/resources/resources/resourcegroups/createorupdate).
```HTTP PUT https://management.azure.com/subscriptions/<YourSubscriptionId>/resourcegroups/<YourResourceGroupName>?api-version=2020-06-01
The examples in this article use resource group deployments.
} ```
-1. To get the status of the template deployment, use [Deployments - Get](/rest/api/resources/deployments/get).
+1. To get the status of the template deployment, use [Deployments - Get](/rest/api/resources/resources/deployments/get).
```HTTP GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/{deploymentName}?api-version=2020-10-01
azure-resource-manager Deployment History Deletions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-history-deletions.md
az feature unregister --namespace Microsoft.Resources --name DisableDeploymentGr
# [REST](#tab/rest)
-For REST API, use [Features - Register](/rest/api/resources/features/register).
+For REST API, use [Features - Register](/rest/api/resources/features/features/register).
```rest POST https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Resources/features/DisableDeploymentGrooming/register?api-version=2015-12-01
To see the current status of your subscription, use:
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Resources/features/DisableDeploymentGrooming/register?api-version=2015-12-01 ```
-To reenable automatic deletions, use [Features - Unregister](/rest/api/resources/features/unregister)
+To reenable automatic deletions, use [Features - Unregister](/rest/api/resources/features/features/unregister)
```rest POST https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Features/providers/Microsoft.Resources/features/DisableDeploymentGrooming/unregister?api-version=2015-12-01
azure-resource-manager Deployment History https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-history.md
az deployment group show --resource-group ExampleGroup --name ExampleDeployment
# [HTTP](#tab/http)
-To list the deployments for a resource group, use the following operation. For the latest API version number to use in the request, see [Deployments - List By Resource Group](/rest/api/resources/deployments/listbyresourcegroup).
+To list the deployments for a resource group, use the following operation. For the latest API version number to use in the request, see [Deployments - List By Resource Group](/rest/api/resources/resources/deployments/listbyresourcegroup).
``` GET https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.Resources/deployments/?api-version={api-version} ```
-To get a specific deployment. use the following operation. For the latest API version number to use in the request, see [Deployments - Get](/rest/api/resources/deployments/get).
+To get a specific deployment. use the following operation. For the latest API version number to use in the request, see [Deployments - Get](/rest/api/resources/resources/deployments/get).
``` GET https://management.azure.com/subscriptions/{subscription-id}/resourcegroups/{resource-group-name}/providers/microsoft.resources/deployments/{deployment-name}?api-version={api-version}
az deployment operation group list --resource-group ExampleGroup --name ExampleD
# [HTTP](#tab/http)
-To get deployment operations, use the following operation. For the latest API version number to use in the request, see [Deployment Operations - List](/rest/api/resources/deploymentoperations/list).
+To get deployment operations, use the following operation. For the latest API version number to use in the request, see [Deployment Operations - List](/rest/api/resources/resources/deploymentoperations/list).
``` GET https://management.azure.com/subscriptions/{subscription-id}/resourcegroups/{resource-group-name}/providers/microsoft.resources/deployments/{deployment-name}/operations?$skiptoken={skiptoken}&api-version={api-version}
azure-resource-manager Export Template Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/export-template-portal.md
To assist with creating Azure Resource Manager templates, you can export a templ
Resource Manager enables you to pick one or more resources for exporting to a template. You can focus on exactly the resources you need in the template.
-This article shows how to export templates through the portal. You can also use [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+This article shows how to export templates through the portal. You can also use [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resources/resourcegroups/exporttemplate).
## Choose the right export option
You can export the template that was used to deploy existing resources. The temp
## Next steps -- Learn how to export templates with [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resourcegroups/exporttemplate).
+- Learn how to export templates with [Azure CLI](../management/manage-resource-groups-cli.md#export-resource-groups-to-templates), [Azure PowerShell](../management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), or [REST API](/rest/api/resources/resources/resourcegroups/exporttemplate).
- To learn the Resource Manager template syntax, see [Understand the structure and syntax of Azure Resource Manager templates](template-syntax.md). - To learn how to develop templates, see the [step-by-step tutorials](../index.yml). - To view the Azure Resource Manager template schemas, see [template reference](/azure/templates/).
azure-resource-manager Template Deploy What If https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/template-deploy-what-if.md
If you want to return the results without colors, open your [Azure CLI configura
For REST API, use:
-* [Deployments - What If](/rest/api/resources/deployments/whatif) for resource group deployments
-* [Deployments - What If At Subscription Scope](/rest/api/resources/deployments/whatifatsubscriptionscope) for subscription deployments
-* [Deployments - What If At Management Group Scope](/rest/api/resources/deployments/whatifatmanagementgroupscope) for management group deployments
-* [Deployments - What If At Tenant Scope](/rest/api/resources/deployments/whatifattenantscope) for tenant deployments.
+* [Deployments - What If](/rest/api/resources/resources/deployments/whatif) for resource group deployments
+* [Deployments - What If At Subscription Scope](/rest/api/resources/resources/deployments/whatifatsubscriptionscope) for subscription deployments
+* [Deployments - What If At Management Group Scope](/rest/api/resources/resources/deployments/whatifatmanagementgroupscope) for management group deployments
+* [Deployments - What If At Tenant Scope](/rest/api/resources/resources/deployments/whatifattenantscope) for tenant deployments.
## Change types
azure-signalr Signalr Howto Troubleshoot Method https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-howto-troubleshoot-method.md
Client side logging experience is exactly the same as when using self-hosted Sig
##### Enable server-side logging for `ASP.NET Core SignalR`
-Server-side logging for `ASP.NET Core SignalR` integrates with the `ILogger` based [logging](/aspnet/core/fundamentals/logging/?tabs=aspnetcore2x&view=aspnetcore-2.1) provided in the `ASP.NET Core` framework. You can enable server-side logging by using `ConfigureLogging`, a sample usage as follows:
+Server-side logging for `ASP.NET Core SignalR` integrates with the `ILogger` based [logging](/aspnet/core/fundamentals/logging/?tabs=aspnetcore2x&preserve-view=true&view=aspnetcore-2.1) provided in the `ASP.NET Core` framework. You can enable server-side logging by using `ConfigureLogging`, a sample usage as follows:
```cs .ConfigureLogging((hostingContext, logging) =>
azure-signalr Signalr Tutorial Build Blazor Server Chat App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-signalr/signalr-tutorial-build-blazor-server-chat-app.md
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
## Publish to Azure
- So far, the Blazor App is working on local SignalR and when deploy to Azure App Service, it's suggested to use [Azure SignalR Service](/aspnet/core/signalr/scale?view=aspnetcore-3.1#azure-signalr-service) which allows for scaling up a Blazor Server app to a large number of concurrent SignalR connections. In addition, the SignalR service's global reach and high-performance data centers significantly aid in reducing latency due to geography.
+ So far, the Blazor app works with local SignalR. When deployed to Azure App Service, it's suggested to use [Azure SignalR Service](/aspnet/core/signalr/scale#azure-signalr-service), which allows for scaling up a Blazor Server app to a large number of concurrent SignalR connections. In addition, the SignalR service's global reach and high-performance data centers significantly aid in reducing latency due to geography.
> [!IMPORTANT]
> In a Blazor Server app, UI state is maintained on the server side, which means server stickiness is required. If there's a single app server, stickiness is ensured by design. However, if there are multiple app servers, there's a chance that client negotiation and connection may go to different servers, which leads to UI errors in the Blazor app. So you need to enable server stickiness as shown below in `appsettings.json`:
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
The service dependency will do the following to enable your app to automatically switch to Azure SignalR Service when on Azure.
- * Update [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration?view=aspnetcore-3.1) to use Azure SignalR Service.
+ * Update [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration) to use Azure SignalR Service.
* Add Azure SignalR Service NuGet package reference. * Update profile properties to save the dependency settings. * Configure secrets store depends on your choice.
From Visual Studio 2019 version 16.2.0, Azure SignalR Service is build-in web ap
} ```
-1. Configure Azure SignalR Service `ConnectionString` either in `appsetting.json` or with [Secret Manager](/aspnet/core/security/app-secrets?tabs=visual-studio&view=aspnetcore-3.1#secret-manager) tool
+1. Configure Azure SignalR Service `ConnectionString` either in `appsetting.json` or with [Secret Manager](/aspnet/core/security/app-secrets?tabs=visual-studio#secret-manager) tool
> [!NOTE]
-> Step 2 can be replaced by using [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration?view=aspnetcore-3.1) to SignalR SDK.
+> Step 2 can be replaced by using [`HostingStartupAssembly`](/aspnet/core/fundamentals/host/platform-specific-configuration) to SignalR SDK.
> > 1. Add configuration to turn on Azure SignalR Service in `appsetting.json` > ```js
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auto-failover-group-overview.md
Previously updated : 12/26/2020 Last updated : 03/26/2021 # Use auto-failover groups to enable transparent and coordinated failover of multiple databases
When performing OLTP operations, use `<fog-name>.database.windows.net` as the se
If you have a logically isolated read-only workload that is tolerant to certain staleness of data, you can use the secondary database in the application. For read-only sessions, use `<fog-name>.secondary.database.windows.net` as the server URL and the connection is automatically directed to the secondary. It is also recommended that you indicate read intent in the connection string by using `ApplicationIntent=ReadOnly`.
+> [!NOTE]
+> In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
+>
+> - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.database.windows.net`.
+> - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.database.windows.net`.
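A minimal connection sketch follows, using the secondary listener endpoint and `ApplicationIntent=ReadOnly` as described above; the failover group name, database, and credentials are placeholders.

```powershell
# Sketch: open a read-only connection through the failover group's secondary listener.
# <fog-name>, <database>, and the credentials are placeholders.
$connectionString = 'Server=tcp:<fog-name>.secondary.database.windows.net,1433;' +
                    'Initial Catalog=<database>;User ID=<user>;Password=<password>;' +
                    'Encrypt=True;ApplicationIntent=ReadOnly;'

$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()
# ... run read-only queries here ...
$connection.Close()
```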
+ ### Preparing for performance degradation A typical Azure application uses multiple Azure services and consists of multiple components. The automated failover of the failover group is triggered based on the state the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the DR region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the DR region and follow these [network security guidelines](#failover-groups-and-network-security).
When performing OLTP operations, use `<fog-name>.zone_id.database.windows.net` a
If you have a logically isolated read-only workload that is tolerant to certain staleness of data, you can use the secondary database in the application. To connect directly to the geo-replicated secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server URL and the connection is made directly to the geo-replicated secondary. > [!NOTE]
-> In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to run read-only query workloads using the capacity of one or more read-only replicas, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
+> In Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
> > - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.<zone_id>.database.windows.net`. > - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.<zone_id>.database.windows.net`.
azure-sql Dns Alias Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/dns-alias-overview.md
The cmdlets used in the code example are the following:
- [Set-AzSqlServerDNSAlias](/powershell/module/az.Sql/Set-azSqlServerDnsAlias): Modifies the server name that the alias is configured to refer to, from server 1 to server 2. - [Remove-AzSqlServerDNSAlias](/powershell/module/az.Sql/Remove-azSqlServerDnsAlias): Remove the DNS alias from server 2, by using the name of the alias.
-## Limitations during preview
+## Limitations
Presently, a DNS alias has the following limitations:
Presently, a DNS alias has the following limitations:
## Next steps -- [PowerShell for DNS Alias to Azure SQL Database](dns-alias-powershell-create.md)
+- [PowerShell for DNS Alias to Azure SQL Database](dns-alias-powershell-create.md)
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
Previously updated : 03/01/2021 Last updated : 03/29/2021 # Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)
After LRS is stopped, either automatically through autocomplete or manually thro
| **2. Start LRS in the cloud**. | You can restart the service with a choice of cmdlets: PowerShell ([start-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/start-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_start cmdlets](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_start)). <br /><br /> Start LRS separately for each database that points to a backup folder on Blob Storage. <br /><br /> After you start the service, it will take backups from the Blob Storage container and start restoring them on SQL Managed Instance.<br /><br /> If you started LRS in continuous mode, after all initially uploaded backups are restored, the service will watch for any new files uploaded to the folder. The service will continuously apply logs based on the log sequence number (LSN) chain until it's stopped. | | **2.1. Monitor the operation's progress**. | You can monitor progress of the restore operation with a choice of cmdlets: PowerShell ([get-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/get-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_show cmdlets](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_show)). | | **2.2. Stop the operation if needed**. | If you need to stop the migration process, you have a choice of cmdlets: PowerShell ([stop-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/stop-azsqlinstancedatabaselogreplay)) or Azure CLI ([az_sql_midb_log_replay_stop](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_stop)). <br /><br /> Stopping the operation will delete the database that you're restoring on SQL Managed Instance. After you stop an operation, you can't resume LRS for a database. You need to restart the migration process from scratch. |
-| **3. Cut over to the cloud when you're ready**. | Stop the application and the workload. Take the last log-tail backup and upload it to Azure Blob Storage.<br /><br /> Complete the cutover by initiating an LRS `complete` operation with a choice of cmdlets: PowerShell ([complete-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay)) or Azure CLI [az_sql_midb_log_replay_complete](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_complete). This operation will stop LRS and cause the database to come online for read and write use on SQL Managed Instance.<br /><br /> Repoint the application connection string from SQL Server to SQL Managed Instance. |
+| **3. Cut over to the cloud when you're ready**. | Stop the application and the workload. Take the last log-tail backup and upload it to Azure Blob Storage.<br /><br /> Complete the cutover by initiating an LRS `complete` operation with a choice of cmdlets: PowerShell ([complete-azsqlinstancedatabaselogreplay](/powershell/module/az.sql/complete-azsqlinstancedatabaselogreplay)) or Azure CLI [az_sql_midb_log_replay_complete](/cli/azure/sql/midb/log-replay#az_sql_midb_log_replay_complete). This operation will stop LRS and cause the database to come online for read and write use on SQL Managed Instance.<br /><br /> Repoint the application connection string from SQL Server to SQL Managed Instance. You will need to orchestrate this step yourself, either through a manual connection string change in your application, or automatically (for example, if your application can read the connection string from a property or a database). A cutover sketch follows this table. |
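As a hedged sketch of the cutover step in the table, the following completes LRS for one database with the PowerShell cmdlet named above; the resource names and last backup file name are placeholders, and parameter names follow the Az.Sql module at the time of writing.

```powershell
# Sketch: complete the Log Replay Service migration for one database (placeholders shown).
Complete-AzSqlInstanceDatabaseLogReplay -ResourceGroupName '<resource-group>' `
    -InstanceName '<managed-instance-name>' `
    -Name '<database-name>' `
    -LastBackupName '<last-log-tail-backup>.bak'
```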
## Requirements for getting started
After you start LRS, use the monitoring cmdlet (`get-azsqlinstancedatabaselogrep
## Next steps - Learn more about [migrating SQL Server to SQL Managed instance](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md). - Learn more about [differences between SQL Server and SQL Managed Instance](transact-sql-tsql-differences-sql-server.md).-- Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
+- Learn more about [best practices to cost and size workloads migrated to Azure](/azure/cloud-adoption-framework/migrate/azure-best-practices/migrate-best-practices-costs).
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
Last updated 02/16/2021
# Configure NSX network components in Azure VMware Solution
-An Azure VMware Solution private cloud comes with NSX-T as a software-defined network (SDDC) by default. It comes pre-provisioned with an NSX-T Tier-0 gateway in Active/Active mode and a default NSX-T Tier-1 gateway in Active/Standby mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
+An Azure VMware Solution private cloud comes with NSX-T as a software-defined network (SDDC) by default. It comes pre-provisioned with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in Active/Standby mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
After the Azure VMware Solution private cloud is deployed, you can configure the necessary NSX-T objects from the Azure VMware Solution console under **Workload Networking**. The console presents the simplified view of NSX-T operations that a VMware administrator needs daily and targeted at users not familiar with NSX-T.
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cdn/cdn-custom-ssl.md
Previously updated : 01/27/2021 Last updated : 03/26/2021 # As a website owner, I want to enable HTTPS on the custom domain of my CDN endpoint so that my users can use my custom domain to access my content securely.
Grant Azure CDN permission to access the certificates (secrets) in your Azure Ke
3. Under Certificate management type, select **Use my own certificate**.
- ![Configure your certificate](./media/cdn-custom-ssl/cdn-configure-your-certificate.png)
+ :::image type="content" source="./media/cdn-custom-ssl/cdn-configure-your-certificate.png" alt-text="Screenshot of how to configure certificate for cdn endpoint.":::
-4. Select a key vault, certificate (secret), and certificate version.
+4. Select a key vault, Certificate/Secret, and Certificate/Secret version.
Azure CDN lists the following information: - The key vault accounts for your subscription ID.
- - The certificates (secrets) under the selected key vault.
- - The available certificate versions.
+ - The certificates/secrets under the selected key vault.
+ - The available certificate/secret versions.
+ > [!NOTE]
+ > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the certificate/secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
+
5. Select **On** to enable HTTPS. 6. When you use your certificate, domain validation isn't required. Continue to [Wait for propagation](#wait-for-propagation).
DigiCert sends a verification email to the following email addresses. Verify tha
* **hostmaster@your-domain-name.com** * **postmaster@your-domain-name.com**
-You should receive an email in a few minutes for you to approve the request. In case you're using a spam filter, add verification@digicert.com to its allow list. If you don't receive an email within 24 hours, contact Microsoft support.
+You should receive an email in a few minutes for you to approve the request. In case you're using a spam filter, add verification@digicert.com to its allowlist. If you don't receive an email within 24 hours, contact Microsoft support.
![Domain validation email](./media/cdn-custom-ssl/domain-validation-email.png)
cloud-services-extended-support Configure Scaling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services-extended-support/configure-scaling.md
Title: Configure scaling for Azure Cloud Services (extended support)
description: How to enable scaling options for Azure Cloud Services (extended support) +
cloud-services Cloud Services How To Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-configure-portal.md
Title: How to configure a cloud service (classic) - Portal | Microsoft Docs
description: Learn how to configure cloud services in Azure. Learn to update the cloud service configuration and configure remote access to role instances. These examples use the Azure portal. + Last updated 10/14/2020
cloud-services Cloud Services How To Scale Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-portal.md
Title: Auto scale a cloud service (classic) in the portal | Microsoft Docs
description: Learn how to use the portal to configure auto scale rules for a cloud service (classic) roles in Azure. + Last updated 10/14/2020
cloud-services Cloud Services How To Scale Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/cloud-services-how-to-scale-powershell.md
Title: Scale an Azure cloud service (classic) in Windows PowerShell | Microsoft
description: (classic) Learn how to use PowerShell to scale a web role or worker role in or out in Azure. + Last updated 10/14/2020
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/resource-health-for-cloud-services.md
Title: Resource Health for Cloud Services (Classic)
description: This article talks about Resource Health Check (RHC) Support for Microsoft Azure Cloud Services (Classic) + Last updated 10/14/2020
cloud-services Schema Cscfg File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-file.md
Title: Azure Cloud Services (classic) Definition Schema (.cscfg File) | Microsof
description: A service configuration (.cscfg) file specifies how many role instances to deploy for each role, configuration values, and certificate thumbprints for a role. + Last updated 10/14/2020
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-networkconfiguration.md
Title: Azure Cloud Services (classic) NetworkConfiguration Schema | Microsoft Do
description: Learn about the child elements of the NetworkConfiguration element of the service configuration file, which specifies Virtual Network and DNS values. + Last updated 10/14/2020
cloud-services Schema Cscfg Role https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-cscfg-role.md
Title: Azure Cloud Services (classic) Role Schema | Microsoft Docs
description: The Role element of a service configuration file specifies how many role instances to deploy for each role, configuration values, and certificate thumbprints. + Last updated 10/14/2020
cloud-services Schema Csdef File https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-file.md
Title: Azure Cloud Services (classic) Definition Schema (.csdef File) | Microsof
description: A service definition (.csdef) file defines a service model for an application, containing available roles, endpoints, and configuration values for the service. + Last updated 10/14/2020
cloud-services Schema Csdef Loadbalancerprobe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-loadbalancerprobe.md
Title: Azure Cloud Services (classic) Def. LoadBalancerProbe Schema | Microsoft
description: The customer defined LoadBalancerProbe is a health probe of endpoints in role instances. It combines with web or worker roles in a service definition file. + Last updated 10/14/2020
cloud-services Schema Csdef Networktrafficrules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-networktrafficrules.md
Title: Azure Cloud Services (classic) Def. NetworkTrafficRules Schema | Microsof
description: Learn about NetworkTrafficRules, which limits the roles that can access the internal endpoints of a role. It combines with roles in a service definition file. + Last updated 10/14/2020
cloud-services Schema Csdef Webrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-webrole.md
Title: Azure Cloud Services (classic) Def. WebRole Schema | Microsoft Docs
description: Azure web role is customized for web application programming supporting ASP.NET, PHP, WCF, and FastCGI. Learn about service definition elements of a web role. + Last updated 10/14/2020
cloud-services Schema Csdef Workerrole https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-services/schema-csdef-workerrole.md
Title: Azure Cloud Services (classic) Def. WorkerRole Schema | Microsoft Docs
description: The Azure worker role is used for generalized development and may perform background processing for a web role. Learn about the Azure worker role schema. + Last updated 10/14/2020
cognitive-services How To Migrate Face Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/how-to-migrate-face-data.md
await DisplayPersonGroup(FaceClientEastAsia, personGroupId);
await IdentifyInPersonGroup(FaceClientEastAsia, personGroupId); await DisplayPersonGroup(FaceClientWestUS, newPersonGroupId);
-// No need to retrain the person group before identification,
+// No need to retrain the PersonGroup before identification,
// training results are copied by snapshot as well. await IdentifyInPersonGroup(FaceClientWestUS, newPersonGroupId); ```
Use the following helper methods:
private static async Task DisplayPersonGroup(IFaceClient client, string personGroupId) { var personGroup = await client.PersonGroup.GetAsync(personGroupId);
- Console.WriteLine("Person Group:");
+ Console.WriteLine("PersonGroup:");
Console.WriteLine(JsonConvert.SerializeObject(personGroup)); // List persons.
cognitive-services Specify Recognition Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/Face-API-How-to-Topics/specify-recognition-model.md
There is no change in the [Face - Identify] API; you only need to specify the mo
## Find similar faces with specified model
-You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the face list with [FaceList - Create] API or [LargeFaceList - Create]. If you do not specify this parameter, the `recognition_01` model is used by default. A face list will always use the recognition model it was created with, and new faces will become associated with this model when they are added to the list; you cannot change this after creation. To see what model a face list is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
+You can also specify a recognition model for similarity search. You can assign the model version with `recognitionModel` when creating the **FaceList** with [FaceList - Create] API or [LargeFaceList - Create]. If you do not specify this parameter, the `recognition_01` model is used by default. A **FaceList** will always use the recognition model it was created with, and new faces will become associated with this model when they are added to the list; you cannot change this after creation. To see what model a **FaceList** is configured with, use the [FaceList - Get] API with the _returnRecognitionModel_ parameter set as **true**.
See the following code example for the .NET client library.
await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04"); ```
-This code creates a face list called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this face list for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
+This code creates a **FaceList** called `My face collection`, using the _recognition_04_ model for feature extraction. When you search this **FaceList** for similar faces to a new detected face, that face must have been detected ([Face - Detect]) using the _recognition_04_ model. As in the previous section, the model needs to be consistent.
There is no change in the [Face - Find Similar] API; you only specify the model version in detection.
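The model-consistency requirement above can be sketched with the .NET client library (Microsoft.Azure.CognitiveServices.Vision.Face). This is only an illustrative sketch: the endpoint, key, **FaceList** ID, and image URL below are placeholders, not values from the original article.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class FindSimilarWithModelExample
{
    static async Task RunAsync()
    {
        // Hypothetical endpoint, key, FaceList ID, and image URL; replace with your own values.
        IFaceClient faceClient = new FaceClient(new ApiKeyServiceClientCredentials("<subscription-key>"))
        {
            Endpoint = "https://<resource-name>.cognitiveservices.azure.com"
        };
        const string faceListId = "my-face-collection";

        // Detect the query face with the same recognition model the FaceList was created with.
        var detectedFaces = await faceClient.Face.DetectWithUrlAsync(
            "https://example.com/query-face.jpg",
            recognitionModel: "recognition_04",
            returnRecognitionModel: true);
        Guid queryFaceId = detectedFaces.First().FaceId.Value;

        // Find Similar itself takes no model parameter; consistency comes from the detection call.
        var similarFaces = await faceClient.Face.FindSimilarAsync(queryFaceId, faceListId: faceListId);
        foreach (var similar in similarFaces)
        {
            Console.WriteLine($"Persisted face {similar.PersistedFaceId}, confidence {similar.Confidence}");
        }
    }
}
```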
cognitive-services Face Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Face/face-resource-container-config.md
The configuration settings in the `CloudAI` section provide container-specific o
### Storage scenario settings
-The Face container stores blob, cache, metadata, and queue data, depending on what's being stored. For example, training indexes and results for a large person group are stored as blob data. The Face container provides two different storage scenarios when interacting with and storing these types of data:
+The Face container stores blob, cache, metadata, and queue data, depending on what's being stored. For example, training indexes and results for a **LargePersonGroup** are stored as blob data. The Face container provides two different storage scenarios when interacting with and storing these types of data:
* Memory All four types of data are stored in memory. They're not distributed, nor are they persistent. If the Face container is stopped or removed, all of the data in storage for that container is destroyed.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
#### New features
-- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing _any_ media format on Windows, Linux and Android. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams). Previously, the SDK only supported a subset of GStreamer supported formats. This gives you the flexibility to use the audio format that is right for your use case.
-- **C++/C#/Java/Objective-C/Python**: Added support to decode compressed TTS/synthesized audio with the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. This can lower the bandwidth needed for your use case. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.propertyid?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.propertyid?view=azure-java-stable), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxpropertyid), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid?view=azure-python).
-- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest#fromWavFileInput_File_), allowing customers to send the path on disk to a wav file to the SDK which the SDK will then recognize. This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
-- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices programmatically. This allows you to list available voices in your application, or programmatically choose from different voices. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-dotnet#methods), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer?view=azure-java-stable#methods), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoices), and [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer?view=azure-python#methods).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. People with hearing impairment can also pick up sounds visually and "lip-read" any speech content. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS.
You can set bookmarks in the input SSML and get the audio offsets for each bookmark. You might use this in your application to take an action when certain words are spoken by text-to-speech. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
-- **Java**: Added support for speaker recognition APIs, allowing you to use speaker recognition from Java. Details [here](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable).
-- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-dotnet), [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-java-stable), [JavaScript](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat?view=azure-node-latest), [Objective-C](https://docs.microsoft.com/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat?view=azure-python).
-- **C++/C#/Java/Python**: Added support on Linux to allow connections to succeed in environments where network access to Certificate Revocation Lists has been blocked. This enables scenarios where you choose to let the client machine only connect to the Azure Speech service. See documentation [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/how-to-configure-openssl-linux).
-- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario so that an app can compare speaker data to an existing voice profile. Details for [C++](https://docs.microsoft.com/cpp/cognitive-services/speech/speakerrecognizer), [C#](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-dotnet), and [Java](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer?view=azure-java-stable). This addresses [GitHub issue #808](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/808).
+- **C++/C#/Java/Python**: Moved to the latest version of GStreamer (1.18.3) to add support for transcribing _any_ media format on Windows, Linux and Android. See documentation [here](/azure/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams). Previously, the SDK only supported a subset of GStreamer supported formats. This gives you the flexibility to use the audio format that is right for your use case.
+- **C++/C#/Java/Objective-C/Python**: Added support to decode compressed TTS/synthesized audio with the SDK. If you set output audio format to PCM and GStreamer is available on your system, the SDK will automatically request compressed audio from the service to save bandwidth and decode the audio on the client. This can lower the bandwidth needed for your use case. You can set `SpeechServiceConnection_SynthEnableCompressedAudioTransmission` to `false` to disable this feature. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#propertyid), [C#](/dotnet/api/microsoft.cognitiveservices.speech.propertyid), [Java](/java/api/com.microsoft.cognitiveservices.speech.propertyid), [Objective-C](/objectivec/cognitive-services/speech/spxpropertyid), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.propertyid).
+- **JavaScript**: Node.js users can now use the [`AudioConfig.fromWavFileInput` API](/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig#fromWavFileInput_File_), allowing customers to send the path on disk to a wav file to the SDK which the SDK will then recognize. This addresses [GitHub issue #252](https://github.com/microsoft/cognitive-services-speech-sdk-js/issues/252).
+- **C++/C#/Java/Objective-C/Python**: Added `GetVoicesAsync()` method for TTS to return all available synthesis voices programmatically. This allows you to list available voices in your application, or programmatically choose from different voices. Details for [C++](/cpp/cognitive-services/speech/speechsynthesizer#getvoicesasync), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesizer#methods), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesizer#getvoices), and [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer#methods).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `VisemeReceived` event for TTS/speech synthesis to return synchronous viseme animation. Visemes enable you to create more natural news broadcast assistants, more interactive gaming and cartoon characters, and more intuitive language teaching videos. People with hearing impairment can also pick up sounds visually and "lip-read" any speech content. See documentation [here](/azure/cognitive-services/speech-service/how-to-speech-synthesis-viseme).
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added `BookmarkReached` event for TTS. You can set bookmarks in the input SSML and get the audio offsets for each bookmark. You might use this in your application to take an action when certain words are spoken by text-to-speech. See documentation [here](/azure/cognitive-services/speech-service/speech-synthesis-markup#bookmark-element).
+<!--
+- **Java**: Added support for speaker recognition APIs, allowing you to use speaker recognition from Java. Details [here](/java/api/com.microsoft.cognitiveservices.speech.speakerrecognizer).
+-->
+- **C++/C#/Java/JavaScript/Objective-C/Python**: Added two new output audio formats with WebM container for TTS (Webm16Khz16BitMonoOpus and Webm24Khz16BitMonoOpus). These are better formats for streaming audio with the Opus codec. Details for [C++](/cpp/cognitive-services/speech/microsoft-cognitiveservices-speech-namespace#speechsynthesisoutputformat), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [Java](/java/api/com.microsoft.cognitiveservices.speech.speechsynthesisoutputformat), [JavaScript](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechsynthesisoutputformat), [Objective-C](/objectivec/cognitive-services/speech/spxspeechsynthesisoutputformat), [Python](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisoutputformat).
+- **C++/C#/Java/Python**: Added support on Linux to allow connections to succeed in environments where network access to Certificate Revocation Lists has been blocked. This enables scenarios where you choose to let the client machine only connect to the Azure Speech service. See documentation [here](/azure/cognitive-services/speech-service/how-to-configure-openssl-linux).
+- **C++/C#/Java**: Added support for retrieving voice profile for speaker recognition scenario so that an app can compare speaker data to an existing voice profile. Details for [C++](/cpp/cognitive-services/speech/speakerrecognizer), [C#](/dotnet/api/microsoft.cognitiveservices.speech.speakerrecognizer), and Java. This addresses [GitHub issue #808](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/808).
- **Objective-C/Swift**: Added support for module framework with umbrella header. This allows you to import the Speech SDK as a module in iOS/Mac Objective-C/Swift apps. This addresses [GitHub issue #452](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/452).
-- **Python**: Added support for [Python 3.9](https://docs.microsoft.com/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
+- **Python**: Added support for [Python 3.9](/azure/cognitive-services/speech-service/quickstarts/setup-platform?pivots=programming-language-python) and dropped support for Python 3.5 per Python's [end-of-life for 3.5](https://devguide.python.org/devcycle/#end-of-life-branches).
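As a rough illustration of the `GetVoicesAsync()` item in the list above, the following C# sketch enumerates synthesis voices with the Speech SDK. The subscription key and region are placeholders, and a null `AudioConfig` is assumed to be acceptable because no audio is synthesized.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class ListVoicesExample
{
    static async Task RunAsync()
    {
        // Hypothetical key and region; replace with your Speech resource values.
        var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>");

        // Pass a null AudioConfig because no audio output is needed just to list voices.
        using var synthesizer = new SpeechSynthesizer(config, (AudioConfig)null);

        // Retrieve all voices; an optional locale argument narrows the list.
        SynthesisVoicesResult result = await synthesizer.GetVoicesAsync();
        foreach (var voice in result.Voices)
        {
            Console.WriteLine($"{voice.ShortName} ({voice.Locale})");
        }
    }
}
```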
#### Improvements
- **Java**: As part of our multi release effort to reduce the Speech SDK's memory usage and disk footprint, Android binaries are now 3% to 5% smaller.
-- **C#**: Improved accuracy, readability and see-also sections of our C# reference documentation [here](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech?view=azure-dotnet) to improve usability of the SDK in C#.
+- **C#**: Improved accuracy, readability and see-also sections of our C# reference documentation [here](/dotnet/api/microsoft.cognitiveservices.speech) to improve usability of the SDK in C#.
- **C++/C#/Java/Objective-C/Python**: Moved microphone and speaker control into separate shared library. This allows use of the SDK in use cases that do not require audio hardware, for example if you don't need a microphone or speaker for your use case on Linux, you don't need to install libasound.

#### Bug fixes
## Speech CLI (also known as SPX): 2021-March release
-**Note**: Get started with the Azure Speech service command line interface (CLI) [here](https://docs.microsoft.com/azure/cognitive-services/speech-service/spx-basics). The CLI enables you to use the Azure Speech service without writing any code.
+**Note**: Get started with the Azure Speech service command line interface (CLI) [here](/azure/cognitive-services/speech-service/spx-basics). The CLI enables you to use the Azure Speech service without writing any code.
#### New features
Download the latest version [here](./spx-basics.md). <br>
### New features * **Neural TTS**
- * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
- * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#neural-voices).
- * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
+ * **Extended to support 18 new languages/locales.** They are Bulgarian, Czech, German (Austria), German (Switzerland), Greek, English (Ireland), French (Switzerland), Hebrew, Croatian, Hungarian, Indonesian, Malay, Romanian, Slovak, Slovenian, Tamil, Telugu and Vietnamese.
+ * **Released 14 new voices to enrich the variety in the existing languages.** See [full language and voice list](language-support.md#neural-voices).
+ * **New speaking styles for `en-US` and `zh-CN` voices.** Jenny, the new voice in English (US), supports chatbot, customer service, and assistant styles. 10 new speaking styles are available with our zh-CN voice, XiaoXiao. In addition, the XiaoXiao neural voice supports `StyleDegree` tuning. See [how to use the speaking styles in SSML](speech-synthesis-markup.md#adjust-speaking-styles).
* **Containers: Neural TTS Container released in public preview with 16 voices available in 14 languages.** Learn more on [how to deploy Speech Containers for Neural TTS](speech-container-howto.md)
Stay healthy!
| `es-MX` | $1.58 | un peso cincuenta y ocho centavos |
| `es-ES` | $1.58 | un dólar cincuenta y ocho centavos |
- * Support for negative currency (like “-325 €” ) in following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
+ * Support for negative currency (like "-325 &euro;") in following locales: `en-US`, `en-GB`, `fr-FR`, `it-IT`, `en-AU`, `en-CA`.
* Improved address reading in `pt-PT`.
* Fixed Natasha (`en-AU`) and Libby (`en-UK`) pronunciation issues on the word "for" and "four".
Stay healthy!
- Windows: Added compressed audio input format support on Windows platform for all the win32 console applications. Details [here](./how-to-use-codec-compressed-audio-input-streams.md).
- JavaScript: Support speech synthesis (text-to-speech) in NodeJS. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript/node/text-to-speech).
- JavaScript: Added new APIs to enable inspection of all send and received messages. Learn more [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/javascript).
-
+
**Bug fixes**
- C#, C++: Fixed an issue so `SendMessageAsync` now sends binary message as binary type. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.sendmessageasync#Microsoft_CognitiveServices_Speech_Connection_SendMessageAsync_System_String_System_Byte___System_UInt32_), [C++](/cpp/cognitive-services/speech/connection).
- C#, C++: Fixed an issue where using `Connection MessageReceived` event may cause crash if `Recognizer` is disposed before `Connection` object. Details for [C#](/dotnet/api/microsoft.cognitiveservices.speech.connection.messagereceived), [C++](/cpp/cognitive-services/speech/connection#messagereceived).
Stay healthy!
- Android: Fixed an [issue](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/563) with x86 Android emulator in Android Studio.
- JavaScript: Added support for Regions in China with the `fromSubscription` API. Details [here](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#fromsubscription-string--string-).
- JavaScript: Add more error information for connection failures from NodeJS.
-
+
**Samples**
- Unity: Intent recognition public sample is fixed, where LUIS json import was failing. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues/369).
- Python: Sample added for `Language ID`. Details [here](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py).
-
+
**Covid19 abridged testing:** Due to working remotely over the last few weeks, we couldn't do as much manual device verification testing as we normally do. For example, we couldn't test microphone input and speaker output on Linux, iOS, and macOS. We haven't made any changes we think could have broken anything on these platforms, and our automated tests all passed. In the unlikely event that we missed something, please let us know on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen).<br> Thank you for your continued support. As always, please post questions or feedback on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/issues?q=is%3Aissue+is%3Aopen) or [Stack Overflow](https://stackoverflow.microsoft.com/questions/tagged/731).<br>
This is a bug fix release and only affecting the native/managed SDK. It is not a
**New Features**
- The Speech SDK supports selection of the input microphone through the `AudioConfig` class. This allows you to stream audio data to the Speech service from a non-default microphone. For more information, see the documentation describing [audio input device selection](how-to-select-audio-input-devices.md). This feature is not yet available from JavaScript.
-- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://aka.ms/csspeech/samples). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).
+- The Speech SDK now supports Unity in a beta version. Provide feedback through the issue section in the [GitHub sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk). This release supports Unity on Windows x86 and x64 (desktop or Universal Windows Platform applications), and Android (ARM32/64, x86). More information is available in our [Unity quickstart](./get-started-speech-to-text.md?pivots=programming-language-csharp&tabs=unity).
- The file `Microsoft.CognitiveServices.Speech.csharp.bindings.dll` (shipped in previous releases) isn't needed anymore. The functionality is now integrated into the core SDK.

**Samples**
-The following new content is available in our [sample repository](https://aka.ms/csspeech/samples):
+The following new content is available in our [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk):
- Additional samples for `AudioConfig.FromMicrophoneInput`.
- Additional Python samples for intent recognition and translation.
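For the input-microphone selection described in the first bullet above, a minimal C# sketch could look like the following. The device endpoint ID, key, and region are placeholders; the audio input device selection article linked above is the place to find real device IDs.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class MicrophoneSelectionExample
{
    static async Task RunAsync()
    {
        // Hypothetical key, region, and device ID; replace with values for your resource and machine.
        var config = SpeechConfig.FromSubscription("<subscription-key>", "<region>");

        // Select a specific capture device instead of the system default microphone.
        using var audioInput = AudioConfig.FromMicrophoneInput("<device-endpoint-id>");
        using var recognizer = new SpeechRecognizer(config, audioInput);

        SpeechRecognitionResult result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine($"Recognized: {result.Text}");
    }
}
```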
This is a JavaScript-only release. No features have been added. The following fi
**Samples**
- Updated and fixed several samples (for example output voices for translation, etc.).
-- Added Node.js samples in the [sample repository](https://aka.ms/csspeech/samples).
+- Added Node.js samples in the [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
## Speech SDK 1.1.0
This is a JavaScript-only release. No features have been added. The following fi
**Samples**
-- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://aka.ms/csspeech/samples).
+- Added C++ and C# samples for pull and push stream usage in the [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
## Speech SDK 1.0.1
Reliability improvements and bug fixes:
- JavaScript: Fixed issues regarding events and their payloads.
- Documentation improvements.
-In our [sample repository](https://aka.ms/csspeech/samples), a new sample for JavaScript was added.
+In our [sample repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk), a new sample for JavaScript was added.
## Cognitive Services Speech SDK 1.0.0: 2018-September release
In our [sample repository](https://aka.ms/csspeech/samples), a new sample for Ja
- On Windows, C# .NET assemblies now are strong named.
- Documentation fix: `Region` is required information to create a recognizer.
-More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://aka.ms/csspeech/samples).
+More samples have been added and are constantly being updated. For the latest set of samples, see the [Speech SDK samples GitHub repository](https://github.com/Azure-Samples/cognitive-services-speech-sdk).
## Cognitive Services Speech SDK 0.2.12733: 2018-May release
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
For the usage with [Speech SDK](speech-sdk.md) and/or [Speech-to-text REST API f
### Text-to-Speech Quotas and limits per Speech resource
In the table below, parameters without an "Adjustable" row are **not** adjustable for all price tiers.
-| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
-|--|--|--|
-| **Max number of Transactions per Second (TPS) for Standard and Neural voices** | 200<sup>4</sup> | 200<sup>4</sup> | |
-| **Concurrent Request limit for Custom voice** | | |
-| Default value | 10 | 10 |
-| Adjustable | No<sup>5</sup> | Yes<sup>5</sup> |
-| **HTTP-specific quotas** | |
-| Max Audio length produced per request | 10 min | 10 min |
-| Max number of distinct `<voice>` tags in SSML | 50 | 50 |
-| **Websocket specific quotas** | | |
-|Max Audio length produced per turn | 10 min | 10 min |
-|Max SSML Message size per turn |64 KB |64 KB |
-| **REST API limit** | 20 requests per minute | 300 requests per minute |
+| Quota | Free (F0)<sup>3</sup> | Standard (S0) |
+|--|--|--|
+| **Max number of Transactions per Second (TPS) for Standard and Neural voices** | 200<sup>4</sup> | 200<sup>4</sup> |
+| **Concurrent Request limit for Custom voice** | | |
+| Default value | 10 | 10 |
+| Adjustable | No<sup>5</sup> | Yes<sup>5</sup> |
+| **HTTP-specific quotas** | | |
+| Max Audio length produced per request | 10 min | 10 min |
+| Max number of distinct `<voice>` tags in SSML | 50 | 50 |
+| **Websocket specific quotas** | | |
+| Max Audio length produced per turn | 10 min | 10 min |
+| Max SSML Message size per turn | 64 KB | 64 KB |
+| **REST API limit** | 20 requests per minute | 300 requests per minute |
<sup>3</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).<br/>
cognitive-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/form-recognizer/concept-custom.md
With Form Recognizer, you can train a model that will extract information from f
At a high level, the steps for building, training, and using your custom model are as follows: > [!div class="nextstepaction"]
-> [1. Assemble your training dataset](build-training-data-set.md#custom-model-input-requirements)
+> [Assemble your training dataset](build-training-data-set.md#custom-model-input-requirements)
Building a custom model begins with establishing your training dataset. You'll need a minimum of five completed forms of the same type for your sample dataset. They can be of different file types and contain both text and handwriting. Your forms must be of the same type of document and follow the [input requirements](build-training-data-set.md#custom-model-input-requirements) for Form Recognizer. &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155; > [!div class="nextstepaction"]
-> [2. Upload your training dataset](build-training-data-set.md#upload-your-training-data)
+> [Upload your training dataset](build-training-data-set.md#upload-your-training-data)
You'll need to upload your training data to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, *see* [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). Use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production. &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155; > [!div class="nextstepaction"]
-> [3. Train your custom model](quickstarts/client-library.md#train-a-custom-model)
+> [Train your custom model](quickstarts/client-library.md#train-a-custom-model)
You can train your model [without](quickstarts/client-library.md#train-a-model-without-labels) or [with](quickstarts/client-library.md#train-a-model-with-labels) labeled data sets. Unlabeled datasets rely solely on the Layout API to detect and identify key information without added human input. Labeled datasets also rely on the Layout API, but supplementary human input is included such as your specific labels and field locations. To use both labeled and unlabeled data, start with at least five completed forms of the same type for the labeled training data and then add unlabeled data to the required data set. &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155; >[!div class="nextstepaction"]
-> [4. Analyze documents with your custom model](quickstarts/client-library.md#analyze-forms-with-a-custom-model)
+> [Analyze documents with your custom model](quickstarts/client-library.md#analyze-forms-with-a-custom-model)
Test your newly trained model by using a form that wasn't part of the training dataset. You can continue to do further training to improve the performance of your custom model. &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&#129155; > [!div class="nextstepaction"]
-> [5. Manage your custom models](quickstarts/client-library.md#manage-custom-models)
+> [Manage your custom models](quickstarts/client-library.md#manage-custom-models)
At any time, you can view a list of all the custom models under your subscription, retrieve information about a specific custom model, or delete a custom model from your account. ## Next steps Learn more about the Form Recognizer client library by exploring our API reference documentation.+ > [!div class="nextstepaction"]
-> [Form Recognizer API reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/5ed8c9843c2794cbb1a96291)
+> [Form Recognizer API reference](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1-preview-3/operations/AnalyzeWithCustomForm)
>
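To make the train-then-analyze flow described above concrete, here is a hedged C# sketch using the Azure.AI.FormRecognizer (3.x) client library. The endpoint, key, SAS URL, and form URL are placeholders, and training without labels is assumed; the quickstarts linked above remain the authoritative steps.

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.AI.FormRecognizer;
using Azure.AI.FormRecognizer.Models;
using Azure.AI.FormRecognizer.Training;

class CustomModelExample
{
    static async Task RunAsync()
    {
        // Hypothetical endpoint and key; replace with your Form Recognizer resource values.
        var endpoint = new Uri("https://<resource-name>.cognitiveservices.azure.com/");
        var credential = new AzureKeyCredential("<key>");

        // Train a custom model from the blob container that holds at least five sample forms.
        var trainingClient = new FormTrainingClient(endpoint, credential);
        TrainingOperation trainingOperation = await trainingClient.StartTrainingAsync(
            new Uri("<SAS URL to your training data container>"),
            useTrainingLabels: false);
        CustomFormModel model = (await trainingOperation.WaitForCompletionAsync()).Value;
        Console.WriteLine($"Trained model {model.ModelId}, status {model.Status}");

        // Analyze a form that was not part of the training set.
        var formClient = new FormRecognizerClient(endpoint, credential);
        RecognizeCustomFormsOperation analyzeOperation = await formClient.StartRecognizeCustomFormsFromUriAsync(
            model.ModelId,
            new Uri("https://example.com/sample-form.pdf"));
        RecognizedFormCollection forms = (await analyzeOperation.WaitForCompletionAsync()).Value;

        foreach (RecognizedForm form in forms)
        {
            foreach (var field in form.Fields)
            {
                Console.WriteLine($"{field.Key}: {field.Value?.ValueData?.Text} (confidence {field.Value?.Confidence})");
            }
        }
    }
}
```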
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-flows.md
# Call flow basics The section below gives an overview of the call flows in Azure Communication Services. Signaling and media flows depend on the types of calls your users are making. Examples of call types include one-to-one VoIP, one-to-one PSTN, and group calls containing a combination of VoIP and PSTN-connected participants. Review [Call types](./voice-video-calling/about-call-types.md).
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/concepts.md
# Chat concepts - Azure Communication Services Chat SDKs can be used to add real-time text chat to your applications. This page summarizes key Chat concepts and capabilities. See the [Communication Services Chat SDK Overview](./sdk-features.md) to learn more about specific SDK languages and capabilities.
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/chat/sdk-features.md
# Chat SDK overview - Azure Communication Services Chat SDKs can be used to add rich, real-time chat to your applications. ## Chat SDK capabilities
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/client-and-server-architecture.md
# Client and Server Architecture --
-<!--
-> [!WARNING]
-> This document is under construction and needs the following items to be addressed:
-> - Need to add security best practices for token management here
-> - Reference docs:
-> - https://docs.microsoft.com/windows/security/threat-protection/security-policy-settings/create-a-token-object
-> - https://docs.microsoft.com/azure/aks/operator-best-practices-identity
-> - https://docs.microsoft.com/cloud-app-security/api-tokens?view=gestures-1.0-->
- Every Azure Communication Services application will have **client applications** that use **services** to facilitate person-to-person connectivity. This page illustrates common architectural elements in a variety of scenarios. ## User access management
communication-services Logging And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/logging-and-diagnostics.md
Communication Services offers three types of logs that you can enable:
| OutgoingMessageLength | The number of characters in the outgoing message. | | IncomingMessageLength | The number of characters in the incoming message. | | DeliveryAttempts | The number of attempts made to deliver this message. |
-| PhoneNumber | The phone number the SMS message is being sent to. |
+| PhoneNumber | The phone number the SMS message is being sent from. |
| SdkType | The SDK type used in the request. | | PlatformType | The platform type used in the request. | | Method | The method used in the request. |
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/notifications.md
# Communication Services notifications -- The Azure Communication Services chat and calling SDKs create a real-time messaging channel that allows signaling messages to be pushed to connected clients in an efficient, reliable manner. This enables you to build rich, real-time communication functionality into your applications without the need to implement complicated HTTP polling logic. However, on mobile applications, this signaling channel only remains connected when your application is active in the foreground. If you want your users to receive incoming calls or chat messages while your application is in the background, you should use push notifications. Push notifications allow you to send information from your application to users' mobile devices. You can use push notifications to show a dialog, play a sound, or display incoming call UI. Azure Communication Services provides integrations with [Azure Event Grid](../../event-grid/overview.md) and [Azure Notification Hubs](../../notification-hubs/notification-hubs-push-notification-overview.md) that enable you to add push notifications to your apps.
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/pricing.md
# Pricing Scenarios -- Prices for Azure Communication Services are generally based on a pay-as-you-go model. The prices in the following examples are for illustrative purposes and may not reflect the latest Azure pricing. ## Voice/Video calling and screen sharing
communication-services Privacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/privacy.md
# Region availability and data residency - Azure Communication Services is committed to helping our customers meet their privacy and personal data requirements. As a developer using Communication Services with a direct relationship with humans using the application, you are potentially a controller of their data. Since Azure Communication Services is storing and encrypting this data at rest on your behalf, we are most likely a processor of this data. This page summarizes how the service retains data and how you can identify, export, and delete this data. ## Data residency
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/reference.md
# Reference documentation overview -- The following table details the available Communication Services packages along with corresponding reference documentation: <!--note that this table also exists here and should be synced: https://github.com/Azure/Communication/blob/master/README.md -->
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/concepts.md
# SMS concepts+ [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
communication-services Plan Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/plan-solution.md
# Phone number types in Azure Communication Services -- > [!IMPORTANT] > Phone number availability is currently restricted to paid Azure subscriptions that have a billing address in the United States and Communication Services resources that have a US data location. Phone numbers cannot be acquired on trial accounts or using Azure free credits. For more information, visit the [subscription eligibility](#azure-subscriptions-eligibility) section of this document.
communication-services Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sdk-features.md
# SMS SDK overview [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
communication-services Telephony Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/telephony-concept.md
# Telephony concepts + [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] Azure Communication Services Calling SDKs can be used to add telephony and PSTN to your applications. This page summarizes key telephony concepts and capabilities. See the [calling library](../../quickstarts/voice-video-calling/calling-client-samples.md) to learn more about specific SDK languages and capabilities.
communication-services About Call Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/about-call-types.md
# Voice and video concepts -- You can use Azure Communication Services to make and receive one to one or group voice and video calls. Your calls can be made to other Internet-connected devices and to plain-old telephones. You can use the Communication Services JavaScript, Android, or iOS SDKs to build applications that allow your users to speak to one another in private conversations or in group discussions. Azure Communication Services supports calls to and from services or Bots. ## Call types in Azure Communication Services
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
# Calling SDK overview -- There are two separate families of Calling SDKs, for *clients* and *services.* Currently available SDKs are intended for end-user experiences: websites and native apps. The Service SDKs are not yet available, and provide access to the raw voice and video data planes, suitable for integration with bots and other services.
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/overview.md
# What is Azure Communication Services? - > [!IMPORTANT] > Applications that you build using Azure Communication Services can talk to Microsoft Teams. To learn more, visit our [Teams Interop](./quickstarts/voice-video-calling/get-started-teams-interop.md) documentation.
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/access-tokens.md
The output of the app describes each action that is completed:
```console Azure Communication Services - Access Tokens Quickstart
-Created an identity: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
+Created an identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
-Issued an access token with 'voip' scope that expires at Fri Nov 27 2020 16:47:05 GMT-0800 (Pacific Standard Time):
+Issued an access token with 'voip' scope that expires at 30/03/21 08:09 09 AM:
+<token signature here>
+
+Created an identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-1ce9-31b4-54b7-a43a0d006a52
+
+Issued an access token with 'voip' scope that expires at 30/03/21 08:09 09 AM:
<token signature here> Successfully revoked all access tokens for identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
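Output along these lines could be produced by a flow like the following sketch with the Azure.Communication.Identity .NET SDK. This is only an illustration: the connection string is a placeholder, and the exact console text and date formatting will differ from the sample output above.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Communication.Identity;

class AccessTokensExample
{
    static async Task RunAsync()
    {
        // Hypothetical connection string; copy the real one from your Communication Services resource.
        var client = new CommunicationIdentityClient("<connection-string>");

        // Create an identity.
        var user = (await client.CreateUserAsync()).Value;
        Console.WriteLine($"Created an identity with ID: {user.Id}");

        // Issue a VoIP-scoped access token for that identity.
        var token = (await client.GetTokenAsync(user, scopes: new[] { CommunicationTokenScope.VoIP })).Value;
        Console.WriteLine($"Issued an access token with 'voip' scope that expires at {token.ExpiresOn}:");
        Console.WriteLine(token.Token);

        // Revoke all tokens for the identity when they are no longer needed.
        await client.RevokeTokensAsync(user);
        Console.WriteLine($"Successfully revoked all access tokens for identity with ID: {user.Id}");
    }
}
```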
communication-services Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/get-started.md
zone_pivot_groups: acs-js-csharp-java-python-swift-android
# Quickstart: Add Chat to your App -- Get started with Azure Communication Services by using the Communication Services Chat SDK to add real-time chat to your application. In this quickstart, we'll use the Chat SDK to create chat threads that allow users to have conversations with one another. To learn more about Chat concepts, visit the [chat conceptual documentation](../../concepts/chat/concepts.md). ::: zone pivot="programming-language-javascript"
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/create-communication-resource.md
zone_pivot_groups: acs-plat-azp-azcli-net-ps
# Quickstart: Create and manage Communication Services resources - Get started with Azure Communication Services by provisioning your first Communication Services resource. Communication services resources can be provisioned through the [Azure portal](https://portal.azure.com) or with the .NET management SDK. The management SDK and the Azure portal allow you to create, configure, update and delete your resources and interface with [Azure Resource Manager](../../azure-resource-manager/management/overview.md), Azure's deployment and management service. All functionality available in the SDKs is available in the Azure portal.
communication-services Getting Started With Teams Embed https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/meeting/getting-started-with-teams-embed.md
Title: Quickstart - Add joining a teams meeting to your app
+ Title: Quickstart - Add joining a Teams meeting to your app
-description: In this quickstart, you'll learn how to add join teams meeting capabilities to your app using Azure Communication Services.
+description: In this quickstart, you'll learn how to add join Teams meeting capabilities to your app using Azure Communication Services.
Last updated 01/25/2021
zone_pivot_groups: acs-plat-ios-android
-# Quickstart: Add joining a teams meeting to your app
+# Quickstart: Add joining a Teams meeting to your app
[!INCLUDE [Public Preview Notice](../../includes/public-preview-include.md)]
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/get-phone-number.md
zone_pivot_groups: acs-azp-java-net-python-csharp-js
[!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)]
-Get started with Azure Communication Services by using the Azure portal or the Communication Services Phone Numbers Client Library to manage telephone numbers.
- ::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end
communication-services Handle Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/handle-sms-events.md
# Quickstart: Handle SMS events for Delivery Reports and Inbound Messages -- [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] Get started with Azure Communication Services by using Azure Event Grid to handle Communication Services SMS events.
communication-services Port Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/port-phone-number.md
+
+ Title: Quickstart - Port a phone number into Azure Communication Services
+description: Learn how to port a phone number into your Communication Services resource
+++++ Last updated : 03/20/2021+++++
+# Quickstart: Port a phone number
++
+Get started with Azure Communication Services by porting your phone number into your Azure Communication Services resource. Toll-free and geographic numbers based in the United States are eligible for porting. For more information about phone number types, visit the [phone number conceptual documentation](../../concepts/telephony-sms/plan-solution.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [An active Communication Services resource.](../create-communication-resource.md)
+
+## Gather your Azure resource details
+
+Before initiating a port request, navigate to the Azure portal and select your Communication Services resource. With the `Overview` pane selected, click on the `JSON View` link in the upper right-hand corner:
++
+Record your resource's **Azure ID** and **Immutable Resource ID**:
++
+## Initiate the port request
+
+Toll-free and geographic numbers based in the United States are eligible for porting. Use one of the following forms to submit your port request:
+
+- For toll-free numbers: [Toll-free number port request](https://aka.ms/acs-port-form-tollfree)
+- For geographic numbers based in the US: [Geographic number port request](https://aka.ms/acs-port-form-geographic)
+
+When you've completed the port request form, send it to acsporting@microsoft.com. Please ensure that your email subject line begins with "ACS Port-In Request".
+
+## Next steps
+
+In this quickstart you learned how to:
+
+> [!div class="checklist"]
+> * Acquire your Communication Services resource metadata
+> * Submit a request to port your phone number
+
+> [!div class="nextstepaction"]
+> [Send an SMS](../telephony-sms/send.md)
+> [Get started with calling](../voice-video-calling/getting-started-with-calling.md)
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/telephony-sms/send.md
zone_pivot_groups: acs-js-csharp-java-python
# Quickstart: Send an SMS message -- [!INCLUDE [Regional Availability Notice](../../includes/regional-availability-include.md)] > [!IMPORTANT]
communication-services Calling Client Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/calling-client-samples.md
zone_pivot_groups: acs-plat-web-ios-android
# Quickstart: Use the Communication Services Calling SDK -- Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app. ::: zone pivot="platform-web"
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
zone_pivot_groups: acs-plat-web-ios-android
# Quickstart: Add voice calling to your app -- Get started with Azure Communication Services by using the Communication Services Calling SDK to add voice and video calling to your app. [!INCLUDE [Emergency Calling Notice](../../includes/emergency-calling-notice-include.md)]
communication-services Pstn Call https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/pstn-call.md
zone_pivot_groups: acs-plat-web-ios-android
# Quickstart: Call To Phone -- Get started with Azure Communication Services by using the Communication Services Calling SDK to add PSTN calling to your app. ::: zone pivot="platform-web"
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/chat-hero-sample.md
# Get started with the group chat hero sample --
-<!-
-> [!WARNING]
-> links to our Hero Sample repo need to be updated when the sample is publicly available.
-->- > [!IMPORTANT] > [This sample is available on GitHub.](https://github.com/Azure-Samples/communication-services-web-chat-hero)
communication-services Web Calling Sample https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/samples/web-calling-sample.md
This sample was built for developers and makes it very easy for you to get start
## Get started with the web calling sample -- > [!IMPORTANT] > [This sample is available on Github.](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/).
communication-services Building App Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/building-app-start.md
# Tutorial: Prepare a web app for Azure Communication Services (Node.js) - You can use Azure Communication Services to add real-time communications to your applications. In this tutorial, you'll learn how to set up a web application that supports Azure Communication Services. This is an introductory tutorial for new developers who want to get started with real-time communications. By the end of this tutorial, you'll have a baseline web application that's configured with Azure Communication Services SDKs. You can then use that application to begin building your real-time communications solution.
communication-services Hmac Header Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/hmac-header-tutorial.md
In this tutorial, you'll learn how to sign an HTTP request with an HMAC signature. -- [!INCLUDE [Sign an HTTP request C#](./includes/hmac-header-csharp.md)] ## Clean up resources
communication-services Trusted Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/tutorials/trusted-service-tutorial.md
# Build a trusted authentication service using Azure Functions -- [!INCLUDE [Trusted Service JavaScript](./includes/trusted-service-js.md)] ## Clean up resources
confidential-computing Confidential Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-containers.md
Last updated 2/11/2020 + # Confidential Containers
Have questions with your implementation or want to become an enabler? Send an em
[DCsv2 Virtual Machines](virtual-machine-solutions.md)
-[Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
+[Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md)
confidential-computing Confidential Nodes Aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-faq.md
Title: Frequently asked questions for confidential nodes support on Azure Kubern
description: Find answers to some of the common questions about Azure Kubernetes Service (AKS) & Azure Confidential Computing (ACC) nodes support. + Last updated 02/09/2020
confidential-computing Confidential Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-get-started.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster by using Az
description: In this quickstart, you will learn to create an AKS cluster with confidential nodes and deploy a hello world app using the Azure CLI. + Last updated 03/18/2020
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-aks-overview.md
description: Confidential computing nodes on AKS
+ Last updated 2/08/2021
confidential-computing Confidential Nodes Out Of Proc Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/confidential-nodes-out-of-proc-attestation.md
Title: Out-of-proc attestation support with Intel SGX quote helper Daemonset on Azure (preview) description: DaemonSet for generating the quote outside of the SGX application process. This article explains how the out-of-proc attestation facility is provided for confidential workloads running inside a container. + Last updated 2/12/2021
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/confidential-computing/enclave-aware-containers.md
Title: Enclave aware containers on Azure
description: enclave ready application containers support on Azure Kubernetes Service (AKS) + Last updated 9/22/2020
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-synapse-link.md
Azure Synapse Link is available for Azure Cosmos DB SQL API containers or for Az
The following links show how to enable Synapse Link by using Azure CLI:
-* [Create a new Azure Cosmos DB account with Synapse Link enabled](https://docs.microsoft.com/cli/azure/cosmosdb?view=azure-cli-latest#az_cosmosdb_create-optional-parameters&preserve-view=true)
-* [Update an existing Azure Cosmos DB account to enable Synapse Link](https://docs.microsoft.com/cli/azure/cosmosdb?view=azure-cli-latest#az_cosmosdb_update-optional-parameters&preserve-view=true)
+* [Create a new Azure Cosmos DB account with Synapse Link enabled](/cli/azure/cosmosdb#az_cosmosdb_create-optional-parameters)
+* [Update an existing Azure Cosmos DB account to enable Synapse Link](/cli/azure/cosmosdb#az_cosmosdb_update-optional-parameters)
### PowerShell
-* [Create a new Azure Cosmos DB account with Synapse Link enabled](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbaccount?view=azps-5.5.0#description&preserve-view=true)
-* [Update an existing Azure Cosmos DB account to enable Synapse Link](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbaccount?view=azps-5.5.0&preserve-view=true)
+* [Create a new Azure Cosmos DB account with Synapse Link enabled](/powershell/module/az.cosmosdb/new-azcosmosdbaccount#description)
+* [Update an existing Azure Cosmos DB account to enable Synapse Link](/powershell/module/az.cosmosdb/update-azcosmosdbaccount)
The following links show how to enable Synapse Link by using PowerShell:
except exceptions.CosmosResourceExistsError:
The following links show how to create analytical store enabled containers by using Azure CLI:
-* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/cli/azure/cosmosdb/mongodb/collection?view=azure-cli-latest#az_cosmosdb_mongodb_collection_create-examples&preserve-view=true)
-* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_create&preserve-view=true)
+* [Azure Cosmos DB API for Mongo DB](/cli/azure/cosmosdb/mongodb/collection#az_cosmosdb_mongodb_collection_create-examples)
+* [Azure Cosmos DB SQL API](/cli/azure/cosmosdb/sql/container#az_cosmosdb_sql_container_create)
### PowerShell
The following links show how to create analytical store enabled containers by using PowerShell:
-* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection?view=azps-5.5.0#description&preserve-view=true)
-* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_create&preserve-view=true)
+* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/new-azcosmosdbmongodbcollection#description)
+* [Azure Cosmos DB SQL API](/cli/azure/cosmosdb/sql/container#az_cosmosdb_sql_container_create)
## <a id="update-analytical-ttl"></a> Optional - Update the analytical store time to live
Currently not supported.
The following links show how to update a container's analytical TTL by using Azure CLI; a minimal example follows the links:
-* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/cli/azure/cosmosdb/mongodb/collection?view=azure-cli-latest#az_cosmosdb_mongodb_collection_update&preserve-view=true)
-* [Azure Cosmos DB SQL API](https://docs.microsoft.com/cli/azure/cosmosdb/sql/container?view=azure-cli-latest#az_cosmosdb_sql_container_update&preserve-view=true)
+* [Azure Cosmos DB API for Mongo DB](/cli/azure/cosmosdb/mongodb/collection#az_cosmosdb_mongodb_collection_update)
+* [Azure Cosmos DB SQL API](/cli/azure/cosmosdb/sql/container#az_cosmosdb_sql_container_update)
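Continuing the sketch with the same hypothetical names, updating the analytical TTL on an existing container might look like this:

```azurecli
# Change the analytical store TTL to 10,000 seconds on an existing container
az cosmosdb sql container update --resource-group my-rg --account-name my-cosmos-account \
    --database-name my-database --name my-container \
    --analytical-storage-ttl 10000
```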
### PowerShell The following links show how to update a container's analytical TTL by using PowerShell:
-* [Azure Cosmos DB API for Mongo DB](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection?view=azps-5.5.0&preserve-view=true)
-* [Azure Cosmos DB SQL API](https://docs.microsoft.com/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer?view=azps-5.5.0&preserve-view=true)
+* [Azure Cosmos DB API for Mongo DB](/powershell/module/az.cosmosdb/update-azcosmosdbmongodbcollection)
+* [Azure Cosmos DB SQL API](/powershell/module/az.cosmosdb/update-azcosmosdbsqlcontainer)
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace
cosmos-db Estimate Ru With Capacity Planner https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/estimate-ru-with-capacity-planner.md
Previously updated : 07/30/2019 Last updated : 03/29/2021
The capacity planner can be used in two modes.
|**Mode** |**Description** | |||
-|Basic|Provides a quick, high-level RU/s and cost estimate. This mode assumes the default Azure Cosmos DB settings for indexing policy, consistency, and other parameters. <br/><br/>Use basic mode for a quick, high-level estimate when you are evaluating a potential workload to run on Azure Cosmos DB.|
-|Advanced|Provides a more detailed RU/s and cost estimate, with the ability to tune additional settings, such as indexing policy, consistency level, and other parameters that affect the cost and throughput. <br/><br/>Use advanced mode when you are estimating RU/s for a new project or want a more detailed estimate. |
+|Basic|Provides a quick, high-level RU/s and cost estimate. This mode assumes the default Azure Cosmos DB settings for indexing policy, consistency, and other parameters. <br/><br/>Use basic mode for a quick, high-level estimate when you are evaluating a potential workload to run on Azure Cosmos DB. To learn more, see how to [estimate cost with basic mode](#basic-mode).|
+|Advanced|Provides a more detailed RU/s and cost estimate, with the ability to tune additional settings, such as indexing policy, consistency level, and other parameters that affect the cost and throughput. <br/><br/>Use advanced mode when you are estimating RU/s for a new project or want a more detailed estimate. To learn more, see how to [estimate cost with advanced mode](#advanced-mode).|
-
-## Estimate provisioned throughput and cost using basic mode
-To get a quick estimate for your workload using the basic mode, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter in the following parameters based on your workload:
+## <a id="basic-mode"></a>Estimate provisioned throughput and cost using basic mode
+To get a quick estimate for your workload using the basic mode, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/). Enter the following parameters based on your workload:
|**Input** |**Description** | |||
+| API |Choose the API type of your account. |
|Number of regions|Azure Cosmos DB is available in all Azure regions. Select the number of regions required for your workload. You can associate any number of regions with your Cosmos account. See [global distribution](distribute-data-globally.md) in Azure Cosmos DB for more details.| |Multi-region writes|If you enable [multi-region writes](distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](optimize-cost-regions.md) article.|
-|Total data stored (per region)|Total estimated data stored in GB in a single region.|
+|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.|
+|Total data stored in analytical store | This option is shown if you turn **On** the **Use Analytical Store** option. It represents the total estimated data stored (GB) in the analytical store in a single region. |
|Item size|The estimated size of the data item (e.g. document), ranging from 1 KB to 2 MB. |
-|Reads/sec per region|Number of reads expected per second. |
-|Writes/sec per region|Number of writes expected per second. |
+|Reads/sec per region|Number of read operations expected per second. |
+|Creates/sec per region|Number of create operations expected per second. |
+|Updates/sec per region|Number of update operations expected per second. |
+|Deletes/sec per region|Number of delete operations expected per second. |
-After filling the required details, select **Calculate**. The **Cost Estimate** tab shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link in this tab to get the breakdown of the throughput required for read and write requests. Each time you change the value of any field, select **Calculate** to re-calculate the estimated cost.
+After filling in the required details, select **Calculate**. The **Cost Estimate** tab shows the total cost for storage and provisioned throughput. You can expand the **Show Details** link in this tab to get the breakdown of the throughput required for different CRUD requests. Each time you change the value of any field, select **Calculate** to recalculate the estimated cost.
-## Estimate provisioned throughput and cost using advanced mode
+## <a id="advanced-mode"></a>Estimate provisioned throughput and cost using advanced mode
-Advanced mode allows you to provide more settings that impact the RU/s estimate. To use this option, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and sign in to the tool with an account you use for Azure. The sign-in option is available at the right-hand corner.
+Advanced mode allows you to provide more settings that impact the RU/s estimate. To use this option, navigate to the [capacity planner](https://cosmos.azure.com/capacitycalculator/) and **sign in** to the tool with an account you use for Azure. The sign-in option is available at the right-hand corner.
-After you sign in, you can see additional fields compared to the fields in basic mode. Enter the additional parameters based on your workload.
+After you sign in, you can see additional fields compared to the fields in basic mode. Enter the additional parameters based on your workload.
|**Input** |**Description** | |||
After you sign in, you can see additional fields compared to the fields in basic
|Multi-region writes|If you enable [multi-region writes](distribute-data-globally.md#key-benefits-of-global-distribution), your application can read and write to any Azure region. If you disable multi-region writes, your application can write data to a single region. <br/><br/> Enable multi-region writes if you expect to have an active-active workload that requires low latency writes in different regions. For example, an IOT workload that writes data to the database at high volumes in different regions. <br/><br/> Multi-region writes guarantees 99.999% read and write availability. Multi-region writes require more throughput when compared to the single write regions. To learn more, see [how RUs are different for single and multiple-write regions](optimize-cost-regions.md) article.| |Default consistency|Azure Cosmos DB supports 5 consistency levels, to allow developers to balance the tradeoff between consistency, availability, and latency tradeoffs. To learn more, see the [consistency levels](consistency-levels.md) article. <br/><br/> By default, Azure Cosmos DB uses session consistency, which guarantees the ability to read your own writes in a session. <br/><br/> Choosing strong or bounded staleness will require double the required RU/s for reads, when compared to session, consistent prefix, and eventual consistency. Strong consistency with multi-region writes is not supported and will automatically default to single-region writes with strong consistency. | |Indexing policy|By default, Azure Cosmos DB [indexes all properties](index-policy.md) in all items for flexible and efficient queries (maps to the **Automatic** indexing policy). <br/><br/> If you choose **off**, none of the properties are indexed. This results in the lowest RU charge for writes. Select **off** policy if you expect to only do [point reads](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) (key value lookups) and/or writes, and no queries. <br/><br/> Custom indexing policy allows you to include or exclude specific properties from the index for lower write throughput and storage. To learn more, see [indexing policy](index-overview.md) and [sample indexing policies](how-to-manage-indexing-policy.md#indexing-policy-examples) articles.|
-|Total data stored (per region)|Total estimated data stored in GB in a single region.|
+|Total data stored in transactional store |Total estimated data stored (GB) in the transactional store in a single region.|
+|Total data stored in analytical store | This option is shown if you turn **On** the **Use Analytical Store** option. It represents the total estimated data stored (GB) in the analytical store in a single region. |
|Workload mode|Select **Steady** if your workload volume is constant. <br/><br/> Select **Variable** if your workload volume changes over time. For example, during a specific day or a month. <br/><br/> The following settings are available if you choose the variable workload option:<ul><li>Percentage of time at peak: Percentage of time in a month where your workload requires peak (highest) throughput. <br/><br/> For example, if you have a workload that has high activity during 9am - 6pm weekday business hours, then the percentage of time at peak is: 45 hours at peak / 730 hours / month = ~6%.<br/><br/></li><li>Reads/sec per region at peak - Number of reads expected per second at peak.</li><li>Writes/sec per region at peak - Number of writes expected per second at peak.</li><li>Reads/sec per region off peak - Number of reads expected per second during off peak.</li><li>Writes/sec per region off peak - Number of writes expected per second during off peak.</li></ul>With peak and off-peak intervals, you can optimize your cost by [programmatically scaling your provisioned throughput](set-throughput.md#update-throughput-on-a-database-or-a-container) up and down accordingly.| |Item size|The size of the data item (e.g. document), ranging from 1 KB to 2 MB. <br/><br/>You can also **Upload sample (JSON)** document for a more accurate estimate.<br/><br/>If your workload has multiple types of items (with different JSON content) in the same container, you can upload multiple JSON documents and get the estimate. Use the **Add new item** button to add multiple sample JSON documents.|
+|Reads/sec per region|Number of read operations expected per second. |
+|Creates/sec per region|Number of create operations expected per second. |
+|Updates/sec per region|Number of update operations expected per second. |
+|Deletes/sec per region|Number of delete operations expected per second. |
-You can also use the **Save Estimate** button to download a CSV file containing the current estimate.
+You can also use the **Save Estimate** button to download a CSV file containing the current estimate.
The prices shown in the Azure Cosmos DB capacity planner are estimates based on the public pricing rates for throughput and storage. All prices are shown in US dollars. Refer to the [Azure Cosmos DB pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to see all rates by region.
cosmos-db How To Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-configure-private-endpoints.md
description: Learn how to set up Azure Private Link to access an Azure Cosmos ac
Previously updated : 03/02/2021 Last updated : 03/26/2021
Use the following steps to create a private endpoint for an existing Azure Cosmo
| Virtual network| Select your virtual network. | | Subnet | Select your subnet. | |**Private DNS Integration**||
- |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. |
+ |Integrate with private DNS zone |Select **Yes**. <br><br/> To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. <br><br/> When you select **Yes** for this option, a private DNS zone group is also created. A DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to automatically update the private DNS zone when there is an update to the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated. |
|Private DNS Zone |Select **privatelink.documents.azure.com**. <br><br/> The private DNS zone is determined automatically. You can't change it by using the Azure portal.| |||
Use the following steps to create a private endpoint for an existing Azure Cosmo
When you have approved Private Link for an Azure Cosmos account, in the Azure portal, the **All networks** option in the **Firewall and virtual networks** pane is unavailable.
+## <a id="private-zone-name-mapping"></a>API types and private zone names
+ The following table shows the mapping between different Azure Cosmos account API types, supported sub-resources, and the corresponding private zone names. You can also access the Gremlin and Table API accounts through the SQL API, so there are two entries for these APIs. |Azure Cosmos account API type |Supported sub-resources (or group IDs) |Private zone name |
After you create the private endpoint, you can integrate it with a private DNS z
```azurepowershell-interactive Import-Module Az.PrivateDns+
+# Zone name differs based on the API type and group ID you are using.
$zoneName = "privatelink.documents.azure.com" $zone = New-AzPrivateDnsZone -ResourceGroupName $ResourceGroupName ` -Name $zoneName
$pe = Get-AzPrivateEndpoint -Name $PrivateEndpointName `
$networkInterface = Get-AzResource -ResourceId $pe.NetworkInterfaces[0].Id ` -ApiVersion "2019-04-01"
-
-foreach ($ipconfig in $networkInterface.properties.ipConfigurations) {
-foreach ($fqdn in $ipconfig.properties.privateLinkConnectionProperties.fqdns) {
-Write-Host "$($ipconfig.properties.privateIPAddress) $($fqdn)"
-$recordName = $fqdn.split('.',2)[0]
-$dnsZone = $fqdn.split('.',2)[1]
-New-AzPrivateDnsRecordSet -Name $recordName `
- -RecordType A -ZoneName $zoneName `
- -ResourceGroupName $ResourceGroupName -Ttl 600 `
- -PrivateDnsRecords (New-AzPrivateDnsRecordConfig `
- -IPv4Address $ipconfig.properties.privateIPAddress)
-}
-}
+
+# Create DNS configuration
+
+$PrivateDnsZoneId = $zone.ResourceId
+
+$config = New-AzPrivateDnsZoneConfig -Name $zoneName `
+    -PrivateDnsZoneId $PrivateDnsZoneId
+
+## Create a DNS zone group
+New-AzPrivateDnsZoneGroup -ResourceGroupName $ResourceGroupName `
+    -PrivateEndpointName $PrivateEndpointName `
+    -Name "MyPrivateZoneGroup" `
+    -PrivateDnsZoneConfig $config
``` ### Fetch the private IP addresses
az network private-endpoint create \
After you create the private endpoint, you can integrate it with a private DNS zone by using the following Azure CLI script: ```azurecli-interactive
+#Zone name differs based on the API type and group ID you are using.
zoneName="privatelink.documents.azure.com" az network private-dns zone create --resource-group $ResourceGroupName \
az network private-dns link vnet create --resource-group $ResourceGroupName \
--virtual-network $VNetName \ --registration-enabled false
-#Query for the network interface ID
-networkInterfaceId=$(az network private-endpoint show --name $PrivateEndpointName --resource-group $ResourceGroupName --query 'networkInterfaces[0].id' -o tsv)
-
-# Copy the content for privateIPAddress and FQDN matching the Azure Cosmos account
-az resource show --ids $networkInterfaceId --api-version 2019-04-01 -o json
-
-#Create DNS records
-az network private-dns record-set a create --name recordSet1 --zone-name privatelink.documents.azure.com --resource-group $ResourceGroupName
-az network private-dns record-set a add-record --record-set-name recordSet2 --zone-name privatelink.documents.azure.com --resource-group $ResourceGroupName -a <Private IP Address>
+#Create a DNS zone group
+az network private-endpoint dns-zone-group create \
+ --resource-group $ResourceGroupName \
+ --endpoint-name $PrivateEndpointName \
+ --name "MyPrivateZoneGroup" \
+ --private-dns-zone $zoneName \
+ --zone-name "myzone"
``` ## Create a private endpoint by using a Resource Manager template
After the template is deployed successfully, you can see an output similar to wh
After the template is deployed, the private IP addresses are reserved within the subnet. The firewall rule of the Azure Cosmos account is configured to accept connections from the private endpoint only.
-### Integrate the private endpoint with a Private DNS Zone
+### Integrate the private endpoint with a Private DNS zone
Use the following code to create a Resource Manager template named "PrivateZone_template.json." This template creates a private DNS zone for an existing Azure Cosmos SQL API account in an existing virtual network.
Use the following code to create a Resource Manager template named "PrivateZone_
} ```
-Use the following code to create a Resource Manager template named "PrivateZoneRecords_template.json."
+**Define the parameters file for the template**
+
+Create the following two parameters files for the template. Create "PrivateZone_parameters.json" with the following code:
```json {
- "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0", "parameters": {
- "DNSRecordName": {
- "type": "string"
+ "privateZoneName": {
+ "value": ""
},
- "IPAddress": {
- "type":"string"
- }
- },
- "resources": [
- {
- "type": "Microsoft.Network/privateDnsZones/A",
- "apiVersion": "2018-09-01",
- "name": "[parameters('DNSRecordName')]",
- "properties": {
- "ttl": 300,
- "aRecords": [
- {
- "ipv4Address": "[parameters('IPAddress')]"
- }
- ]
- }
- }
- ]
+ "VNetId": {
+ "value": ""
+ }
+ }
} ```
-**Define the parameters file for the template**
-
-Create the following two parameters file for the template. Create the "PrivateZone_parameters.json." with the following code:
+Use the following code to create a Resource Manager template named "PrivateZoneGroup_template.json." This template creates a private DNS zone group for an existing Azure Cosmos SQL API account in an existing virtual network.
```json {
- "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0", "parameters": { "privateZoneName": {
- "value": ""
+ "type": "string"
},
- "VNetId": {
- "value": ""
+    "PrivateEndpointDnsGroupName": {
+      "type": "string"
+    },
+    "privateEndpointName":{
+      "type": "string"
+    }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups",
+ "apiVersion": "2020-06-01",
+      "name": "[concat(parameters('privateEndpointName'), '/', parameters('PrivateEndpointDnsGroupName'))]",
+ "location": "global",
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/privateDnsZones', parameters('privateZoneName'))]",
+        "[parameters('privateEndpointName')]"
+ ],
+ "properties": {
+ "privateDnsZoneConfigs": [
+ {
+ "name": "config1",
+ "properties": {
+ "privateDnsZoneId": "[resourceId('Microsoft.Network/privateDnsZones', parameters('privateZoneName'))]"
+ }
+ }
+ ]
+ }
}
- }
+ ]
} ```
-Create the "PrivateZoneRecords_parameters.json." with the following code:
+**Define the parameters file for the template**
+
+Create the following parameters file for the template. Create "PrivateZoneGroup_parameters.json" with the following code:
```json { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": {
- "DNSRecordName": {
+ "privateZoneName": {
+ "value": ""
+ },
+ "PrivateEndpointDnsGroupName": {
"value": "" },
- "IPAddress": {
- "type":"object"
+ "privateEndpointName":{
+ "value": ""
} } }
$PrivateZoneName = "myPrivateZone.documents.azure.com"
# Name of the private endpoint to create $PrivateEndpointName = "myPrivateEndpoint"
+# Name of the DNS zone group to create
+$PrivateEndpointDnsGroupName = "myPrivateDNSZoneGroup"
+ $cosmosDbResourceId = "/subscriptions/$($SubscriptionId)/resourceGroups/$($ResourceGroupName)/providers/Microsoft.DocumentDB/databaseAccounts/$($CosmosDbAccountName)" $VNetResourceId = "/subscriptions/$($SubscriptionId)/resourceGroups/$($ResourceGroupName)/providers/Microsoft.Network/virtualNetworks/$($VNetName)" $SubnetResourceId = "$($VNetResourceId)/subnets/$($SubnetName)" $PrivateZoneTemplateFilePath = "PrivateZone_template.json" $PrivateZoneParametersFilePath = "PrivateZone_parameters.json"
-$PrivateZoneRecordsTemplateFilePath = "PrivateZoneRecords_template.json"
-$PrivateZoneRecordsParametersFilePath = "PrivateZoneRecords_parameters.json"
$PrivateEndpointTemplateFilePath = "PrivateEndpoint_template.json" $PrivateEndpointParametersFilePath = "PrivateEndpoint_parameters.json"
+$PrivateZoneGroupTemplateFilePath = "PrivateZoneGroup_template.json"
+$PrivateZoneGroupParametersFilePath = "PrivateZoneGroup_parameters.json"
## Step 2: Login your Azure account and select the target subscription Login-AzAccount
$deploymentOutput = New-AzResourceGroupDeployment -Name "PrivateCosmosDbEndpoint
-PrivateEndpointName $PrivateEndpointName $deploymentOutput
-## Step 6: Map the private endpoint to the private zone
-$networkInterface = Get-AzResource -ResourceId $deploymentOutput.Outputs.privateEndpointNetworkInterface.Value -ApiVersion "2019-04-01"
-foreach ($ipconfig in $networkInterface.properties.ipConfigurations) {
- foreach ($fqdn in $ipconfig.properties.privateLinkConnectionProperties.fqdns) {
- $recordName = $fqdn.split('.',2)[0]
- $dnsZone = $fqdn.split('.',2)[1]
- Write-Output "Deploying PrivateEndpoint DNS Record $($PrivateZoneName)/$($recordName) Template on $($resourceGroupName)"
- New-AzResourceGroupDeployment -Name "PrivateEndpointDNSDeployment" `
- -ResourceGroupName $ResourceGroupName `
- -TemplateFile $PrivateZoneRecordsTemplateFilePath `
- -TemplateParameterFile $PrivateZoneRecordsParametersFilePath `
- -DNSRecordName "$($PrivateZoneName)/$($RecordName)" `
- -IPAddress $ipconfig.properties.privateIPAddress
- }
-}
+## Step 6: Create the private DNS zone group
+New-AzResourceGroupDeployment -Name "PrivateZoneGroupDeployment" `
+ -ResourceGroupName $ResourceGroupName `
+ -TemplateFile $PrivateZoneGroupTemplateFilePath `
+ -TemplateParameterFile $PrivateZoneGroupParametersFilePath `
+ -PrivateZoneName $PrivateZoneName `
+    -PrivateEndpointName $PrivateEndpointName `
+ -PrivateEndpointDnsGroupName $PrivateEndpointDnsGroupName
+ ``` ## Configure custom DNS
When you're using Private Link with an Azure Cosmos account through a direct mod
## Update a private endpoint when you add or remove a region
-Unless you're using a private DNS zone group, adding or removing regions to an Azure Cosmos account requires you to add or remove DNS entries for that account. After regions have been added or removed, you can update the subnet's private DNS zone to reflect the added or removed DNS entries and their corresponding private IP addresses.
+For example, suppose you deploy an Azure Cosmos account in three regions: "West US," "Central US," and "West Europe." When you create a private endpoint for your account, four private IPs are reserved in the subnet. There's one IP for each of the three regions, and there's one IP for the global/region-agnostic endpoint. Later, you might add a new region (for example, "East US") to the Azure Cosmos account. The private DNS zone is updated as follows:
+
+* **If private DNS zone group is used:**
+
+ If you are using a private DNS zone group, the private DNS zone is automatically updated when the private endpoint is updated. In the previous example, after adding a new region, the private DNS zone is automatically updated.
-For example, imagine that you deploy an Azure Cosmos account in three regions: "West US," "Central US," and "West Europe." When you create a private endpoint for your account, four private IPs are reserved in the subnet. There's one IP for each of the three regions, and there's one IP for the global/region-agnostic endpoint.
+* **If private DNS zone group is not used:**
-Later, you might add a new region (for example, "East US") to the Azure Cosmos account. After adding the new region, you need to add a corresponding DNS record to either your private DNS zone or your custom DNS.
+ If you are not using a private DNS zone group, adding or removing regions to an Azure Cosmos account requires you to add or remove DNS entries for that account. After regions have been added or removed, you can update the subnet's private DNS zone to reflect the added or removed DNS entries and their corresponding private IP addresses.
-You can use the same steps when you remove a region. After removing the region, you need to remove the corresponding DNS record from either your private DNS zone or your custom DNS.
+ In the previous example, after adding the new region, you need to add a corresponding DNS record to either your private DNS zone or your custom DNS. You can use the same steps when you remove a region. After removing the region, you need to remove the corresponding DNS record from either your private DNS zone or your custom DNS.
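   If you need to add the record manually, a sketch using the Azure CLI private DNS record commands might look like the following. The record name and IP address below are placeholders; use the FQDN and private IP reported on the private endpoint's network interface for the newly added region.

   ```azurecli
   # Sketch: create an A record for the new region-specific endpoint (placeholder values)
   az network private-dns record-set a create --name mycosmosaccount-eastus \
       --zone-name privatelink.documents.azure.com --resource-group $ResourceGroupName

   az network private-dns record-set a add-record --record-set-name mycosmosaccount-eastus \
       --zone-name privatelink.documents.azure.com --resource-group $ResourceGroupName \
       -a 10.0.0.8
   ```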
## Current limitations
To learn more about Azure Cosmos DB security features, see the following article
* To learn how to configure a virtual network service endpoint for your Azure Cosmos account, see [Configure access from virtual networks](how-to-configure-vnet-service-endpoint.md).
-* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.
+* To learn more about Private Link, see the [Azure Private Link](../private-link/private-link-overview.md) documentation.
cost-management-billing Programmatically Create Subscription Enterprise Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-enterprise-agreement.md
Previously updated : 03/12/2021 Last updated : 03/29/2021
To install the latest version of the module that contains the `New-AzSubscriptio
Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscription) command, using the billing scope `"/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321"`. ```azurepowershell-interactive
-New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321" -Workload 'Production"
+New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/BillingAccounts/1234567/enrollmentAccounts/7654321" -Workload "Production"
``` You get the subscriptionId as part of the response from the command.
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/spending-limit.md
tags: billing
Previously updated : 08/20/2020 Last updated : 03/29/2021 # Azure spending limit
-The spending limit in Azure prevents spending over your credit amount. All new customers who sign up for an Azure free account or subscription types that include credits over multiple months have the spending limit turned on by default. The spending limit is equal to the amount of credit and it canΓÇÖt be changed. For example, if you signed up for Azure free account, your spending limit is $200 and you can't change it to $500. However, you can remove the spending limit. So, you either have no limit, or you have a limit equal to the amount of credit. This prevents you from most kinds of spending. The spending limit isnΓÇÖt available for subscriptions with commitment plans or with pay-as-you-go pricing. See the [full list of Azure subscription types and the availability of the spending limit](https://azure.microsoft.com/support/legal/offer-details/).
+The spending limit in Azure prevents spending over your credit amount. All new customers who sign up for an Azure free account or subscription types that include credits over multiple months have the spending limit turned on by default. The spending limit is equal to the amount of credit. You can't change the amount of the spending limit. For example, if you signed up for an Azure free account, your spending limit is $200 and you can't change it to $500. However, you can remove the spending limit. So, you either have no limit, or you have a limit equal to the amount of credit. This prevents you from most kinds of spending. The spending limit isn't available for subscriptions with commitment plans or with pay-as-you-go pricing. See the [full list of Azure subscription types and the availability of the spending limit](https://azure.microsoft.com/support/legal/offer-details/).
## Reaching a spending limit
cost-management-billing Manage Reserved Vm Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/reservations/manage-reserved-vm-instance.md
Previously updated : 02/09/2021 Last updated : 03/29/2021 # Manage Reservations for Azure resources
The scope only applies to individual subscriptions with pay-as-you-go rates (off
By default, the following users can view and manage reservations: -- The person who buys a reservation and the account administrator of the billing subscription used to buy the reservation are added to the reservation order.-- Enterprise Agreement and Microsoft Customer Agreement billing administrators.
+- The person who bought the reservation and the account owner for the billing subscription get Azure RBAC access to the reservation order.
+- Enterprise Agreement and Microsoft Customer Agreement billing contributors can manage all reservations from Cost Management + Billing > Reservation Transactions > select the blue banner.
To allow other people to manage reservations, you have two options:
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-databricks-delta-lake.md
Previously updated : 11/24/2020 Last updated : 03/29/2021 # Copy data to and from Azure Databricks Delta Lake by using Azure Data Factory
To use this Azure Databricks Delta Lake connector, you need to set up a cluster
The Databricks cluster needs to have access to Azure Blob or Azure Data Lake Storage Gen2 account, both the storage container/file system used for source/sink/staging and the container/file system where you want to write the Delta Lake tables. -- To use **Azure Data Lake Storage Gen2**, you can configure a **service principal** or **storage account access key** on the Databricks cluster as part of the Apache Spark configuration. Follow the steps in [Access directly with service principal](/azure/databricks/data/data-sources/azure/azure-datalake-gen2#--access-directly-with-service-principal-and-oauth-20) or [Access directly using the storage account access key](/azure/databricks/data/data-sources/azure/azure-datalake-gen2#--access-directly-using-the-storage-account-access-key).
+- To use **Azure Data Lake Storage Gen2**, you can configure a **service principal** on the Databricks cluster as part of the Apache Spark configuration. Follow the steps in [Access directly with service principal](/azure/databricks/data/data-sources/azure/azure-datalake-gen2#--access-directly-with-service-principal-and-oauth-20).
- To use **Azure Blob storage**, you can configure a **storage account access key** or **SAS token** on the Databricks cluster as part of the Apache Spark configuration. Follow the steps in [Access Azure Blob storage using the RDD API](/azure/databricks/data/data-sources/azure/azure-storage#access-azure-blob-storage-using-the-rdd-api).
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-file-system.md
description: Learn how to copy data from file system to supported sink data stor
Previously updated : 03/17/2021 Last updated : 03/29/2021
Specifically, this file system connector supports:
- Copying files using **Windows** authentication. - Copying files as-is or parsing/generating files with the [supported file formats and compression codecs](supported-file-formats-and-compression-codecs.md).
+> [!NOTE]
+> A mapped network drive is not supported when loading data from a network file share. Use the actual path instead, for example, `\\server\share`.
+ ## Prerequisites [!INCLUDE [data-factory-v2-integration-runtime-requirements](../../includes/data-factory-v2-integration-runtime-requirements.md)]
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 03/10/2021 Last updated : 03/26/2021 # Data transformation expressions in mapping data flow
Returns the first not null value from a set of inputs. All inputs should be of t
* ``coalesce(10, 20) -> 10`` * ``coalesce(toString(null), toString(null), 'dumbo', 'bo', 'go') -> 'dumbo'`` ___
-### <code>collect</code>
-<code><b>collect(<i>&lt;value1&gt;</i> : any) => array</b></code><br/><br/>
-Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small.
-* ``collect(salesPerson)``
-* ``collect(firstName + lastName))``
-* ``collect(@(name = salesPerson, sales = salesAmount) )``
-___
### <code>columnNames</code> <code><b>columnNames(<i>&lt;value1&gt;</i> : string) => array</b></code><br/><br/> Gets the names of all output columns for a stream. You can pass an optional stream name as the second argument.
___
<code><b>escape(<i>&lt;string_to_escape&gt;</i> : string, <i>&lt;format&gt;</i> : string) => string</b></code><br/><br/> Escapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'. ___
+### <code>expr</code>
+<code><b>expr(<i>&lt;expr&gt;</i> : string) => any</b></code><br/><br/>
+Results in an expression from a string. This is the same as writing the expression in a non-literal form. This can be used to pass parameters as string representations.
+* ``expr('price * discount') => any``
+___
### <code>factorial</code> <code><b>factorial(<i>&lt;value1&gt;</i> : number) => long</b></code><br/><br/> Calculates the factorial of a number.
___
Based on a criteria gets the average of values of a column. * ``avgIf(region == 'West', sales)`` ___
+### <code>collect</code>
+<code><b>collect(<i>&lt;value1&gt;</i> : any) => array</b></code><br/><br/>
+Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small.
+* ``collect(salesPerson)``
+* ``collect(firstName + lastName)``
+* ``collect(@(name = salesPerson, sales = salesAmount) )``
+___
### <code>count</code> <code><b>count([<i>&lt;value1&gt;</i> : any]) => long</b></code><br/><br/> Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count.
Gets the first value of a column group. If the second parameter ignoreNulls is o
* ``first(sales)`` * ``first(sales, false)`` ___
+### <code>isDistinct</code>
+<code><b>isDistinct(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Finds if a column or set of columns is distinct. It does not count null as a distinct value.
+* ``isDistinct(custId, custName) => boolean``
+___
### <code>kurtosis</code> <code><b>kurtosis(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/> Gets the kurtosis of a column.
___
Conversion functions are used to convert data and test for data types
+### <code>isBitSet</code>
+<code><b>isBitSet (<value1> : array, <value2>:integer ) => boolean</b></code><br/><br/>
+Checks if a bit position is set in this bitset.
+* ``isBitSet(toBitSet([10, 32, 98]), 10) => true``
+___
+### <code>setBitSet</code>
+<code><b>setBitSet (<value1> : array, <value2>:array) => array</b></code><br/><br/>
+Sets bit positions in this bitset.
+* ``setBitSet(toBitSet([10, 32]), [98]) => [4294968320L, 17179869184L]``
+___
### <code>isBoolean</code> <code><b>isBoolean(<value1> : string) => boolean</b></code><br/><br/> Checks if the string value is a boolean value according to the rules of ``toBoolean()``
Select an array of columns by name in the stream. You can pass an optional stream
* ``toString(byNames(['a Column'], 'DeriveStream'))`` * ``byNames(['orderItem']) ? (itemName as string, itemQty as integer)`` ___
+### <code>byPath</code>
+<code><b>byPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => any</b></code><br/><br/>
+Finds a hierarchical path by name in the stream. You can pass an optional stream name as the second argument. If no such path is found it returns null. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions.
+* ``byPath('grandpa.parent.child') => column``
+___
### <code>byPosition</code> <code><b>byPosition(<i>&lt;position&gt;</i> : integer) => any</b></code><br/><br/> Selects a column value by its relative position(1 based) in the stream. If the position is out of bounds it returns a NULL value. The returned value has to be type converted by one of the type conversion functions(TO_DATE, TO_STRING ...) Computed inputs are not supported but you can use parameter substitutions.
Selects a column value by its relative position(1 based) in the stream. If the p
* ``toString(byName($colName))`` * ``toString(byPosition(1234))`` ___
+### <code>hasPath</code>
+<code><b>hasPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => boolean</b></code><br/><br/>
+Checks if a certain hierarchical path exists by name in the stream. You can pass an optional stream name as the second argument. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions.
+* ``hasPath('grandpa.parent.child') => boolean``
+___
### <code>hex</code> <code><b>hex(<value1>: binary) => string</b></code><br/><br/> Returns a hex string representation of a binary value
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 03/18/2021 Last updated : 03/25/2021 # Troubleshoot mapping data flows in Azure Data Factory
This article explores common troubleshooting methods for mapping data flows in A
2. Check the status of your file and table connections in the data flow designer. In debug mode, select **Data Preview** on your source transformations to ensure that you can access your data. 3. If everything looks correct in data preview, go into the Pipeline designer and put your data flow in a Pipeline activity. Debug the pipeline for an end-to-end test.
+### Improvement on CSV/CDM format in Data Flow
+
+If you use **Delimited Text or CDM formatting for mapping data flow in Azure Data Factory V2**, you may see behavior changes in your existing pipelines because of the improvement to Delimited Text/CDM in data flow starting from **1 May 2021**.
+
+You may have encountered the following issues before the improvement; the improvement fixes them. Read the following content to determine whether this improvement affects you.
+
+#### Scenario 1: Encounter the unexpected row delimiter issue
+
+ You are affected if all of the following conditions apply:
+ - Using the Delimited Text with the Multiline setting set to True or CDM as the source.
+ - The first row has more than 128 characters.
+ - The row delimiter in data files is not `\n`.
+
+ Before the improvement, the default row delimiter `\n` may be unexpectedly used to parse delimited text files, because when the Multiline setting is set to True, it invalidates the row delimiter setting, and the row delimiter is automatically detected based on the first 128 characters. If the actual row delimiter cannot be detected, it falls back to `\n`.
+
+ After the improvement, any one of the three row delimiters `\r`, `\n`, and `\r\n` works.
+
+ The following example shows you one pipeline behavior change after the improvement:
+
+ **Example**:<br/>
+ For the following column:<br/>
+ `C1, C2, {long first row}, C128\r\n `<br/>
+ `V1, V2, {values………………….}, V128\r\n `<br/>
+
+ Before the improvement, `\r` is kept in the column value. The parsed column result is:<br/>
+ `C1 C2 {long first row} C128`**`\r`**<br/>
+ `V1 V2 {values………………….} V128`**`\r`**<br/> 
+
+ After the improvement, the parsed column result should be:<br/>
+ `C1 C2 {long first row} C128`<br/>
+ `V1 V2 {values………………….} V128`<br/>
+
+#### Scenario 2: Encounter an issue of incorrectly reading column values containing '\r\n'
+
+ You are affected if all of the following conditions apply:
+ - Using the Delimited Text with the Multiline setting set to True or CDM as a source.
+ - The row delimiter is `\r\n`.
+
+ Before the improvement, when reading the column value, the `\r\n` in it may be incorrectly replaced by `\n`.
+
+ After the improvement, `\r\n` in the column value will not be replaced by `\n`.
+
+ The following example shows you one pipeline behavior change after the improvement:
+
+ **Example**:<br/>
+
+ For the following column:<br/>
+ **`"A\r\n"`**`, B, C\r\n`<br/>
+
+ Before the improvement, the parsed column result is:<br/>
+ **`A\n`**` B C`<br/>
+
+ After the improvement, the parsed column result should be:<br/>
+ **`A\r\n`**` B C`<br/>
+
+#### Scenario 3: Encounter an issue of incorrectly writing column values containing '\n'
+
+ You are affected if all of the following conditions apply:
+ - Using the Delimited Text as a sink.
+ - The column value contains `\n`.
+ - The row delimiter is set to `\r\n`.
+
+ Before the improvement, when writing the column value, the `\n` in it may be incorrectly replaced by `\r\n`.
+
+ After the improvement, `\n` in the column value will not be replaced by `\r\n`.
+
+ The following example shows you one pipeline behavior change after the improvement:
+
+ **Example**:<br/>
+
+ For the following column:<br/>
+ **`A\n`**` B C`<br/>
+
+ Before the improvement, the CSV sink is:<br/>
+ **`"A\r\n"`**`, B, C\r\n` <br/>
+
+ After the improvement, the CSV sink should be:<br/>
+ **`"A\n"`**`, B, C\r\n`<br/>
+
+#### Scenario 4: Encounter an issue of incorrectly reading empty string as NULL
+
+ You are affected if all of the following conditions apply:
+ - Using the Delimited Text as a source.
+ - The NULL value is set to a non-empty value.
+ - The column value is empty string and is unquoted.
+
+ Before the improvement, an unquoted empty string column value is read as NULL.
+
+ After the improvement, an empty string will not be parsed as a NULL value.
+
+ The following example shows you one pipeline behavior change after the improvement:
+
+ **Example**:<br/>
+
+ For the following column:<br/>
+ `A, ,B, `<br/>
+
+ Before the improvement, the parsed column result is:<br/>
+ `A null B null`<br/>
+
+ After the improvement, the parsed column result should be:<br/>
+ `A "" (empty string) B "" (empty string)`<br/>
++ ## Next steps For more help with troubleshooting, see these resources:
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/format-delta.md
description: Transform and move data from a delta lake using the delta format
Previously updated : 12/07/2020 Last updated : 03/26/2021
The below table lists the properties supported by a delta sink. You can edit the
| Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressedType` is specified. | `Optimal` or `Fastest` | compressionLevel | | Vacuum | Specify retention threshold in hours for older versions of table. A value of 0 or less defaults to 30 days | yes | Integer | vacuum | | Update method | Specify which update operations are allowed on the delta lake. For methods that aren't insert, a preceding alter row transformation is required to mark rows. | yes | `true` or `false` | deletable <br> insertable <br> updateable <br> merge |
+| Optimized Write | Achieve higher throughput for write operations by optimizing the internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size. | no | `true` or `false` | optimizedWrite: true |
+| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to reorganize the data, resulting in more partitions if necessary, for better read performance in the future. | no | `true` or `false` | autoCompact: true |
### Delta sink script example
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
1. If your pipeline has parameters, you can specify them on the trigger runs parameter side nav. The storage event trigger captures the folder path and file name of the blob into the properties `@triggerBody().folderPath` and `@triggerBody().fileName`. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the `@pipeline().parameters.parameterName` expression throughout the pipeline. For detailed explanation, see [Reference Trigger Metadata in Pipelines](how-to-use-trigger-parameterization.md)
- :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters.":::
+ :::image type="content" source="media/how-to-create-event-trigger/event-based-trigger-image4.png" alt-text="Screenshot of storage event trigger mapping properties to pipeline parameters.":::
- In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively.
+ In the preceding example, the trigger is configured to fire when a blob path ending in .csv is created in the folder _event-testing_ in the container _sample-data_. The **folderPath** and **fileName** properties capture the location of the new blob. For example, when MoviesDB.csv is added to the path sample-data/event-testing, `@triggerBody().folderPath` has a value of `sample-data/event-testing` and `@triggerBody().fileName` has a value of `moviesDB.csv`. These values are mapped, in the example, to the pipeline parameters `sourceFolder` and `sourceFile`, which can be used throughout the pipeline as `@pipeline().parameters.sourceFolder` and `@pipeline().parameters.sourceFile` respectively.
+
+ > [!NOTE]
+ > If you are creating your pipeline and trigger in [Azure Synapse Analytics](https://docs.microsoft.com/azure/synapse-analytics/), you must use `@trigger().outputs.body.fileName` and `@trigger().outputs.body.folderPath` as parameters. Those two properties capture blob information. Use those properties instead of using `@triggerBody().fileName` and `@triggerBody().folderPath`.
+
1. Click **Finish** once you are done.
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-using-azure-monitor.md
https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnost
| | | | | **storageAccountId** |String | The resource ID of the storage account to which you want to send diagnostic logs. | | **serviceBusRuleId** |String | The service-bus rule ID of the service-bus namespace in which you want to have Event Hubs created for streaming diagnostic logs. The rule ID has the format `{service bus resource ID}/authorizationrules/{key name}`.|
-| **workspaceId** | Complex Type | An array of metric time grains and their retention policies. This property's value is empty. |
+| **workspaceId** | String | The resource ID of the Log Analytics workspace where the logs will be saved. |
|**metrics**| Complex Type| An array of metric time grains and their retention policies. This property's value is empty. | | **logs**| Complex Type| The name of a diagnostic-log category for a resource type. To get the list of diagnostic-log categories for a resource, perform a GET diagnostic-settings operation. | | **category**| String| An array of log categories and their retention policies. |
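If you prefer Azure CLI over the raw REST call, a hedged sketch of an equivalent diagnostic setting might look like the following. The resource ID, workspace ID, and the log category shown are placeholders; the available categories come from the GET diagnostic-settings operation described above.

```azurecli
# Sketch: send a data factory's ActivityRuns logs and all metrics to a Log Analytics workspace
az monitor diagnostic-settings create --name adf-diagnostics \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.DataFactory/factories/<factory>" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
    --logs '[{"category": "ActivityRuns", "enabled": true}]' \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```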
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-azure-cli.md
+
+ Title: "Quickstart: Create an Azure Data Factory using Azure CLI"
+description: This quickstart creates an Azure Data Factory, including a linked service, datasets, and a pipeline. You can run the pipeline to do a file copy action.
++++ Last updated : 03/24/2021+
+ - template-quickstart
+ - devx-track-azurecli
++
+# Quickstart: Create an Azure Data Factory using Azure CLI
+
+This quickstart describes how to use Azure CLI to create an Azure Data Factory. The pipeline you create in this data factory copies data from one folder to another folder in an Azure Blob Storage. For information on how to transform data using Azure Data Factory, see [Transform data in Azure Data Factory](transform-data.md).
+
+For an introduction to the Azure Data Factory service, see [Introduction to Azure Data Factory](introduction.md).
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
++
+> [!NOTE]
+> To create Data Factory instances, the user account that you use to sign in to Azure must be a member of the contributor or owner role, or an administrator of the Azure subscription. For more information, see [Azure roles](quickstart-create-data-factory-powershell.md#azure-roles).
+
+## Prepare a container and test file
+
+This quickstart uses an Azure Storage account, which includes a container with a file.
+
+1. To create a resource group named `ADFQuickStartRG`, use the [az group create](/cli/azure/group#az_group_create) command:
+
+ ```azurecli
+ az group create --name ADFQuickStartRG --location eastus
+ ```
+
+1. Create a storage account by using the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command:
+
+ ```azurecli
+ az storage account create --resource-group ADFQuickStartRG \
+ --name adfquickstartstorage --location eastus
+ ```
+
+1. Create a container named `adftutorial` by using the [az storage container create](/cli/azure/storage/container#az_storage_container_create) command:
+
+ ```azurecli
+ az storage container create --resource-group ADFQuickStartRG --name adftutorial \
+ --account-name adfquickstartstorage --auth-mode key
+ ```
+
+1. In the local directory, create a file named `emp.txt` to upload. If you're working in Azure Cloud Shell, you can find the current working directory by using the `echo $PWD` Bash command. You can use standard Bash commands, like `cat`, to create a file:
+
+ ```console
+ cat > emp.txt
+ This is text.
+ ```
+
+ Use **Ctrl+D** to save your new file.
+
+1. To upload the new file to your Azure storage container, use the [az storage blob upload](/cli/azure/storage/blob#az_storage_blob_upload) command:
+
+ ```azurecli
+ az storage blob upload --account-name adfquickstartstorage --name input/emp.txt \
+ --container-name adftutorial --file emp.txt --auth-mode key
+ ```
+
+ This command uploads to a new folder named `input`.
+
+## Create a data factory
+
+To create an Azure data factory, run the [az datafactory factory create](/cli/azure/ext/datafactory/datafactory/factory#ext_datafactory_az_datafactory_factory_create) command:
+
+```azurecli
+az datafactory factory create --resource-group ADFQuickStartRG \
+ --factory-name ADFTutorialFactory
+```
+
+> [!IMPORTANT]
+> Replace `ADFTutorialFactory` with a globally unique data factory name, for example, ADFTutorialFactorySP1127.
+
+You can see the data factory that you created by using the [az datafactory factory show](/cli/azure/ext/datafactory/datafactory/factory#ext_datafactory_az_datafactory_factory_show) command:
+
+```azurecli
+az datafactory factory show --resource-group ADFQuickStartRG \
+ --factory-name ADFTutorialFactory
+```
+
+## Create a linked service and datasets
+
+Next, create a linked service and two datasets.
+
+1. Get the connection string for your storage account by using the [az storage account show-connection-string](/cli/azure/storage/account#az_storage_account_show_connection_string) command:
+
+ ```azurecli
+ az storage account show-connection-string --resource-group ADFQuickStartRG \
+ --name adfquickstartstorage --key primary
+ ```
+
+1. In your working directory, create a JSON file with this content, which includes your own connection string from the previous step. Name the file `AzureStorageLinkedService.json`:
+
+ ```json
+ {
+ "type":"AzureStorage",
+ "typeProperties":{
+ "connectionString":{
+ "type": "SecureString",
+ "value":"DefaultEndpointsProtocol=https;AccountName=adfquickstartstorage;AccountKey=K9F4Xk/EhYrMBIR98rtgJ0HRSIDU4eWQILLh2iXo05Xnr145+syIKNczQfORkQ3QIOZAd/eSDsvED19dAwW/tw==;EndpointSuffix=core.windows.net"
+ }
+ }
+ }
+ ```
+
+1. Create a linked service, named `AzureStorageLinkedService`, by using the [az datafactory linked-service create](/cli/azure/ext/datafactory/datafactory/linked-service#ext_datafactory_az_datafactory_linked_service_create) command:
+
+ ```azurecli
+ az datafactory linked-service create --resource-group ADFQuickStartRG \
+ --factory-name ADFTutorialFactory --linked-service-name AzureStorageLinkedService \
+ --properties @AzureStorageLinkedService.json
+ ```
+
+1. In your working directory, create a JSON file with this content, named `InputDataset.json`:
+
+ ```json
+ {
+ "linkedServiceName": {
+ "type":"LinkedServiceReference",
+ "referenceName":"AzureStorageLinkedService"
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": "emp.txt",
+ "folderPath": "input",
+ "container": "adftutorial"
+ }
+ }
+ }
+ ```
+
+1. Create an input dataset named `InputDataset` by using the [az datafactory dataset create](/cli/azure/ext/datafactory/datafactory/dataset#ext_datafactory_az_datafactory_dataset_create) command:
+
+ ```azurecli
+ az datafactory dataset create --resource-group ADFQuickStartRG \
+        --dataset-name InputDataset --factory-name ADFTutorialFactory \
+ --properties @InputDataset.json
+ ```
+
+1. In your working directory, create a JSON file with this content, named `OutputDataset.json`:
+
+ ```json
+ {
+ "linkedServiceName": {
+ "type":"LinkedServiceReference",
+ "referenceName":"AzureStorageLinkedService"
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": "emp.txt",
+ "folderPath": "output",
+ "container": "adftutorial"
+ }
+ }
+ }
+ ```
+
+1. Create an output dataset named `OutputDataset` by using the [az datafactory dataset create](/cli/azure/ext/datafactory/datafactory/dataset#ext_datafactory_az_datafactory_dataset_create) command:
+
+ ```azurecli
+ az datafactory dataset create --resource-group ADFQuickStartRG \
+        --dataset-name OutputDataset --factory-name ADFTutorialFactory \
+ --properties @OutputDataset.json
+ ```
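+
+As an optional check, you can list the datasets in the factory and confirm that both `InputDataset` and `OutputDataset` were created:
+
+```azurecli
+az datafactory dataset list --resource-group ADFQuickStartRG \
+    --factory-name ADFTutorialFactory --query "[].name"
+```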
+
+## Create and run the pipeline
+
+Finally, create and run the pipeline.
+
+1. In your working directory, create a JSON file with this content named `Adfv2QuickStartPipeline.json`:
+
+ ```json
+ {
+ "name": "Adfv2QuickStartPipeline",
+ "properties": {
+ "activities": [
+ {
+ "name": "CopyFromBlobToBlob",
+ "type": "Copy",
+ "dependsOn": [],
+ "policy": {
+ "timeout": "7.00:00:00",
+ "retry": 0,
+ "retryIntervalInSeconds": 30,
+ "secureOutput": false,
+ "secureInput": false
+ },
+ "userProperties": [],
+ "typeProperties": {
+ "source": {
+ "type": "BinarySource",
+ "storeSettings": {
+ "type": "AzureBlobStorageReadSettings",
+ "recursive": true
+ }
+ },
+ "sink": {
+ "type": "BinarySink",
+ "storeSettings": {
+ "type": "AzureBlobStorageWriteSettings"
+ }
+ },
+ "enableStaging": false
+ },
+ "inputs": [
+ {
+ "referenceName": "InputDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "OutputDataset",
+ "type": "DatasetReference"
+ }
+ ]
+ }
+ ],
+ "annotations": []
+ }
+ }
+ ```
+
+1. Create a pipeline named `Adfv2QuickStartPipeline` by using the [az datafactory pipeline create](/cli/azure/ext/datafactory/datafactory/pipeline#ext_datafactory_az_datafactory_pipeline_create) command:
+
+ ```azurecli
+ az datafactory pipeline create --resource-group ADFQuickStartRG \
+ --factory-name ADFTutorialFactory --name Adfv2QuickStartPipeline \
+ --pipeline @Adfv2QuickStartPipeline.json
+ ```
+
+1. Run the pipeline by using the [az datafactory pipeline create-run](/cli/azure/ext/datafactory/datafactory/pipeline#ext_datafactory_az_datafactory_pipeline_create_run) command:
+
+ ```azurecli
+ az datafactory pipeline create-run --resource-group ADFQuickStartRG \
+ --name Adfv2QuickStartPipeline --factory-name ADFTutorialFactory
+ ```
+
+    This command returns a run ID. Copy it for use in the next command. A scripted alternative that captures the run ID for you is sketched after this procedure.
+
+1. Verify that the pipeline run succeeded by using the [az datafactory pipeline-run show](/cli/azure/ext/datafactory/datafactory/pipeline-run#ext_datafactory_az_datafactory_pipeline_run_show) command:
+
+ ```azurecli
+ az datafactory pipeline-run show --resource-group ADFQuickStartRG \
+ --factory-name ADFTutorialFactory --run-id 00000000-0000-0000-0000-000000000000
+ ```
+
+You can also verify that your pipeline ran as expected by using the [Azure portal](https://portal.azure.com/). For more information, see [Review deployed resources](quickstart-create-data-factory-powershell.md#review-deployed-resources).
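+
+If you're scripting this quickstart instead of running the commands interactively, you don't have to copy the run ID manually. The following sketch assumes a Bash shell and relies on the `runId` and `status` fields of the command output:
+
+```azurecli
+runId=$(az datafactory pipeline create-run --resource-group ADFQuickStartRG \
+    --name Adfv2QuickStartPipeline --factory-name ADFTutorialFactory \
+    --query runId --output tsv)
+
+az datafactory pipeline-run show --resource-group ADFQuickStartRG \
+    --factory-name ADFTutorialFactory --run-id $runId --query status
+```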
+
+## Clean up resources
+
+All of the resources in this quickstart are part of the same resource group. To remove them all, use the [az group delete](/cli/azure/group#az_group_delete) command:
+
+```azurecli
+az group delete --name ADFQuickStartRG
+```
+
+If you're using this resource group for anything else, instead, delete individual resources. For instance, to remove the linked service, use the [az datafactory linked-service delete](/cli/azure/ext/datafactory/datafactory/linked-service#ext_datafactory_az_datafactory_linked_service_delete) command.
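+
+For example, the following command removes only the linked service created earlier in this quickstart, assuming you kept the same resource names:
+
+```azurecli
+az datafactory linked-service delete --resource-group ADFQuickStartRG \
+    --factory-name ADFTutorialFactory --linked-service-name AzureStorageLinkedService
+```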
+
+In this quickstart, you created the following JSON files:
+
+- AzureStorageLinkedService.json
+- InputDataset.json
+- OutputDataset.json
+- Adfv2QuickStartPipeline.json
+
+Delete them by using standard Bash commands.
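+
+For example, assuming the files are in your current working directory:
+
+```console
+rm AzureStorageLinkedService.json InputDataset.json OutputDataset.json Adfv2QuickStartPipeline.json
+```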
+
+## Next steps
+
+- [Pipelines and activities in Azure Data Factory](concepts-pipelines-activities.md)
+- [Linked services in Azure Data Factory](concepts-linked-services.md)
+- [Datasets in Azure Data Factory](concepts-datasets-linked-services.md)
+- [Transform data in Azure Data Factory](transform-data.md)
data-factory Ssis Integration Runtime Ssis Activity Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ssis-integration-runtime-ssis-activity-faq.md
description: "This article provides troubleshooting guidance for SSIS package ex
-+ Last updated 04/15/2019
This error occurs when the SSIS integration runtime can't access the storage con
One potential cause is that the username or password with Azure AD Multi-Factor Authentication enabled is configured for Azure Analysis Services authentication. This authentication isn't supported in the SSIS integration runtime. Try to use a service principal for Azure Analysis Services authentication: 1. Prepare a service principal as described in [Automation with service principals](../analysis-services/analysis-services-service-principal.md).
-2. In Connection Manager, configure **Use a specific user name and password**: set **AppID** as the username and **clientSecret** as the password.
+2. In the Connection Manager, configure **Use a specific user name and password:** set **app:*&lt;AppID&gt;*@*&lt;TenantID&gt;*** as the username and clientSecret as the password. Here is an example of a correctly formatted user name:
+
+ `app:12345678-9012-3456-789a-bcdef012345678@9abcdef0-1234-5678-9abc-def0123456789abc`
### Error message: "ADONET Source has failed to acquire the connection {GUID} with the following error message: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'" when using a managed identity
Try these actions:
* Check IR node performance in the Azure portal: * For information about how to monitor the SSIS integration runtime, see [Azure-SSIS integration runtime](monitor-integration-runtime.md#azure-ssis-integration-runtime). * You can find CPU/memory history for the SSIS integration runtime by viewing the metrics of the data factory in the Azure portal.
- ![Monitor metrics of the SSIS integration runtime](media/ssis-integration-runtime-ssis-activity-faq/monitor-metrics-ssis-integration-runtime.png)
+ ![Monitor metrics of the SSIS integration runtime](media/ssis-integration-runtime-ssis-activity-faq/monitor-metrics-ssis-integration-runtime.png)
data-factory Wrangling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/wrangling-tutorial.md
Last updated 01/19/2021
Data wrangling in data factory allows you to build interactive Power Query mash-ups natively in ADF and then execute those at scale inside of an ADF pipeline. > [!NOTE]
-> Power Query acitivty in ADF is currently avilable in public preview
+> Power Query activity in ADF is currently available in public preview
## Create a Power Query activity
databox-online Azure Stack Edge Gpu 2103 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-2103-release-notes.md
The following new features are available in the Azure Stack Edge 2103 release.
- **Improvements for Compute** - Several enhancements and improvements were made including those for: - **Overall compute platform quality**. This release has bug fixes to improve the overall compute platform quality. See the [Issues fixed in 2103 release](#issues-fixed-in-2103-release).
- - **Compute platform components**. Security updates were applied to Compute VM image. IoT Edge and Azure Arc for Kubernetes versions were also updated.
- - **Diagnostics**. A new API is released to check resource and network conditions. You can connect to the PowerShell interface of the device and use the `Test-HcsKubernetesStatus` command to verify the network readiness of the device.
- - **Log collection** that would lead to improved debugging.
- - **Alerting infrastructure** that will allow you to detect IP address conflicts for compute IP addresses.
- - **Mix workload** of Kubernetes and local Azure Resource Manager.
+ - **Compute platform components**. Security updates were applied to Compute VM image. IoT Edge and Azure Arc for Kubernetes versions were also updated.
+ - **Diagnostics**. A new API is released to check resource and network conditions. You can connect to the PowerShell interface of the device and use the `Test-HcsKubernetesStatus` command to verify the network readiness of the device.
+ - **Log collection** that would lead to improved debugging.
+ - **Alerting infrastructure** that will allow you to detect IP address conflicts for compute IP addresses.
+ - **Mix workload** of Kubernetes and local Azure Resource Manager.
- **Proactive logging by default** - Starting this release, proactive log collection is enabled by default on your device. This feature allows Microsoft to collect logs proactively based on the system health indicators to help efficiently troubleshoot any device issues. For more information, see [Proactive log collection on your device](azure-stack-edge-gpu-proactive-log-collection.md).
The following table provides a summary of known issues carried over from the pre
|**19.**|Kubernetes + update |Earlier software versions such as 2008 releases have a race condition update issue that causes the update to fail with ClusterConnectionException. |Using the newer builds should help avoid this issue. If you still see this issue, the workaround is to retry the upgrade, and it should work.| |**20**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| |**21.**|Kubernetes Dashboard | *Https* endpoint for Kubernetes Dashboard with SSL certificate is not supported. | |
-|**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration&view=aspnetcore-3.1&preserve-view=true#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
+|**22.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" by double underscore. For more information,see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)|
|**23.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources are not deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/use-gitops-connected-cluster.md#additional-parameters). | |**24.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | |**25.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that do not exist on the network.| |
ddos-protection Manage Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/manage-ddos-protection.md
You cannot move a virtual network to another resource group or subscription when
### Enable DDoS protection for an existing virtual network 1. Create a DDoS protection plan by completing the steps in [Create a DDoS protection plan](#create-a-ddos-protection-plan), if you don't have an existing DDoS protection plan.
-2. Select **Create a resource** in the upper left corner of the Azure portal.
-3. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the portal. When the name of the virtual network appears in the search results, select it.
-4. Select **DDoS protection**, under **SETTINGS**.
-5. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
+2. Enter the name of the virtual network that you want to enable DDoS Protection Standard for in the **Search resources, services, and docs box** at the top of the Azure portal. When the name of the virtual network appears in the search results, select it.
+3. Select **DDoS protection**, under **SETTINGS**.
+4. Select **Standard**. Under **DDoS protection plan**, select an existing DDoS protection plan, or the plan you created in step 1, and then select **Save**. The plan you select can be in the same, or different subscription than the virtual network, but both subscriptions must be associated to the same Azure Active Directory tenant.
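+
+If you prefer to script these steps, a rough Azure CLI equivalent is shown below. The resource group, plan, and virtual network names are placeholders for this example:
+
+```azurecli
+# Create a DDoS protection plan (step 1 of the procedure above).
+az network ddos-protection create --resource-group MyResourceGroup \
+    --name MyDdosProtectionPlan
+
+# Enable DDoS Protection Standard on an existing virtual network by
+# associating it with the plan (steps 2-4 of the procedure above).
+az network vnet update --resource-group MyResourceGroup --name MyVnet \
+    --ddos-protection true --ddos-protection-plan MyDdosProtectionPlan
+```
+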
### Enable DDoS protection for all virtual networks
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/alert-engine-messages.md
description: Review Defender for IoT Alert descriptions.
Previously updated : 03/22/2021 Last updated : 03/29/2021
-# Defender for IoT Engine alerts
+# Alert types and descriptions
-This article describes alerts that may be generated from the Defender for IoT engines. Alerts appear in the Alerts window, where you can manage the alert event.
+This article describes all of the alert types that may be generated from the Defender for IoT engines. Alerts appear in the Alerts window, which allows you to manage the alert event.
## Policy engine alerts
defender-for-iot How To Work With Alerts On Your Sensor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/how-to-work-with-alerts-on-your-sensor.md
Title: About sensor alerts
+ Title: Understand sensor alerts
description: Work with alerts to help you enhance the security and operation of your network. Previously updated : 11/30/2020 Last updated : 3/29/2021
Alerts are triggered when sensor engines detect changes in network traffic and b
| Alert type | Description | |-|-|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected.  <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
+| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
+| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
+| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but is not defined as a scanning device. | Tools are available to enable and disable sensor engines. Alerts are not triggered from engines that are disabled. See [Control what traffic is monitored](how-to-control-what-traffic-is-monitored.md).
For example:
- Malware events detected on network devices are reported in Risk Assessment reports. When alerts about malware events are *muted*, affected devices won't be calculated in the Risk Assessment report.
-## See also
+## Next steps
-- [Learning and Smart IT Learning modes](how-to-control-what-traffic-is-monitored.md#learning-and-smart-it-learning-modes)
-- [View information provided in alerts](how-to-view-information-provided-in-alerts.md)
-- [Manage the alert event](how-to-manage-the-alert-event.md)
-- [Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+[Learning and Smart IT Learning modes](how-to-control-what-traffic-is-monitored.md#learning-and-smart-it-learning-modes)
+[View information provided in alerts](how-to-view-information-provided-in-alerts.md)
+[Manage the alert event](how-to-manage-the-alert-event.md)
+[Accelerate alert workflows](how-to-accelerate-alert-incident-response.md)
+[Alert types and descriptions](alert-engine-messages.md)
devtest-labs Automate Add Lab User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/automate-add-lab-user.md
The role ID is defined in the variables section and named `devTestLabUserRoleId`
### Principal ID Principal ID is the object ID of the Active Directory user, group, or service principal that you want to add as a lab user to the lab. The template uses the `ObjectId` as a parameter.
-You can get the ObjectId by using the [Get-AzureRMADUser](/powershell/module/azurerm.resources/get-azurermaduser?view=azurermps-6.13.0), [Get-AzureRMADGroup, or [Get-AzureRMADServicePrincipal](/powershell/module/azurerm.resources/get-azurermadserviceprincipal?view=azurermps-6.13.0) PowerShell cmdlets. These cmdlets return a single or lists of Active Directory objects that have an ID property, which is the object ID that you need. The following example shows you how to get the object ID of a single user at a company.
+You can get the ObjectId by using the [Get-AzureRMADUser](/powershell/module/azurerm.resources/get-azurermaduser?view=azurermps-6.13.0&preserve-view=true), [Get-AzureRMADGroup](/powershell/module/azurerm.resources/get-azurermadgroup?view=azurermps-6.13.0&preserve-view=true), or [Get-AzureRMADServicePrincipal](/powershell/module/azurerm.resources/get-azurermadserviceprincipal?view=azurermps-6.13.0&preserve-view=true) PowerShell cmdlets. These cmdlets return a single Active Directory object or a list of objects that have an ID property, which is the object ID that you need. The following example shows you how to get the object ID of a single user at a company.
```powershell
-$userObjectId = (Get-AzureRmADUser -UserPrincipalName ΓÇÿemail@company.com').Id
+$userObjectId = (Get-AzureRmADUser -UserPrincipalName 'email@company.com').Id
```
-You can also use the Azure Active Directory PowerShell cmdlets that include [Get-MsolUser](/powershell/module/msonline/get-msoluser?view=azureadps-1.0), [Get-MsolGroup](/powershell/module/msonline/get-msolgroup?view=azureadps-1.0), and [Get-MsolServicePrincipal](/powershell/module/msonline/get-msolserviceprincipal?view=azureadps-1.0).
+You can also use the Azure Active Directory PowerShell cmdlets that include [Get-MsolUser](/powershell/module/msonline/get-msoluser?preserve-view=true&view=azureadps-1.0), [Get-MsolGroup](/powershell/module/msonline/get-msolgroup?preserve-view=true&view=azureadps-1.0), and [Get-MsolServicePrincipal](/powershell/module/msonline/get-msolserviceprincipal?preserve-view=true&view=azureadps-1.0).
### Scope Scope specifies the resource or resource group for which the role assignment should apply. For resources, the scope is in the form: `/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{provider-namespace}/{resource-type}/{resource-name}`. The template uses the `subscription().subscriptionId` function to fill in the `subscription-id` part and the `resourceGroup().name` template function to fill in the `resource-group-name` part. Using these functions means that the lab to which you're assigning a role must exist in the current subscription and the same resource group to which the template deployment is made. The last part, `resource-name`, is the name of the lab. This value is received via the template parameter in this example.
First, create a parameter file (for example: azuredeploy.parameters.json) that p
} ```
-Then, use the [New-AzureRmResourceGroupDeployment](/powershell/module/azurerm.resources/new-azurermresourcegroupdeployment?view=azurermps-6.13.0) PowerShell cmdlet to deploy the Resource Manager template. The following example command assigns a person, group, or a service principal to the DevTest Labs User role for a lab.
+Then, use the [New-AzureRmResourceGroupDeployment](/powershell/module/azurerm.resources/new-azurermresourcegroupdeployment) PowerShell cmdlet to deploy the Resource Manager template. The following example command assigns a person, group, or a service principal to the DevTest Labs User role for a lab.
```powershell New-AzureRmResourceGroupDeployment -Name "MyLabResourceGroup-$(New-Guid)" -ResourceGroupName 'MyLabResourceGroup' -TemplateParameterFile .\azuredeploy.parameters.json -TemplateFile .\azuredeploy.json
New-AzureRmResourceGroupDeployment -Name "MyLabResourceGroup-$(New-Guid)" -Resou
``` ## Use Azure PowerShell
-As discussed in the introduction, you create a new Azure role assignment to add a user to the **DevTest Labs User** role for the lab. In PowerShell, you do so by using the [New-AzureRMRoleAssignment](/powershell/module/azurerm.resources/new-azurermroleassignment?view=azurermps-6.13.0) cmdlet. This cmdlet has many optional parameters to allow for flexibility. The `ObjectId`, `SigninName`, or `ServicePrincipalName` can be specified as the object being granted permissions.
+As discussed in the introduction, you create a new Azure role assignment to add a user to the **DevTest Labs User** role for the lab. In PowerShell, you do so by using the [New-AzureRMRoleAssignment](/powershell/module/azurerm.resources/new-azurermroleassignment) cmdlet. This cmdlet has many optional parameters to allow for flexibility. The `ObjectId`, `SigninName`, or `ServicePrincipalName` can be specified as the object being granted permissions.
Here is a sample Azure PowerShell command that adds a user to the DevTest Labs User role in the specified lab.
The object that is being granted access can be specified by the `objectId`, `sig
The following Azure CLI example shows you how to add a person to the DevTest Labs User role for the specified Lab. ```azurecli
-az role assignment create --roleName "DevTest Labs User" --signInName <email@company.com> -ΓÇôresource-name "<Lab Name>" --resource-type ΓÇ£Microsoft.DevTestLab/labs" --resource-group "<Resource Group Name>"
+az role assignment create --roleName "DevTest Labs User" --signInName <email@company.com> --resource-name "<Lab Name>" --resource-type "Microsoft.DevTestLab/labs" --resource-group "<Resource Group Name>"
``` ## Next steps
devtest-labs Image Factory Set Retention Policy Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/devtest-labs/image-factory-set-retention-policy-cleanup.md
Now that you have completed the build definition, queue up a new build to make s
## Summary
-Now you have a running image factory that can generate and distribute custom images to your labs on demand. At this point, itΓÇÖs just a matter of getting your images set up properly and identifying the target labs. As mentioned in the previous article, the **Labs.json** file located in your **Configuration** folder specifies which images should be made available in each of the target labs. As you add other DevTest Labs to your organization, you simply need to add an entry in the Labs.json for the new lab.
+Now you have a running image factory that can generate and distribute custom images to your labs on demand. At this point, it's just a matter of getting your images set up properly and identifying the target labs. As mentioned in the previous article, the **Labs.json** file located in your **Configuration** folder specifies which images should be made available in each of the target labs. As you add other DevTest Labs to your organization, you simply need to add an entry in the Labs.json for the new lab.
-Adding a new image to your factory is also simple. When you want to include a new image in your factory you open the [Azure portal](https://portal.azure.com), navigate to your factory DevTest Labs, select the button to add a VM, and choose the desired marketplace image and artifacts. Instead of selecting the **Create** button to make the new VM, select **View Azure Resource Manager template**ΓÇ¥ and save the template as a .json file somewhere within the **GoldenImages** folder in your repository. The next time you run your image factory, it will create your custom image.
+Adding a new image to your factory is also simple. When you want to include a new image in your factory, open the [Azure portal](https://portal.azure.com), navigate to your factory DevTest Labs, select the button to add a VM, and choose the desired marketplace image and artifacts. Instead of selecting the **Create** button to make the new VM, select **View Azure Resource Manager template** and save the template as a .json file somewhere within the **GoldenImages** folder in your repository. The next time you run your image factory, it will create your custom image.
## Next steps 1. [Schedule your build/release](/azure/devops/pipelines/build/triggers?tabs=designer) to run the image factory periodically. It refreshes your factory-generated images on a regular basis. 2. Make more golden images for your factory. You may also consider [creating artifacts](devtest-lab-artifact-author.md) to script additional pieces of your VM setup tasks and include the artifacts in your factory images.
-4. Create a [separate build/release](/azure/devops/pipelines/overview?view=azure-devops-2019) to run the **DistributeImages** script separately. You can run this script when you make changes to Labs.json and get images copied to target labs without having to recreate all the images again.
+4. Create a [separate build/release](/azure/devops/pipelines/overview) to run the **DistributeImages** script separately. You can run this script when you make changes to Labs.json and get images copied to target labs without having to recreate all the images again.
digital-twins How To Interpret Event Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-interpret-event-data.md
This chart shows the different notification types:
[!INCLUDE [digital-twins-notifications.md](../../includes/digital-twins-notifications.md)]
+## Notification structure
+ In general, notifications are made up of two parts: the header and the body. ### Event notification headers
Telemetry message:
} ```
-Life-cycle notification message:
+Lifecycle notification message:
```json {
Life-cycle notification message:
} ```
-## Message-format detail for different event types
+The following sections go into more detail about the different types of notifications emitted by IoT Hub and Azure Digital Twins (or other Azure IoT services). You will read about the things that trigger each notification type, and the set of fields included with each type of notification body.
+
+## Digital twin change notifications
+
+**Digital twin change notifications** are triggered when a digital twin is being updated, like:
+* When property values or metadata changes.
+* When digital twin or component metadata changes. An example of this scenario is changing the model of a digital twin.
+
+### Properties
+
+Here are the fields in the body of a digital twin change notification.
+
+| Name | Value |
+| | |
+| `id` | Identifier of the notification, such as a UUID or a counter maintained by the service. `source` + `id` is unique for each distinct event. |
+| `source` | Name of the IoT hub or Azure Digital Twins instance, like *myhub.azure-devices.net* or *mydigitaltwins.westus2.azuredigitaltwins.net*. |
+| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
+| `type` | `Microsoft.DigitalTwins.Twin.Update` |
+| `datacontenttype` | `application/json` |
+| `subject` | ID of the digital twin |
+| `time` | Timestamp for when the operation occurred on the digital twin |
+| `traceparent` | A W3C trace context for the event |
+
+### Body details
+
+The body for the `Twin.Update` notification is a JSON Patch document containing the update to the digital twin.
+
+For example, say that a digital twin was updated using the following patch.
++
+The corresponding notification (if synchronously executed by the service, such as Azure Digital Twins updating a digital twin) would have a body like:
-This section goes into more detail about the different types of notifications emitted by IoT Hub and Azure Digital Twins (or other Azure IoT services). You will read about the things that trigger each notification type, and the set of fields included with each type of notification body.
+```json
+{
+ "modelId": "dtmi:example:com:floor4;2",
+ "patch": [
+ {
+ "value": 40,
+ "path": "/Temperature",
+ "op": "replace"
+ },
+ {
+ "value": 30,
+ "path": "/comp1/prop1",
+ "op": "add"
+ }
+ ]
+ }
+```
-### Digital twin life-cycle notifications
+## Digital twin lifecycle notifications
-All [digital twins](concepts-twins-graph.md) emit notifications, regardless of whether they represent [IoT Hub devices in Azure Digital Twins](how-to-ingest-iot-hub-data.md) or not. This is because of **life-cycle notifications**, which are about the digital twin itself.
+All [digital twins](concepts-twins-graph.md) emit notifications, regardless of whether they represent [IoT Hub devices in Azure Digital Twins](how-to-ingest-iot-hub-data.md) or not. This is because of **lifecycle notifications**, which are about the digital twin itself.
-Life-cycle notifications are triggered when:
+Lifecycle notifications are triggered when:
* A digital twin is created * A digital twin is deleted
-#### Properties
+### Properties
-Here are the fields in the body of a life-cycle notification.
+Here are the fields in the body of a lifecycle notification.
| Name | Value | | | |
Here are the fields in the body of a life-cycle notification.
| `time` | Timestamp for when the operation occurred on the twin | | `traceparent` | A W3C trace context for the event |
-#### Body details
+### Body details
The body is the affected digital twin, represented in JSON format. The schema for this is *Digital Twins Resource 7.1*.
Here is another example of a digital twin. This one is based on a [model](concep
} ```
-### Digital twin relationship change notifications
+## Digital twin relationship change notifications
**Relationship change notifications** are triggered when any relationship of a digital twin is created, updated, or deleted.
-#### Properties
+### Properties
Here are the fields in the body of an edge change notification.
Here are the fields in the body of an edge change notification.
| `time` | Timestamp for when the operation occurred on the relationship | | `traceparent` | A W3C trace context for the event |
-#### Body details
+### Body details
The body is the payload of a relationship, also in JSON format. It uses the same format as a `GET` request for a relationship via the [DigitalTwins API](/rest/api/digital-twins/dataplane/twins).
Here is an example of a create or delete relationship notification:
} ```
-### Digital twin change notifications
-
-**Digital twin change notifications** are triggered when a digital twin is being updated, like:
-* When property values or metadata changes.
-* When digital twin or component metadata changes. An example of this scenario is changing the model of a digital twin.
-
-#### Properties
-
-Here are the fields in the body of a digital twin change notification.
-
-| Name | Value |
-| | |
-| `id` | Identifier of the notification, such as a UUID or a counter maintained by the service. `source` + `id` is unique for each distinct event |
-| `source` | Name of the IoT hub or Azure Digital Twins instance, like *myhub.azure-devices.net* or *mydigitaltwins.westus2.azuredigitaltwins.net*
-| `specversion` | *1.0*<br>The message conforms to this version of the [CloudEvents spec](https://github.com/cloudevents/spec). |
-| `type` | `Microsoft.DigitalTwins.Twin.Update` |
-| `datacontenttype` | `application/json` |
-| `subject` | ID of the digital twin |
-| `time` | Timestamp for when the operation occurred on the digital twin |
-| `traceparent` | A W3C trace context for the event |
-
-#### Body details
-
-The body for the `Twin.Update` notification is a JSON Patch document containing the update to the digital twin.
-
-For example, say that a digital twin was updated using the following patch.
--
-The corresponding notification (if synchronously executed by the service, such as Azure Digital Twins updating a digital twin) would have a body like:
-
-```json
-{
- "modelId": "dtmi:example:com:floor4;2",
- "patch": [
- {
- "value": 40,
- "path": "/Temperature",
- "op": "replace"
- },
- {
- "value": 30,
- "path": "/comp1/prop1",
- "op": "add"
- }
- ]
- }
-```
- ## Next steps See how to create endpoints and routes to deliver events:
digital-twins How To Manage Routes Apis Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-apis-cli.md
Once the endpoint with dead-lettering is set up, dead-lettered messages will be
Dead-lettered messages will match the schema of the original event that was intended to be delivered to your original endpoint.
-Here is an example of a dead-letter message for a [twin create notification](how-to-interpret-event-data.md#digital-twin-life-cycle-notifications):
+Here is an example of a dead-letter message for a [twin create notification](how-to-interpret-event-data.md#digital-twin-lifecycle-notifications):
```json {
For more information about using the CLI and what commands are available, see [*
Without filtering, endpoints receive a variety of events from Azure Digital Twins: * Telemetry fired by [digital twins](concepts-twins-graph.md) using the Azure Digital Twins service API * Twin property change notifications, fired on property changes for any twin in the Azure Digital Twins instance
-* Life-cycle events, fired when twins or relationships are created or deleted
+* Lifecycle events, fired when twins or relationships are created or deleted
You can restrict the events being sent by adding a **filter** for an endpoint to your event route. >[!NOTE]
-> Filters are **case-sensitive** and need to match on the payload case (which may not necessarily match the model case).
+> Filters are **case-sensitive** and need to match the payload case.
+>
+> For telemetry filters, this means that the casing needs to match the casing in the telemetry sent by the device, not necessarily the casing defined in the twin's model.
To add a filter, you can use a PUT request to *https://{Your-azure-digital-twins-hostname}/eventRoutes/{event-route-name}?api-version=2020-10-31* with the following body:
digital-twins How To Manage Routes Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes-portal.md
To add an event filter while you are creating an event route, use the _Add an ev
You can either select from some basic common filter options, or use the advanced filter options to write your own custom filters. >[!NOTE]
-> Filters are **case-sensitive** and need to match on the payload case (which may not necessarily match the model case).
+> Filters are **case-sensitive** and need to match the payload case.
+>
+> For telemetry filters, this means that the casing needs to match the casing in the telemetry sent by the device, not necessarily the casing defined in the twin's model.
#### Use the basic filters
digital-twins How To Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-cli.md
For instructions on how to install or update the Azure CLI to a newer version, s
### Get the extension
-You can make sure you have the latest version of the `azure-iot` extension with these steps. You can run these commands in the [Azure Cloud Shell](../cloud-shell/overview.md) or a [local Azure CLI](/cli/azure/install-azure-cli).
+The Azure CLI will automatically prompt you to install the extension on the first use of a command that requires it.
+Alternatively, you can use the following command to install the extension yourself at any time (or update it if it turns out that you already have an older version). The command can be run in either the [Azure Cloud Shell](../cloud-shell/overview.md) or a [local Azure CLI](/cli/azure/install-azure-cli).
+
+```azurecli-interactive
+az extension add --upgrade -n azure-iot
+```
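+
+If you want to confirm which version of the extension is installed, you can query it at any time:
+
+```azurecli-interactive
+az extension show --name azure-iot --query version
+```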
## Next steps
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-azure-sql.md
Select either all databases or specific databases that you want to migrate to Az
1. On the **Select target** screen, provide authentication settings to your Azure SQL Database. ![Select target](media/tutorial-sql-server-to-azure-sql/select-target.png)
+
+ > [!NOTE]
+ > Currently, SQL authentication is the only supported authentication type.
1. Select **Next: Map to target databases** screen, map the source and the target database for migration.
dns Private Dns Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dns/private-dns-migration-guide.md
# Migrating legacy Azure DNS private zones to new resource model
-During public preview, private DNS zones were created using ΓÇ£dnszonesΓÇ¥ resource with ΓÇ£zoneTypeΓÇ¥ property set to ΓÇ£PrivateΓÇ¥. Such zones will not be supported after December 31, 2019 and must be migrated to GA resource model which makes use of ΓÇ£privateDnsZonesΓÇ¥ resource type instead of ΓÇ£dnszonesΓÇ¥. The migration process is simple, and we've provided a PowerShell script to automate this process. This guide provides step by step instruction to migrate your Azure DNS private zones to the new resource model.
+During public preview, private DNS zones were created using "dnszones" resource with "zoneType" property set to "Private". Such zones will not be supported after December 31, 2019 and must be migrated to GA resource model which makes use of "privateDnsZones" resource type instead of "dnszones". The migration process is simple, and we've provided a PowerShell script to automate this process. This guide provides step by step instruction to migrate your Azure DNS private zones to the new resource model.
To find out the dnszones resources that require migration; execute the below command in Azure CLI. ```azurecli
Open an elevated PowerShell window (Administrative mode) and run following comma
install-script PrivateDnsMigrationScript ```
-Enter ΓÇ£AΓÇ¥ when prompted to install the script
+Enter "A" when prompted to install the script
![Installing the script](./media/private-dns-migration-guide/install-migration-script.png)
PrivateDnsMigrationScript.ps1
### Enter the subscription ID and sign-in to Azure
-YouΓÇÖll be prompted to enter subscription ID containing the private DNS zones that you intend to migrate. YouΓÇÖll be asked to sign-in to your Azure account. Complete the sign-in so that script can access the private DNS zone resources in the subscription.
+You'll be prompted to enter subscription ID containing the private DNS zones that you intend to migrate. You'll be asked to sign-in to your Azure account. Complete the sign-in so that script can access the private DNS zone resources in the subscription.
![Login to Azure](./media/private-dns-migration-guide/login-migration-script.png) ### Select the DNS zones you want to migrate
-The script with get the list of all private DNS zones in the subscription and prompt you to confirm which ones you want to migrate. Enter ΓÇ£AΓÇ¥ to migrate all private DNS zones. Once you execute this step, the script will create new private DNS zones using new resource model and copy the data into the new DSN zone. This step will not alter your existing private DNS zones in anyway.
+The script will get the list of all private DNS zones in the subscription and prompt you to confirm which ones you want to migrate. Enter "A" to migrate all private DNS zones. Once you execute this step, the script will create new private DNS zones using the new resource model and copy the data into the new DNS zones. This step will not alter your existing private DNS zones in any way.
![Select DNS zones](./media/private-dns-migration-guide/migratezone-migration-script.png)
The script with get the list of all private DNS zones in the subscription and pr
Once the zones and records have been copied to the new resource model, the script will prompt you to switch the DNS resolution to new DNS zones. This step removes the association between legacy private DNS zones and your virtual networks. When the legacy zone is unlinked from the virtual networks, the new DNS zones created in above step would automatically take over the DNS resolution for those virtual networks.
-Select ΓÇÿAΓÇÖ to switch the DNS resolution for all virtual networks.
+Select 'A' to switch the DNS resolution for all virtual networks.
![Switching Name Resolution](./media/private-dns-migration-guide/switchresolution-migration-script.png)
Before proceeding further, verify that DNS resolution on your DNS zones is worki
![Verify Name Resolution](./media/private-dns-migration-guide/verifyresolution-migration-script.png)
-If you find that DNS queries aren't resolving, wait for a few minutes and retry the queries. If DNS queries are working as expected, enter ΓÇÿYΓÇÖ when script prompts you to remove the virtual network from the private DNS zone.
+If you find that DNS queries aren't resolving, wait for a few minutes and retry the queries. If DNS queries are working as expected, enter 'Y' when script prompts you to remove the virtual network from the private DNS zone.
![Confirm Name Resolution](./media/private-dns-migration-guide/confirmresolution-migration-script.png) >[!IMPORTANT]
->If because of any reason DNS resolution against the migrated zones isn't working as expected, enter ΓÇÿNΓÇÖ in above step and script will switch the DNS resolution back to legacy zones. Create a support ticket and we can help you with migration of your DNS zones.
+>If because of any reason DNS resolution against the migrated zones isn't working as expected, enter 'N' in above step and script will switch the DNS resolution back to legacy zones. Create a support ticket and we can help you with migration of your DNS zones.
## Cleanup
-This step will delete the legacy DNS zones and should be executed only after you've verified that DNS resolution is working as expected. YouΓÇÖll be prompted to delete each private DNS zone. Enter ΓÇÿYΓÇÖ at every prompt after verifying that DNS resolution for that zones is working properly.
+This step will delete the legacy DNS zones and should be executed only after you've verified that DNS resolution is working as expected. You'll be prompted to delete each private DNS zone. Enter 'Y' at every prompt after verifying that DNS resolution for that zone is working properly.
![Clean up](./media/private-dns-migration-guide/cleanup-migration-script.png)
This step will delete the legacy DNS zones and should be executed only after you
If you're using automation including templates, PowerShell scripts or custom code developed using SDK, you must update your automation to use the new resource model for the private DNS zones. Below are the links to new private DNS CLI/PS/SDK documentation. * [Azure DNS private zones REST API](/rest/api/dns/privatedns/privatezones)
-* [Azure DNS private zones CLI](/cli/azure/network/private-dns/link/vnet?view=azure-cli-latest)
+* [Azure DNS private zones CLI](/cli/azure/network/private-dns/link/vnet)
* [Azure DNS private zones PowerShell](/powershell/module/az.privatedns/)
-* [Azure DNS private zones SDK](/dotnet/api/overview/azure/privatedns/management?view=azure-dotnet-preview)
+* [Azure DNS private zones SDK](/dotnet/api/overview/azure/privatedns/management)
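+
+For example, a minimal CLI sketch of the new resource model looks like this; the resource group, zone, and virtual network names are placeholders:
+
+```azurecli
+# Create a private DNS zone using the GA (privateDnsZones) resource model.
+az network private-dns zone create --resource-group MyResourceGroup \
+    --name private.contoso.com
+
+# Link the zone to a virtual network, without auto-registration.
+az network private-dns link vnet create --resource-group MyResourceGroup \
+    --zone-name private.contoso.com --name MyVnetLink \
+    --virtual-network MyVnet --registration-enabled false
+```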
## Need further help
event-grid Event Schema Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema-policy.md
+
+ Title: Azure Policy as an Event Grid source
+description: This article describes how to use Azure Policy as an Event Grid event source. It provides the schema and links to tutorial and how-to articles.
+++ Last updated : 03/29/2021++
+# Azure Policy as an Event Grid source
+
+This article provides the properties and schema for [Azure Policy](../governance/policy/index.yml)
+events. For an introduction to event schemas, see
+[Azure Event Grid event schema](./event-schema.md). It also gives you a list of quick starts and
+tutorials to use Azure Policy as an event source.
+
+## Available event types
+
+Azure Policy emits the following event types:
+
+| Event type | Description |
+| - | -- |
+| Microsoft.PolicyInsights.PolicyStateCreated | Raised when a policy compliance state is created. |
+| Microsoft.PolicyInsights.PolicyStateChanged | Raised when a policy compliance state is changed. |
+| Microsoft.PolicyInsights.PolicyStateDeleted | Raised when a policy compliance state is deleted. |
+
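+Azure Policy events are delivered through an Event Grid system topic scoped to a subscription. As a rough Azure CLI sketch (the resource names and webhook endpoint are placeholders), you can create the system topic and subscribe to the event types above:
+
+```azurecli
+# Create a system topic for Azure Policy compliance state changes.
+az eventgrid system-topic create --name PolicyStateChangesTopic \
+    --resource-group MyResourceGroup --location global \
+    --topic-type Microsoft.PolicyInsights.PolicyStates \
+    --source "/subscriptions/<SubscriptionID>"
+
+# Subscribe to created and changed compliance state events only.
+az eventgrid system-topic event-subscription create --name PolicyStateSubscription \
+    --resource-group MyResourceGroup --system-topic-name PolicyStateChangesTopic \
+    --endpoint https://example.com/api/updates \
+    --included-event-types Microsoft.PolicyInsights.PolicyStateCreated Microsoft.PolicyInsights.PolicyStateChanged
+```
+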
+## Example event
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+The following example shows the schema of a policy state created event:
+
+```json
+[{
+ "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
+ "topic": "/subscriptions/<SubscriptionID>",
+ "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
+ "data": {
+ "timestamp": "2021-03-27T18:37:42.4496956Z",
+ "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
+ "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
+ "policyDefinitionReferenceId": "",
+ "complianceState": "NonCompliant",
+ "subscriptionId": "<subscription-id>",
+ "complianceReasonCode": ""
+ },
+ "eventType": "Microsoft.PolicyInsights.PolicyStateCreated",
+ "eventTime": "2021-03-27T18:37:42.5241536Z",
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+
+The schema for a policy state changed event is similar:
+
+```json
+[{
+ "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
+ "topic": "/subscriptions/<SubscriptionID>",
+ "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
+ "data": {
+ "timestamp": "2021-03-27T18:37:42.4496956Z",
+ "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
+ "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
+ "policyDefinitionReferenceId": "",
+ "complianceState": "NonCompliant",
+ "subscriptionId": "<subscription-id>",
+ "complianceReasonCode": ""
+ },
+ "eventType": "Microsoft.PolicyInsights.PolicyStateChanged",
+ "eventTime": "2021-03-27T18:37:42.5241536Z",
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+# [Cloud event schema](#tab/cloud-event-schema)
+
+The following example shows the schema of a policy state created event:
+
+```json
+[{
+ "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
+ "source": "/subscriptions/<SubscriptionID>",
+ "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
+ "data": {
+ "timestamp": "2021-03-27T18:37:42.4496956Z",
+ "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
+ "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
+ "policyDefinitionReferenceId": "",
+ "complianceState": "NonCompliant",
+ "subscriptionId": "<subscription-id>",
+ "complianceReasonCode": ""
+ },
+ "type": "Microsoft.PolicyInsights.PolicyStateCreated",
+ "time": "2021-03-27T18:37:42.5241536Z",
+ "specversion": "1.0"
+}]
+```
+
+The schema for a policy state changed event is similar:
+
+```json
+[{
+ "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
+ "source": "/subscriptions/<SubscriptionID>",
+ "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
+ "data": {
+ "timestamp": "2021-03-27T18:37:42.4496956Z",
+ "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
+ "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
+ "policyDefinitionReferenceId": "",
+ "complianceState": "NonCompliant",
+ "subscriptionId": "<subscription-id>",
+ "complianceReasonCode": ""
+ },
+ "type": "Microsoft.PolicyInsights.PolicyStateChanged",
+ "time": "2021-03-27T18:37:42.5241536Z",
+ "specversion": "1.0"
+}]
+```
+++
+## Event properties
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
+| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
+| `eventType` | string | One of the registered event types for this event source. |
+| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Azure Policy event data. |
+| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
+| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
+| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
+| `type` | string | One of the registered event types for this event source. |
+| `time` | string | The time the event is generated based on the provider's UTC time. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Azure Policy event data. |
+| `specversion` | string | CloudEvents schema specification version. |
+++
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `timestamp` | string | The time (in UTC) that the resource was scanned by Azure Policy. For ordering events, use this property instead of the top-level `eventTime` or `time` properties. |
+| `policyAssignmentId` | string | The resource ID of the policy assignment. |
+| `policyDefinitionId` | string | The resource ID of the policy definition. |
+| `policyDefinitionReferenceId` | string | The reference ID for the policy definition inside the initiative definition, if the policy assignment is for an initiative. May be empty. |
+| `complianceState` | string | The compliance state of the resource with respect to the policy assignment. |
+| `subscriptionId` | string | The subscription ID of the resource. |
+| `complianceReasonCode` | string | The compliance reason code. May be empty. |
+
+## Next steps
+
+- For a walkthrough routing Azure Policy state change events, see
+ [Use Event Grid for policy state change notifications](../governance/policy/tutorials/route-state-change-events.md).
+- For an overview of integrating Azure Policy with Event Grid, see
+ [React to Azure Policy events by using Event Grid](../governance/policy/concepts/event-overview.md).
+- For an introduction to Azure Event Grid, see [What is Event Grid?](./overview.md)
+- For more information about creating an Azure Event Grid subscription, see
+ [Event Grid subscription schema](./subscription-creation-schema.md).
event-grid Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/event-schema.md
All events have the same following top-level data:
To learn about the properties in the data object, see the event source:
+* [Azure Policy](./event-schema-policy.md)
* [Azure subscriptions (management operations)](event-schema-subscriptions.md) * [Container Registry](event-schema-container-registry.md) * [Blob storage](event-schema-blob-storage.md)
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/overview.md
This article provides an overview of Azure Event Grid. If you want to get starte
:::image type="content" source="./media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="./media/overview/functional-model-big.png":::
-This image shows how Event Grid connects sources and handlers, and isn't a comprehensive list of supported integrations.
+> [!NOTE]
+> This image shows how Event Grid connects sources and handlers, and isn't a comprehensive list of supported integrations. For a list of all supported event sources, see the following section.
## Event sources
Currently, the following Azure services support sending events to Event Grid. Fo
- [Azure Machine Learning](event-schema-machine-learning.md) - [Azure Maps](event-schema-azure-maps.md) - [Azure Media Services](event-schema-media-services.md)
+- [Azure Policy](./event-schema-policy.md)
- [Azure resource groups](event-schema-resource-groups.md) - [Azure Service Bus](event-schema-service-bus.md) - [Azure SignalR](event-schema-azure-signalr.md)
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/system-topics.md
Here is the current list of Azure services that support creation of system topic
- [Azure Machine Learning](event-schema-machine-learning.md) - [Azure Maps](event-schema-azure-maps.md) - [Azure Media Services](event-schema-media-services.md)
+- [Azure Policy](./event-schema-policy.md)
- [Azure resource groups](event-schema-resource-groups.md) - [Azure Service Bus](event-schema-service-bus.md) - [Azure SignalR](event-schema-azure-signalr.md)
expressroute Designing For Disaster Recovery With Expressroute Privatepeering https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md
However, if you load balance traffic across geo-redundant parallel paths, regard
### Different metros
-When using different metros for redundancy, the secondary location should be in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is the chances of a natural disaster causing an outage to both links are much lower but at the cost of increase latency end-to-end.
+When using different metros for redundancy, you should select the secondary location in the same [geo-political region](expressroute-locations-providers.md#locations). To choose a location outside of the geo-political region, you'll need to use Premium SKU for both circuits in the parallel paths. The advantage of this configuration is that the chances of a natural disaster causing an outage to both links are much lower, but at the cost of increased end-to-end latency.
In this article, let's discuss how to address challenges you may face when configuring geo-redundant paths.
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-faqs.md
Previously updated : 03/12/2021 Last updated : 03/29/2021
Yes. ExpressRoute can coexist with site-to-site VPNs. See [Configure ExpressRout
### How do I enable routing between my site-to-site VPN connection and my ExpressRoute?
-If you want to enable routing between your branch connected to Expressoute and your branch connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
+If you want to enable routing between your branch connected to ExpressRoute and your branch connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
### Why is there a public IP address associated with the ExpressRoute gateway on a virtual network?
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/overview.md
Previously updated : 03/16/2021 Last updated : 03/29/2021
Easily route traffic to your secured hub for filtering and logging without the n
This feature is available only with secured virtual hub deployments.
-You can use third-party providers for Branch to Internet (B2I) traffic filtering, side by side with Azure Firewall for Branch to VNet (B2V), VNet to VNet (V2V) and VNet to Internet (V2I). You can also use third-party providers for V2I traffic filtering as long as Azure Firewall isn't required for B2V or V2V.
+You can use third-party providers for Branch to Internet (B2I) traffic filtering, side by side with Azure Firewall for Branch to VNet (B2V), VNet to VNet (V2V) and VNet to Internet (V2I).
## Region availability
firewall-manager Secure Cloud Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/secure-cloud-network-powershell.md
In this tutorial, you learn how to:
- PowerShell 7
- This tutorial requires that you run Azure PowerShell locally on PowerShell 7. To install PowerShell 7, see [Migrating from Windows PowerShell 5.1 to PowerShell 7](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7).
+ This tutorial requires that you run Azure PowerShell locally on PowerShell 7. To install PowerShell 7, see [Migrating from Windows PowerShell 5.1 to PowerShell 7](/powershell/scripting/install/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7&preserve-view=true).
- Az.Network version 3.2.0 If you have Az.Network version 3.4.0 or later, you'll need to downgrade to use some of the commands in this tutorial. You can check the version of your Az.Network module with the command `Get-InstalledModule -Name Az.Network`. To uninstall the Az.Network module, run `Uninstall-Module -name az.network`. To install the Az.Network 3.2.0 module, run `Install-Module az.network -RequiredVersion 3.2.0 -force`.
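For convenience, the version check, uninstall, and install commands mentioned above can be run as a short PowerShell sequence (a sketch; it assumes no other modules depend on the newer Az.Network version):
```powershell
# Check the currently installed Az.Network version
Get-InstalledModule -Name Az.Network

# Remove the newer module, then install version 3.2.0 required by this tutorial
Uninstall-Module -Name Az.Network
Install-Module -Name Az.Network -RequiredVersion 3.2.0 -Force
```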
firewall-manager Trusted Security Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall-manager/trusted-security-partners.md
Previously updated : 12/01/2020 Last updated : 03/29/2021
The supported security partners are **Zscaler**, **[Check Point](check-point-ove
You can use the security partners to filter Internet traffic in following scenarios: -- Virtual Network (VNet) to Internet
+- Virtual Network (VNet)-to-Internet
- Leverage advanced user-aware Internet protection for your cloud workloads running on Azure.
+ Use advanced user-aware Internet protection for your cloud workloads running on Azure.
-- Branch to Internet
+- Branch-to-Internet
- Leverage your Azure connectivity and global distribution to easily add third-party NSaaS filtering for branch to Internet scenarios. You can build your global transit network and security edge using Azure Virtual WAN.
+ Use your Azure connectivity and global distribution to easily add third-party NSaaS filtering for branch to Internet scenarios. You can build your global transit network and security edge using Azure Virtual WAN.
The following scenarios are supported:-- VNet/Branch to Internet via a security partner provider and the other traffic (spoke to spoke, spoke to branch, branch to spoke) via Azure Firewall.-- VNet/Branch to Internet via security partner provider
+- Two security providers in the hub
+
+ VNet/Branch-to-Internet via a security partner provider and the other traffic (spoke-to-spoke, spoke-to-branch, branch-to-spoke) via Azure Firewall.
+- Single provider in the hub
+
+ - All traffic (spoke-to-spoke, spoke-to-branch, branch-to-spoke, VNet/Branch-to-Internet) secured by Azure Firewall
+ - VNet/Branch-to-Internet via security partner provider
## Best practices for Internet traffic filtering in secured virtual hubs
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/front-door-custom-domain-https.md
na ms.devlang: na Previously updated : 10/21/2020 Last updated : 03/26/2021 # As a website owner, I want to enable HTTPS on the custom domain in my Front Door so that my users can use my custom domain to access their content securely.
# Tutorial: Configure HTTPS on a Front Door custom domain
-This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it is sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-Azure Front Door supports HTTPS on a Front Door default hostname, by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain 'www.contoso.com' you will need to additionally enable HTTPS for this frontend host.
+Azure Front Door supports HTTPS on a Front Door default hostname, by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain 'www.contoso.com' you'll need to additionally enable HTTPS for this frontend host.
Some of the key attributes of the custom HTTPS feature are: -- No additional cost: There are no costs for certificate acquisition or renewal and no additional cost for HTTPS traffic.
+- No extra cost: There are no costs for certificate acquisition or renewal and no extra cost for HTTPS traffic.
- Simple enablement: One-click provisioning is available from the [Azure portal](https://portal.azure.com). You can also use REST API or other developer tools to enable the feature. -- Complete certificate management is available: All certificate procurement and management is handled for you. Certificates are automatically provisioned and renewed prior to expiration, which removes the risks of service interruption due to a certificate expiring.
+- Complete certificate management is available: All certificate procurement and management is handled for you. Certificates are automatically provisioned and renewed before expiration, which removes the risks of service interruption because of a certificate expiring.
In this tutorial, you learn how to: > [!div class="checklist"]
To enable HTTPS on a custom domain, follow these steps:
2. In the list of frontend hosts, select the custom domain you want to enable HTTPS for containing your custom domain.
-3. Under the section **Custom domain HTTPS**, click **Enabled**, and select **Front Door managed** as the certificate source.
+3. Under the section **Custom domain HTTPS**, select **Enabled**, and select **Front Door managed** as the certificate source.
-4. Click Save.
+4. Select Save.
-5. Proceed to [Validate the domain](#validate-the-domain).
+5. Continue to [Validate the domain](#validate-the-domain).
> [!NOTE] > For AFD managed certificates, DigiCert's 64 character limit is enforced. Validation will fail if that limit is exceeded.
To enable HTTPS on a custom domain, follow these steps:
### Option 2: Use your own certificate
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few additional steps. When you create your TLS/SSL certificate, you must create it with an allowed certificate authority (CA). Otherwise, if you use a non-allowed CA, your request will be rejected. For a list of allowed CAs, see [Allowed certificate authorities for enabling custom HTTPS on Azure Front Door](front-door-troubleshoot-allowed-ca.md).
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure Front Door uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create it with an allowed certificate authority (CA). Otherwise, if you use a non-allowed CA, your request will be rejected. For a list of allowed CAs, see [Allowed certificate authorities for enabling custom HTTPS on Azure Front Door](front-door-troubleshoot-allowed-ca.md).
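+
+As an illustration, one way to get a certificate into Key Vault before completing the steps below is to import an existing PFX file with Azure PowerShell. The vault name, certificate name, and file path here are placeholders:
+
+```powershell
+# Sketch: import an existing PFX certificate into Key Vault (placeholder names and path).
+$pfxPassword = Read-Host -Prompt 'PFX password' -AsSecureString
+Import-AzKeyVaultCertificate -VaultName 'myKeyVault' -Name 'contoso-com-tls' `
+    -FilePath 'C:\certs\contoso.pfx' -Password $pfxPassword
+```
+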
#### Prepare your Azure Key vault account and certificate
Grant Azure Front Door permission to access the certificates in your Azure Key
3. Under Certificate management type, select **Use my own certificate**.
-4. Azure Front Door requires that the subscription of the Key Vault account is the same as for your Front Door. Select a key vault, certificate (secret), and certificate version.
+4. Azure Front Door requires that the subscription of the Key Vault account is the same as for your Front Door. Select a key vault, secret, and secret version.
Azure Front Door lists the following information: - The key vault accounts for your subscription ID.
- - The certificates (secrets) under the selected key vault.
- - The available certificate versions.
+ - The secrets under the selected key vault.
+ - The available secret versions.
-> [!NOTE]
-> Leaving the certificate version as blank would lead to:
-> - The latest version of the certificate getting selected.
-> - Automatic rotation of certificates to the latest version, when a newer version of the certificate is available in your Key Vault.
+ > [!NOTE]
+ > To have the certificate automatically rotated to the latest version when a newer version is available in your Key Vault, set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
-5. When you use your own certificate, domain validation is not required. Proceed to [Wait for propagation](#wait-for-propagation).
+5. When you use your own certificate, domain validation isn't required. Continue to [Wait for propagation](#wait-for-propagation).
## Validate the domain
-If you already have a custom domain in use that is mapped to your custom endpoint with a CNAME record or you're using your own certificate, proceed to
-[Custom domain is mapped to your Front Door](#custom-domain-is-mapped-to-your-front-door-by-a-cname-record). Otherwise, if the CNAME record entry for your domain no longer exists or it contains the afdverify subdomain, proceed to [Custom domain is not mapped to your Front Door](#custom-domain-is-not-mapped-to-your-front-door).
+If you already have a custom domain in use that gets mapped to your custom endpoint with a CNAME record or you're using your own certificate, continue to [Custom domain is mapped to your Front Door](#custom-domain-is-mapped-to-your-front-door-by-a-cname-record). Otherwise, if the CNAME record entry for your domain no longer exists or it contains the afdverify subdomain, continue to [Custom domain is not mapped to your Front Door](#custom-domain-is-not-mapped-to-your-front-door).
### Custom domain is mapped to your Front Door by a CNAME record
-When you added a custom domain to your Front Door's frontend hosts, you created a CNAME record in the DNS table of your domain registrar to map it to your Front Door's default .azurefd.net hostname. If that CNAME record still exists and does not contain the afdverify subdomain, the DigiCert Certificate Authority uses it to automatically validate ownership of your custom domain.
+When you added a custom domain to your Front Door's frontend hosts, you created a CNAME record in the DNS table of your domain registrar to map it to your Front Door's default .azurefd.net hostname. If that CNAME record still exists and doesn't contain the afdverify subdomain, the DigiCert Certificate Authority uses it to automatically validate ownership of your custom domain.
-If you're using your own certificate, domain validation is not required.
+If you're using your own certificate, domain validation isn't required.
Your CNAME record should be in the following format, where *Name* is your custom domain name and *Value* is your Front Door's default .azurefd.net hostname:
Your CNAME record should be in the following format, where *Name* is your custom
For more information about CNAME records, see [Create the CNAME DNS record](../cdn/cdn-map-content-to-custom-domain.md).
-If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a dedicated certificate for your domain name. DigitCert won't send you a verification email and you won't need to approve your request. The certificate is valid for one year and will be autorenewed before it expires. Proceed to [Wait for propagation](#wait-for-propagation).
+If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a dedicated certificate for your domain name. DigiCert won't send you a verification email and you won't need to approve your request. The certificate is valid for one year and will be autorenewed before it expires. Continue to [Wait for propagation](#wait-for-propagation).
Automatic validation typically takes a few mins. If you don't see your domain validated within an hour, open a support ticket.
Automatic validation typically takes a few mins. If you don't see your domain va
If the CNAME record entry for your endpoint no longer exists or it contains the afdverify subdomain, follow the rest of the instructions in this step.
-After you enable HTTPS on your custom domain, the DigiCert CA validates ownership of your domain by contacting its registrant, according to the domain's [WHOIS](http://whois.domaintools.com/) registrant information. Contact is made via the email address (by default) or the phone number listed in the WHOIS registration. You must complete domain validation before HTTPS will be active on your custom domain. You have six business days to approve the domain. Requests that are not approved within six business days are automatically canceled.
+After you enable HTTPS on your custom domain, the DigiCert CA validates ownership of your domain by contacting its registrant, according to the domain's [WHOIS](http://whois.domaintools.com/) registrant information. Contact is made via the email address (by default) or the phone number listed in the WHOIS registration. You must complete domain validation before HTTPS will be active on your custom domain. You have six business days to approve the domain. Requests that aren't approved within six business days are automatically canceled.
![WHOIS record](./media/front-door-custom-domain-https/whois-record.png)
-DigiCert also sends a verification email to additional email addresses. If the WHOIS registrant information is private, verify that you can approve directly from one of the following addresses:
+DigiCert also sends a verification email to other email addresses. If the WHOIS registrant information is private, verify that you can approve directly from one of the following addresses:
admin@&lt;your-domain-name.com&gt; administrator@&lt;your-domain-name.com&gt;
webmaster@&lt;your-domain-name.com&gt;
hostmaster@&lt;your-domain-name.com&gt; postmaster@&lt;your-domain-name.com&gt;
-You should receive an email in a few minutes, similar to the following example, asking you to approve the request. If you are using a spam filter, add admin@digicert.com to its allow list. If you don't receive an email within 24 hours, contact Microsoft support.
+You should receive an email in a few minutes, similar to the following example, asking you to approve the request. If you're using a spam filter, add admin@digicert.com to its allowlist. If you don't receive an email within 24 hours, contact Microsoft support.
-When you click on the approval link, you are directed to an online approval form. Follow the instructions on the form; you have two verification options:
+When you select the approval link, you're directed to an online approval form. Follow the instructions on the form; you have two verification options:
-- You can approve all future orders placed through the same account for the same root domain; for example, contoso.com. This approach is recommended if you plan to add additional custom domains for the same root domain.
+- You can approve all future orders placed through the same account for the same root domain; for example, contoso.com. This approach is recommended if you plan to add more custom domains for the same root domain.
-- You can approve just the specific host name used in this request. Additional approval is required for subsequent requests.
+- You can approve just the specific host name used in this request. Extra approval is required for subsequent requests.
After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and will be autorenewed before it's expired.
After the domain name is validated, it can take up to 6-8 hours for the custom d
### Operation progress
-The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, additional substep details appear under the step as it progresses. Not all of these substeps will occur. After a step successfully completes, a green check mark appears next to it.
+The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, more substep details appear under the step as it progresses. Not all of these substeps will occur. After a step successfully completes, a green check mark appears next to it.
| Operation step | Operation substep details | | | | | 1 Submitting request | Submitting request | | | Your HTTPS request is being submitted. | | | Your HTTPS request has been submitted successfully. |
-| 2 Domain validation | Domain is automatically validated if it is CNAME mapped to the default .azurefd.net frontend host of your Front Door. Otherwise, a verification request will be sent to the email listed in your domain's registration record (WHOIS registrant). Verify the domain as soon as possible. |
+| 2 Domain validation | Domain is automatically validated if it's CNAME mapped to the default .azurefd.net frontend host of your Front Door. Otherwise, a verification request will be sent to the email listed in your domain's registration record (WHOIS registrant). Verify the domain as soon as possible. |
| | Your domain ownership has been successfully validated. |
-| | Domain ownership validation request expired (customer likely didn't respond within 6 days). HTTPS will not be enabled on your domain. * |
-| | Domain ownership validation request was rejected by the customer. HTTPS will not be enabled on your domain. * |
+| | Domain ownership validation request expired (customer likely didn't respond within 6 days). HTTPS won't be enabled on your domain. * |
+| | Domain ownership validation request was rejected by the customer. HTTPS won't be enabled on your domain. * |
| 3 Certificate provisioning | The certificate authority is currently issuing the certificate needed to enable HTTPS on your domain. | | | The certificate has been issued and is currently being deployed for your Front Door. This process could take up to 1 hour. | | | The certificate has been successfully deployed for your Front Door. |
We encountered an unexpected error while processing your HTTPS request. Please t
3. *What if I don't receive the domain verification email from DigiCert?*
- If you have a CNAME entry for your custom domain that points directly to your endpoint hostname (and you are not using the afdverify subdomain name), you won't receive a domain verification email. Validation occurs automatically. Otherwise, if you don't have a CNAME entry and you haven't received an email within 24 hours, contact Microsoft support.
+ If you have a CNAME entry for your custom domain that points directly to your endpoint hostname (and you aren't using the afdverify subdomain name), you won't receive a domain verification email. Validation occurs automatically. Otherwise, if you don't have a CNAME entry and you haven't received an email within 24 hours, contact Microsoft support.
4. *Is using a SAN certificate less secure than a dedicated certificate?*
We encountered an unexpected error while processing your HTTPS request. Please t
5. *Do I need a Certificate Authority Authorization record with my DNS provider?*
- No, a Certificate Authority Authorization record is not currently required. However, if you do have one, it must include DigiCert as a valid CA.
+ No, a Certificate Authority Authorization record isn't currently required. However, if you do have one, it must include DigiCert as a valid CA.
## Clean up resources
-In the preceding steps, you enabled the HTTPS protocol on your custom domain. If you no longer want to use your custom domain with HTTPS, you can disable HTTPS by performing theses steps:
+In the preceding steps, you enabled the HTTPS protocol on your custom domain. If you no longer want to use your custom domain with HTTPS, you can disable HTTPS by doing these steps:
### Disable the HTTPS feature 1. In the [Azure portal](https://portal.azure.com), browse to your **Azure Front Door** configuration.
-2. In the list of frontend hosts, click the custom domain for which you want to disable HTTPS.
+2. In the list of frontend hosts, select the custom domain for which you want to disable HTTPS.
3. Click **Disabled** to disable HTTPS, then click **Save**. ### Wait for propagation
-After the custom domain HTTPS feature is disabled, it can take up to 6-8 hours for it to take effect. When the process is complete, the custom HTTPS status in the Azure portal is set to **Disabled** and the three operation steps in the custom domain dialog are marked as complete. Your custom domain can no longer use HTTPS.
+After the custom domain HTTPS feature is disabled, it can take up to 6-8 hours for it to take effect. When the process is complete, the custom HTTPS status in the Azure portal gets set to **Disabled** and the three operation steps in the custom domain dialog are marked as complete. Your custom domain can no longer use HTTPS.
#### Operation progress
-The following table shows the operation progress that occurs when you disable HTTPS. After you disable HTTPS, three operation steps appear in the Custom domain dialog. As each step becomes active, additional details appear under the step. After a step successfully completes, a green check mark appears next to it.
+The following table shows the operation progress that occurs when you disable HTTPS. After you disable HTTPS, three operation steps appear in the Custom domain dialog. As each step becomes active, more details appear under the step. After a step successfully completes, a green check mark appears next to it.
| Operation progress | Operation details | | | |
In this tutorial, you learned how to:
* Upload a certificate to Key Vault. * Validate a domain.
-* Enable HTTPS for you custom domain.
+* Enable HTTPS for your custom domain.
-To learn how to set up a geo-filtering policy for you Front Door, continue to the next tutorial.
+To learn how to set up a geo-filtering policy for your Front Door, continue to the next tutorial.
> [!div class="nextstepaction"] > [Set up a geo-filtering policy](front-door-geo-filtering.md)
frontdoor Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/frontdoor/standard-premium/overview.md
With Azure Front Door Standard/Premium, you can transform your global consumer a
Azure Front Door Standard/Premium works at Layer 7 (HTTP/HTTPS layer) using anycast protocol with split TCP and Microsoft's global network to improve global connectivity. Based on your customized routing method using rules set, you can ensure that Azure Front Door will route your client requests to the fastest and most available origin. An application origin is any Internet-facing service hosted inside or outside of Azure. Azure Front Door Standard/Premium provides a range of traffic-routing methods and origin health monitoring options to suit different application needs and automatic failover scenarios. Similar to Traffic Manager, Front Door is resilient to failures, including failures to an entire Azure region.
-Azure Front Door also protect your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in lay 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft’s best-in-practice security at global scale. 
+Azure Front Door also protects your app at the edges with integrated Web Application Firewall protection, Bot Protection, and built-in layer 3/layer 4 distributed denial of service (DDoS) protection. It also secures your private back-ends with private link service. Azure Front Door gives you Microsoft’s best-in-practice security at global scale. 
>[!NOTE] > Azure provides a suite of fully managed load-balancing solutions for your scenarios.
germany Germany Migration Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/germany/germany-migration-databases.md
Title: Migrate Azure database resources, Azure Germany to global Azure description: This article provides information about migrating your Azure database resources from Azure Germany to global Azure Previously updated : 02/16/2021 Last updated : 03/29/2021
For more information the following tables below indicates T-SQL commands for man
|[sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync?view=azuresqldb-current&preserve-view=true) | Causes the application to wait until all committed transactions are replicated and acknowledged by the active secondary database. |
+## Migrate SQL Database long-term retention backups
+
+Migrating a database with geo-replication or a BACPAC file doesn't copy the long-term retention backups that the database might have in Azure Germany. To migrate existing long-term retention backups to the target global Azure region, use the LTR backup copy procedure described below.
+
+>[!Note]
+>LTR backup copy methods documented here can only copy the LTR backups from Azure Germany to global Azure. Copying PITR backups using these methods is not supported.
+>
+
+### Prerequisites
+
+1. The target database in global Azure, to which you're copying the LTR backups, must exist before you start copying the backups. We recommend that you first migrate the source database using [active geo-replication](#migrate-sql-database-using-active-geo-replication) and then initiate the LTR backup copy. This ensures that the backups are copied to the correct destination database. This step isn't required if you're copying LTR backups of a dropped database; in that case, a dummy DatabaseID is created in the target region.
+2. Install the [Az.Sql PowerShell module (3.0.0-preview)](https://www.powershellgallery.com/packages/Az.Sql/3.0.0-preview), as shown in the sketch after this list.
+3. Before you begin, ensure that the required [Azure RBAC roles](https://docs.microsoft.com/azure/azure-sql/database/long-term-backup-retention-configure#azure-roles-to-manage-long-term-retention) are granted at either **subscription** or **resource group** scope. To access LTR backups that belong to a dropped server, the permission must be granted in the subscription scope of that server.
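+
+The following sketch illustrates prerequisites 2 and 3 in PowerShell. The sign-in name and role shown are placeholders; grant whichever role the linked article calls for in your scenario.
+
+```powershell
+# Install the preview Az.Sql module that contains the LTR backup copy cmdlets (assumed version).
+Install-Module -Name Az.Sql -RequiredVersion 3.0.0-preview -AllowPrerelease -Force
+
+# Example role assignment at subscription scope (sign-in name and role are placeholders).
+New-AzRoleAssignment -SignInName 'user@contoso.com' `
+    -RoleDefinitionName 'SQL Server Contributor' `
+    -Scope '/subscriptions/<SubscriptionID>'
+```
+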
### Limitations
For more information the following tables below indicates T-SQL commands for man
- Creation of a geo secondary must be initiated from the Azure Germany region. - Customers can migrate databases out of Azure Germany only to global Azure. Currently no other cross-cloud migration is supported. - Azure AD users in Azure Germany user databases are migrated but are not available in the new Azure AD tenant where the migrated database resides. To enable these users, they must be manually dropped and recreated using the current Azure AD users available in the new Azure AD tenant where the newly migrated database resides. ++
+### Copy long-term retention backups using PowerShell
+
+The **Copy-AzSqlDatabaseLongTermRetentionBackup** PowerShell cmdlet copies long-term retention backups from Azure Germany to global Azure regions.
+
+1. **Copy LTR backup using backup name**
+The following example shows how to copy an LTR backup from Azure Germany to a global Azure region by using the backup name.
+
+```powershell
+# Source database and target database info
+$location = "<location>"
+$sourceRGName = "<source resourcegroup name>"
+$sourceServerName = "<source server name>"
+$sourceDatabaseName = "<source database name>"
+$backupName = "<backup name>"
+$targetDatabaseName = "<target database name>"
+$targetSubscriptionId = "<target subscriptionID>"
+$targetRGName = "<target resource group name>"
+$targetServerFQDN = "<targetservername.database.windows.net>"
+
+Copy-AzSqlDatabaseLongTermRetentionBackup `
+    -Location $location `
+    -ResourceGroupName $sourceRGName `
+    -ServerName $sourceServerName `
+    -DatabaseName $sourceDatabaseName `
+    -BackupName $backupName `
+    -TargetDatabaseName $targetDatabaseName `
+    -TargetSubscriptionId $targetSubscriptionId `
+    -TargetResourceGroupName $targetRGName `
+    -TargetServerFullyQualifiedDomainName $targetServerFQDN
+```
+
+2. **Copy LTR backup using backup resource ID**
+The following example shows how to copy an LTR backup from Azure Germany to a global Azure region by using a backup resource ID.
+
+```powershell
+# Source Database and target database info
+$resourceID = "/subscriptions/000000000-eeee-4444-9999-e9999a5555ab/resourceGroups/mysourcergname/providers/Microsoft.Sql/locations/germanynorth/longTermRetentionServers/mysourceserver/longTermRetentionDatabases/mysourcedb/longTermRetentionBackups/0e848ed8-c229-444c-a3ba-75ac0507dd31;132567894740000000"
+$targetDatabaseName = "<target database name>"
+$targetSubscriptionId = "<target subscriptionID>"
+$targetRGName = "<target resource group name>"
+$targetServerFQDN = "<targetservername.database.windows.net>"
++
+Copy-AzSqlDatabaseLongTermRetentionBackup `
+    -ResourceId $resourceID `
+    -TargetDatabaseName $targetDatabaseName `
+    -TargetSubscriptionId $targetSubscriptionId `
+    -TargetResourceGroupName $targetRGName `
+    -TargetServerFullyQualifiedDomainName $targetServerFQDN
+```
+
+3. **Copy LTR backup of a deleted database**
+The following example shows how to copy an LTR backup of a deleted (dropped) database from Azure Germany to global Azure. If the target database doesn't exist, a dummy database ID is created.
+
+```powershell
+# Source Database and target database info
+$targetDatabaseName = "<target database name>"
+$targetSubscriptionId = "<target subscriptionID>"
+$targetRGName = "<target resource group name>"
+$targetServerFQDN = "<targetservername.database.windows.net>"
+
+Copy-AzSqlDatabaseLongTermRetentionBackup `
+    -TargetDatabaseName $targetDatabaseName `
+    -TargetSubscriptionId $targetSubscriptionId `
+    -TargetResourceGroupName $targetRGName `
+    -TargetServerFullyQualifiedDomainName $targetServerFQDN
+```
+
+4. **Get details of an LTR backup copy operation**
+The following command gets the details of a given LTR backup. The output shows whether a copy operation is in progress or whether a backup storage redundancy change has been requested for the database, and returns the OperationId for the copy operation.
+
+```powershell
+Get-AzSqlDatabaseLongTermRetentionBackup `
+    -BackupName $backupName `
+    -DatabaseName $sourceDatabaseName `
+    -Location $location
+```
+
+5. **Cancel an LTR backup copy operation**
+The following command cancels an LTR backup copy operation. The OperationId can be obtained by using the Get-AzSqlDatabaseLongTermRetentionBackup command shown above.
+
+```powershell
+Stop-AzSqlDatabaseActivity `
+    -ResourceGroupName $sourceRGName `
+    -ServerName $sourceServerName `
+    -DatabaseName $sourceDatabaseName `
+    -OperationId aaaaaa-9999-4444-8888-44444ccccfff
+```
+
+### Limitations
+ - [Point-in-time restore (PITR)](../azure-sql/database/recovery-using-backups.md#point-in-time-restore) backups are only taken on the primary database, this is by design. When migrating databases from Azure Germany using Geo-DR, PITR backups will start happening on the new primary after failover. However, the existing PITR backups (on the previous primary in Azure Germany) will not be migrated. If you need PITR backups to support any point-in-time restore scenarios, you need to restore the database from PITR backups in Azure Germany and then migrate the recovered database to global Azure. -- Long-term retention policies are not migrated with the database. If you have a [long-term retention (LTR)](../azure-sql/database/long-term-retention-overview.md) policy on your database in Azure Germany, you need to manually copy and recreate the LTR policy on the new database after migrating. Functionality to migrate LTR backups from Azure Germany to global Azure are not currently available.
+- Long-term retention policies are not migrated with the database. If you have a [long-term retention (LTR)](../azure-sql/database/long-term-retention-overview.md) policy on your database in Azure Germany, you need to manually copy and recreate the LTR policy on the new database after migrating.
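+
+For example, an LTR policy can be recreated on the migrated database with **Set-AzSqlDatabaseBackupLongTermRetentionPolicy**; the server, database, and retention values below are placeholders:
+
+```powershell
+# Recreate the long-term retention policy on the migrated database in global Azure (placeholder values).
+Set-AzSqlDatabaseBackupLongTermRetentionPolicy `
+    -ResourceGroupName '<target resource group name>' `
+    -ServerName '<target server name>' `
+    -DatabaseName '<target database name>' `
+    -WeeklyRetention P12W `
+    -MonthlyRetention P12M `
+    -YearlyRetention P5Y `
+    -WeekOfYear 1
+```
+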
### Requesting access
Learn about tools, techniques, and recommendations for migrating resources in th
- [Identity](./germany-migration-identity.md) - [Security](./germany-migration-security.md) - [Management tools](./germany-migration-management-tools.md)-- [Media](./germany-migration-media.md)
+- [Media](./germany-migration-media.md)
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/event-overview.md
+
+ Title: Reacting to Azure Policy state change events
+description: Use Azure Event Grid to subscribe to Azure Policy events, which allow applications to react to state changes without the need for complicated code.
Last updated : 03/29/2021++
+# Reacting to Azure Policy state change events
+
+Azure Policy events enable applications to react to state changes. This integration is done without
+the need for complicated code or expensive and inefficient polling services. Instead, events are
+pushed through [Azure Event Grid](../../../event-grid/index.yml) to subscribers such as
+[Azure Functions](../../../azure-functions/index.yml),
+[Azure Logic Apps](../../../logic-apps/index.yml), or even to your own custom HTTP listener.
+Critically, you only pay for what you use.
+
+Azure Policy events are sent to Azure Event Grid, which provides reliable delivery services to
+your applications through rich retry policies and dead-letter delivery. To learn more, see
+[Event Grid message delivery and retry](../../../event-grid/delivery-and-retry.md).
+
+The common Azure Policy event scenario is tracking when the compliance state of a resource changes
+during policy evaluation. Event-based architecture is an efficient way to react to these changes
+instead of scanning the compliance state of resources on a fixed schedule.
+
+> [!NOTE]
+> Azure Policy state change events are sent to Event Grid after an
+> [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) finishes resource
+> evaluation.
+
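+For example, one of the evaluation triggers is an on-demand compliance scan, which you can start with the Az.PolicyInsights PowerShell module (the resource group name is a placeholder):
+
+```powershell
+# Start an on-demand compliance evaluation for a resource group (placeholder name).
+# State change events are emitted to Event Grid after the evaluation finishes.
+Start-AzPolicyComplianceScan -ResourceGroupName 'myResourceGroup'
+```
+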
+See
+[Route policy state change events to Event Grid with Azure CLI](../tutorials/route-state-change-events.md)
+for a full tutorial.
++
+## Available Azure Policy events
+
+Event Grid uses [event subscriptions](../../../event-grid/concepts.md#event-subscriptions) to route
+event messages to subscribers. Azure Policy event subscriptions can include three types of events:
+
+| Event type | Description |
+| - | -- |
+| Microsoft.PolicyInsights.PolicyStateCreated | Raised when a policy compliance state is created. |
+| Microsoft.PolicyInsights.PolicyStateChanged | Raised when a policy compliance state is changed. |
+| Microsoft.PolicyInsights.PolicyStateDeleted | Raised when a policy compliance state is deleted. |
+
+## Event schema
+
+Azure Policy events contain all the information you need to respond to changes in your data. You can
+identify an Azure Policy event when the `eventType` property starts with "Microsoft.PolicyInsights".
+Additional information about the usage of Event Grid event properties is documented in
+[Event Grid event schema](../../../event-grid/event-schema.md).
+
+| Property | Type | Description |
+| -- | - | -- |
+| `id` | string | Unique identifier for the event. |
+| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
+| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
+| `data` | object | Azure Policy event data. |
+| `data.timestamp` | string | The time (in UTC) that the resource was scanned by Azure Policy. For ordering events, use this property instead of the top-level `eventTime` or `time` properties. |
+| `data.policyAssignmentId` | string | The resource ID of the policy assignment. |
+| `data.policyDefinitionId` | string | The resource ID of the policy definition. |
+| `data.policyDefinitionReferenceId` | string | The reference ID for the policy definition inside the initiative definition, if the policy assignment is for an initiative. May be empty. |
+| `data.complianceState` | string | The compliance state of the resource with respect to the policy assignment. |
+| `data.subscriptionId` | string | The subscription ID of the resource. |
+| `data.complianceReasonCode` | string | The compliance reason code. May be empty. |
+| `eventType` | string | One of the registered event types for this event source. |
+| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
+| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
+| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
+
+Here's an example of a policy state change event:
+
+```json
+[{
+ "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
+ "topic": "/subscriptions/<SubscriptionID>",
+ "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
+ "data": {
+ "timestamp": "2021-03-27T18:37:42.4496956Z",
+ "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
+ "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
+ "policyDefinitionReferenceId": "",
+ "complianceState": "NonCompliant",
+ "subscriptionId": "<subscription-id>",
+ "complianceReasonCode": ""
+ },
+ "eventType": "Microsoft.PolicyInsights.PolicyStateChanged",
+ "eventTime": "2021-03-27T18:37:42.5241536Z",
+ "dataVersion": "1",
+ "metadataVersion": "1"
+}]
+```
+
+For more information, see [Azure Policy events schema](../../../event-grid/event-schema-policy.md).
+
+## Practices for consuming events
+
+Applications that handle Azure Policy events should follow these recommended practices:
+
+> [!div class="checklist"]
+> - Multiple subscriptions can be configured to route events to the same event handler, so don't
+> assume events are from a particular source. Instead, check the message to identify the
+> policy assignment, policy definition, and resource that the state change event is for.
+> - Check the `eventType` and don't assume that all events you receive are the types you expect.
+> - Use `data.timestamp` to determine the order of the events in Azure Policy, instead of the
+> top-level `eventTime` or `time` properties.
+> - Use the subject field to access the resource that had a policy state change.
+
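+As a minimal sketch of these practices, an Event Grid-triggered Azure Function written in PowerShell might look like the following; the binding parameter names are assumptions based on a standard Event Grid trigger:
+
+```powershell
+# Sketch of an Event Grid-triggered Azure Function (PowerShell).
+param($eventGridEvent, $TriggerMetadata)
+
+# Don't assume every event is the type you expect.
+if ($eventGridEvent.eventType -notlike 'Microsoft.PolicyInsights.PolicyState*') {
+    Write-Warning "Ignoring unexpected event type: $($eventGridEvent.eventType)"
+    return
+}
+
+$data = $eventGridEvent.data
+
+# Use data.timestamp (not eventTime) to order events, and the subject to identify the resource.
+Write-Host ('{0} | {1} | {2}' -f $data.timestamp, $eventGridEvent.subject, $data.complianceState)
+```
+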
+## Next steps
+
+Learn more about Event Grid and give Azure Policy state change events a try:
+
+- [Route policy state change events to Event Grid with Azure CLI](../tutorials/route-state-change-events.md)
+- [Azure Policy schema details for Event Grid](../../../event-grid/event-schema-policy.md)
+- [About Event Grid](../../../event-grid/overview.md)
governance Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration.md
The Guest Configuration extension writes log files to the following locations:
Windows: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`
-Linux: `/var/lib/GuestConfig/gc_agent_logs/gc_agent.log`
+Linux
+
+- Azure VM: `/var/lib/GuestConfig/gc_agent_logs/gc_agent.log`
+- Azure Arc enabled server: `/var/lib/GuestConfig/arc_policy_logs/gc_agent.log`
### Collecting logs remotely
egrep -B $linesToIncludeBeforeMatch -A $linesToIncludeAfterMatch 'DSCEngine|DSCM
The Guest Configuration client downloads content packages to a machine and extracts the contents. To verify what content has been downloaded and stored, view the folder locations given below.
-Windows: `c:\programdata\guestconfig\configurations`
+Windows: `c:\programdata\guestconfig\configuration`
-Linux: `/var/lib/guestconfig/configurations`
+Linux: `/var/lib/GuestConfig/Configuration`
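+
+For example, on a Windows machine you can list the extracted content with a quick PowerShell command (using the path above):
+
+```powershell
+# List the content packages the Guest Configuration agent has downloaded and extracted (Windows).
+Get-ChildItem -Path 'C:\ProgramData\GuestConfig\Configuration' -Recurse | Select-Object -ExpandProperty FullName
+```
+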
## Guest Configuration samples
governance Route State Change Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/tutorials/route-state-change-events.md
+
+ Title: "Tutorial: Route policy state change events to Event Grid with Azure CLI"
+description: In this tutorial, you configure Event Grid to listen for policy state change events and call a webhook.
Last updated : 03/29/2021+++
+# Tutorial: Route policy state change events to Event Grid with Azure CLI
+
+In this article, you learn how to set up Azure Policy event subscriptions to send policy state
+change events to a web endpoint. Azure Policy users can subscribe to events emitted when policy
+state changes occur on resources. These events can trigger webhooks,
+[Azure Functions](../../../azure-functions/index.yml),
+[Azure Storage Queues](../../../storage/queues/index.yml), or any other event handler that is
+supported by [Azure Event Grid](../../../event-grid/index.yml). Typically, you send events to an
+endpoint that processes the event data and takes actions. However, to simplify this tutorial, you
+send the events to a web app that collects and displays the messages.
+
+## Prerequisites
+
+- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
+ account before you begin.
+
+- This tutorial requires that you run Azure CLI version 2.0.76 or later. To find the version, run
+ `az --version`. If you need to install or upgrade, see
+ [Install Azure CLI](/cli/azure/install-azure-cli).
+
+- Even if you've previously used Azure Policy or Event Grid, re-register their respective resource
+ providers:
+
+ ```azurecli-interactive
+ # Log in first with az login if you're not using Cloud Shell
+
+ # Provider register: Register the Azure Policy provider
+ az provider register --namespace Microsoft.PolicyInsights
+
+ # Provider register: Register the Azure Event Grid provider
+ az provider register --namespace Microsoft.EventGrid
+ ```
++
+## Create a resource group
+
+Event Grid topics are Azure resources, and must be placed in an Azure resource group. The resource
+group is a logical collection into which Azure resources are deployed and managed.
+
+Create a resource group with the [az group create](/cli/azure/group) command.
+
+The following example creates a resource group named `<resource_group_name>` in the _westus_
+location. Replace `<resource_group_name>` with a unique name for your resource group.
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+az group create --name <resource_group_name> --location westus
+```
+
+## Create an Event Grid system topic
+
+Now that we have a resource group, we create a
+[system topic](../../../event-grid/system-topics.md). A system topic in Event Grid represents one or
+more events published by Azure services such as Azure Policy and Azure Event Hubs. This system topic
+uses the `Microsoft.PolicyInsights.PolicyStates` topic type for Azure Policy state changes. Replace
+`<SubscriptionID>` in the **scope** parameter with the ID of your subscription and
+`<resource_group_name>` in **resource-group** parameter with the previously created resource group.
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+az eventgrid system-topic create --name PolicyStateChanges --location global --topic-type Microsoft.PolicyInsights.PolicyStates --source "/subscriptions/<SubscriptionID>" --resource-group "<resource_group_name>"
+```
+
+## Create a message endpoint
+
+Before subscribing to the topic, let's create the endpoint for the event message. Typically, the
+endpoint takes actions based on the event data. To simplify this tutorial, you deploy a
+[pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the
+event messages. The deployed solution includes an App Service plan, an App Service web app, and
+source code from GitHub.
+
+Replace `<your-site-name>` with a unique name for your web app. The web app name must be unique
+because it's part of the DNS entry.
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+az deployment group create \
+ --resource-group <resource_group_name> \
+ --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
+ --parameters siteName=<your-site-name> hostingPlanName=viewerhost
+```
+
+The deployment may take a few minutes to complete. After the deployment has succeeded, view your web
+app to make sure it's running. In a web browser, navigate to:
+`https://<your-site-name>.azurewebsites.net`
+
+You should see the site with no messages currently displayed.
+
+## Subscribe to the system topic
+
+You subscribe to a topic to tell Event Grid which events you want to track and where to send those
+events. The following example subscribes to the system topic you created, and passes the URL from
+your web app as the endpoint to receive event notifications. Replace `<event_subscription_name>`
+with a name for your event subscription. For `<resource_group_name>` and `<your-site-name>`, use the
+values you created earlier.
+
+The endpoint for your web app must include the suffix `/api/updates/`.
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+# Create the subscription
+az eventgrid system-topic event-subscription create \
+ --name <event_subscription_name> \
+ --resource-group <resource_group_name> \
+ --system-topic-name PolicyStateChanges \
+ --endpoint https://<your-site-name>.azurewebsites.net/api/updates
+```
+
+View your web app again, and notice that a subscription validation event has been sent to it. Select
+the eye icon to expand the event data. Event Grid sends the validation event so the endpoint can
+verify that it wants to receive event data. The web app includes code to validate the subscription.
++
+## Create a policy assignment
+
+In this tutorial, you create a policy assignment and assign the **Require a tag on resource
+groups** definition. This policy definition identifies resource groups that are missing the tag
+configured during policy assignment.
+
+Run the following command to create a policy assignment scoped to the resource group you created to
+hold the event grid topic:
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+az policy assignment create --name 'requiredtags-events' --display-name 'Require tag on RG' --scope '<ResourceGroupScope>' --policy '<policy definition ID>' --params '{ "tagName": { "value": "EventTest" } }'
+```
+
+The preceding command uses the following information:
+
+- **Name** - The actual name of the assignment. For this example, _requiredtags-events_ was used.
+- **DisplayName** - Display name for the policy assignment. In this case, you're using _Require tag
+ on RG_.
+- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets
+ enforced on. It could range from a subscription to resource groups. Be sure to replace
+ `<ResourceGroupScope>` with the scope of your resource group. The format for a resource group scope is
+ `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>`.
+- **Policy** - The policy definition ID that you're using to create the assignment. In
+ this case, it's the ID of policy definition _Require a tag on resource groups_. To get the policy
+ definition ID, run this command:
+ `az policy definition list --query "[?displayName=='Require a tag on resource groups']"`
+
+After creating the policy assignment, wait for a **Microsoft.PolicyInsights.PolicyStateCreated**
+event notification to appear in the web app. The resource group we created shows a
+`data.complianceState` value of _NonCompliant_ to start.
++
+> [!NOTE]
+> If the resource group inherits other policy assignments from the subscription or management group
+> hierarchy, events for each are also displayed. Confirm the event is for the assignment in this
+> tutorial by evaluating the `data.policyDefinitionId` property.
+
+## Trigger a change on the resource group
+
+To make the resource group compliant, a tag with the name **EventTest** is required. Add the tag to
+the resource group with the following command replacing `<SubscriptionID>` with your subscription ID
+and `<ResourceGroup>` with the name of the resource group:
+
+```azurecli-interactive
+# Log in first with az login if you're not using Cloud Shell
+
+az tag create --resource-id '/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>' --tags EventTest=true
+```
+
+After adding the required tag to the resource group, wait for a
+**Microsoft.PolicyInsights.PolicyStateChanged** event notification to appear in the web app. Expand
+the event and the `data.complianceState` value now shows _Compliant_.
+
+## Clean up resources
+
+If you plan to continue working with this web app and Azure Policy event subscription, don't clean
+up the resources created in this article. If you don't plan to continue, use the following command
+to delete the resources you created in this article.
+
+Replace `<resource_group_name>` with the resource group you created above.
+
+```azurecli-interactive
+az group delete --name <resource_group_name>
+```
+
+## Next steps
+
+Now that you know how to create topics and event subscriptions for Azure Policy, learn more about
+policy state change events and what Event Grid can help you do:
+
+- [Reacting to Azure Policy state change events](../concepts/event-overview.md)
+- [Azure Policy schema details for Event Grid](../../../event-grid/event-schema-policy.md)
+- [About Event Grid](../../../event-grid/overview.md)
hdinsight Apache Hadoop Mahout Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md
Learn how to use the [Apache Mahout](https://mahout.apache.org) machine learning
Mahout is a [machine learning](https://en.wikipedia.org/wiki/Machine_learning) library for Apache Hadoop. Mahout contains algorithms for processing data, such as filtering, classification, and clustering. In this article, you use a recommendation engine to generate movie recommendations that are based on movies your friends have seen.
-For more information about the version of Mahout in HDInsight, see [HDInsight versions and Apache Hadoop components](../hdinsight-component-versioning.md).
+Mahout is available in HDInsight 3.6, and is not available in HDInsight 4.0. For more information about the version of Mahout in HDInsight, see [HDInsight 3.6 component versions](../hdinsight-36-component-versioning.md).
## Prerequisites
hdinsight Hdinsight 36 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-36-component-versioning.md
In this article, you learn about the Apache Hadoop environment components and ve
## Support for HDInsight 3.6
+Starting July 1st, 2021, Microsoft will offer Basic support for certain HDInsight 3.6 cluster types.
The table below lists the support timeframe for HDInsight 3.6 cluster types.
-| Cluster Type | Framework version | Current support expiration | New support expiration date |
-||-|--|--|
-| HDInsight 3.6 Hadoop | 2.7.3 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 Spark | 2.3 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 Spark | 2.2 | Retired on June 30, 2020 | |
-| HDInsight 3.6 Spark | 2.1 | Retired on June 30, 2020 | |
-| HDInsight 3.6 Kafka | 1.1 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 Kafka | 1.0 | Retired on June 30, 2020. | |
-| HDInsight 3.6 HBase | 1.1 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 Interactive Query | 2.1 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 Storm | 1.1 | Dec 31, 2020 | June 30, 2021 |
-| HDInsight 3.6 ML Services | 9.3 | Dec 31, 2020 | Dec 31, 2020 |
+| Cluster Type | Framework version | Standard support expiration | Basic support expiration date | Retirement date |
+||-|--||--|
+| HDInsight 3.6 Hadoop | 2.7.3 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 Spark | 2.3 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 Kafka | 1.1 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 HBase | 1.1 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 Interactive Query | 2.1 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 Storm | 1.1 | June 30, 2021 | April 3, 2022 | April 4, 2022 |
+| HDInsight 3.6 ML Services | 9.3 | - | - | December 31, 2020 |
+| HDInsight 3.6 Spark | 2.2 | - | - | June 30, 2020 |
+| HDInsight 3.6 Spark | 2.1 | - | - | June 30, 2020 |
+| HDInsight 3.6 Kafka | 1.0 | - | - | June 30, 2020 |
+ ## Apache components available with HDInsight version 3.6 The OSS component versions associated with HDInsight 3.6 are listed in the following table.
The OSS component versions associated with HDInsight 3.6 are listed in the follo
- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) - [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
+- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-component-versioning.md
HDInsight bundles Apache Hadoop environment components and HDInsight platform in
This table lists the versions of HDInsight that are available in the Azure portal and other deployment methods like PowerShell, CLI and the .NET SDK.
-| HDInsight version | VM OS | Release date | Support expiration date | Retirement date | High availability |
-| | | | | | |
-| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 16.0.4 LTS |September 24, 2018 | | |Yes |
-| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | *June 30, 2021 |June 30, 2021 |Yes |
+| HDInsight version | VM OS | Release date| Support type | Support expiration date | Retirement date | High availability |
+| | | | | | | |
+| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 16.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | | |Yes |
+| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expiration - June 30, 2021 <br> Basic support expiration - April 3, 2022 |April 4, 2022 |Yes |
-*We are extending the support timeframe for certain HDInsight 3.6 cluster types. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md).
+*Starting July 1st, 2021, Microsoft will offer Basic support for certain HDI 3.6 cluster types. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md).
## Release notes
For additional release notes on the latest versions of HDInsight, see [HDInsight
## Support options for HDInsight versions
-HDInsight offers Standard support which is defined as a time period that an HDInsight version is supported by Microsoft Customer Service and Support.
+Support is defined as a time period during which an HDInsight version is supported by Microsoft Customer Service and Support. HDInsight offers two types of support:
+- **Standard support** is a time period in which Microsoft provides updates and support on HDInsight clusters.
+ We recommend building solutions using the most recent fully supported version.
+- **Basic support** is a time period in which Microsoft will provide limited servicing to the HDInsight resource provider. HDInsight images and open-source software (OSS) components will not be serviced. Only critical security fixes will be patched on HDInsight clusters.
+ Microsoft does not encourage creating new clusters or building any fresh solutions when a version is in Basic support. We recommend migrating existing clusters to the most recent fully supported version.
-**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. And it's no longer available through the Azure portal for cluster creation.
+**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version, and the version may no longer be available through the Azure portal for cluster creation.
**Retirement** means that existing clusters of an HDInsight version continue to run as is. New clusters of this version can't be created through any means, which includes the CLI and SDKs. Other control plane features, such as manual scaling and autoscaling, are not guaranteed to work after retirement date. Support isn't available for retired versions.
HDInsight offers Standard support which is defined as a time period that an HDIn
- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) - [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
+- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
healthcare-apis Access Fhir Postman Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/access-fhir-postman-tutorial.md
Previously updated : 03/16/2021 Last updated : 03/26/2021 # Access Azure API for FHIR with Postman
A client application can access the Azure API for FHIR through a [REST API](http
- A FHIR endpoint in Azure.
- To deploy the Azure API for FHIR (a managed service), you can use the [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
+ To deploy the Azure API for FHIR (a managed service), you can use the [Azure portal](fhir-paas-portal-quickstart.md), [PowerShell](fhir-paas-powershell-quickstart.md), or [Azure CLI](fhir-paas-cli-quickstart.md).
+ - A registered [confidential client application](register-confidential-azure-ad-client-app.md) to access the FHIR service. - You have granted permissions to the confidential client application, for example, "FHIR Data Contributor", to access the FHIR service. For more information, see [Configure Azure RBAC for FHIR](./configure-azure-rbac.md). - Postman installed.
- For more information about Postman, see [Get Started with Postman](https://www.getpostman.com).
+ For more information about Postman, see [Get Started with Postman](https://www.getpostman.com).
## FHIR server and authentication details
If you attempt to access restricted resources, an "Authentication failed" respon
![Authentication Failed](media/tutorial-postman/postman-authentication-failed.png) ## Obtaining an access token
+Select **Get New Access Token**.
+ To obtain a valid access token, select **Authorization** and select **OAuth 2.0** from the **TYPE** drop-down menu. ![Set OAuth 2.0](media/tutorial-postman/postman-select-oauth2.png)
In the **Get New Access Token** dialog box, enter the following details:
|--|--|-| | Token Name | MYTOKEN | A name you choose | | Grant Type | Authorization Code | |
-| Callback URL | `https://www.getpostman.com/oauth2/callback` | |
+| Callback URL | `https://www.getpostman.com/oauth2/callback` | |
| Auth URL | `https://login.microsoftonline.com/{TENANT-ID}/oauth2/authorize?resource=<audience>` | `audience` is `https://MYACCOUNT.azurehealthcareapis.com` for Azure API for FHIR |
-| Access Token URL | `https://login.microsoftonline.com/{TENANT ID}/oauth2/token` | |
-| Client ID | `XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX` | Application ID |
-| Client Secret | `XXXXXXXX` | Secret client key |
-| Scope | `<Leave Blank>` |
-| State | `1234` | |
+| Access Token URL | `https://login.microsoftonline.com/{TENANT ID}/oauth2/token` | |
+| Client ID | `XXXXXXXX-XXX-XXXX-XXXX-XXXXXXXXXXXX` | Application ID |
+| Client Secret | `XXXXXXXX` | Secret client key |
+| Scope | `<Leave Blank>` | Scope is not used; therefore, it can be left blank. |
+| State | `1234` | [State](https://learning.postman.com/docs/sending-requests/authorization/) is an opaque value to prevent cross-site request forgery. It is optional and can take an arbitrary value such as '1234'. |
| Client Authentication | Send client credentials in body | Select **Request Token** to be guided through the Azure Active Directory Authentication flow, and a token will be returned to Postman. If an authentication failure occurs, refer to the Postman Console for more details. **Note**: On the ribbon, select **View**, and then select **Show Postman Console**. The keyboard shortcut to the Postman Console is **Alt-Ctrl+C**.
Select **Send** to determine that the patient is successfully created.
![Screenshot that shows that the patient is successfully created.](media/tutorial-postman/postman-patient-created.png)
-If you repeat the patient search, you should now see the patient record:
+If you repeat the patient search, you should now see the patient record.
![Patient Created](media/tutorial-postman/postman-patient-found.png)
healthcare-apis Register Confidential Azure Ad Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/register-confidential-azure-ad-client-app.md
Previously updated : 02/07/2019 Last updated : 03/16/2021 # Register a confidential client application in Azure Active Directory
-In this tutorial, you'll learn how to register a confidential client application in Azure Active Directory.
+In this tutorial, you'll learn how to register a confidential client application in Azure Active Directory (Azure AD).
-A client application registration is an Azure Active Directory representation of an application that can be used to authenticate on behalf of a user and request access to [resource applications](register-resource-azure-ad-client-app.md). A confidential client application is an application that can be trusted to hold a secret and present that secret when requesting access tokens. Examples of confidential applications are server-side applications.
+A client application registration is an Azure AD representation of an application that can be used to authenticate on behalf of a user and request access to [resource applications](register-resource-azure-ad-client-app.md). A confidential client application is an application that can be trusted to hold a secret and present that secret when requesting access tokens. Examples of confidential applications are server-side applications.
-To register a new confidential application in the portal, follow these steps.
+To register a new confidential client application, refer to the steps below.
## Register a new application
-1. In the [Azure portal](https://portal.azure.com), navigate to **Azure Active Directory**.
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
-1. Select **App registrations**.
+1. Select **App registrations**.
![Azure portal. New App Registration.](media/how-to-aad/portal-aad-new-app-registration.png) 1. Select **New registration**.
-1. Give the application a display name.
+1. Give the application a user-facing display name.
-1. Provide a reply URL. These details can be changed later, but if you know the reply URL of your application, enter it now.
+1. For **Supported account types**, select who can use the application or access the API.
+
+1. (Optional) Provide a **Redirect URI**. These details can be changed later, but if you know the reply URL of your application, enter it now.
![New Confidential Client App Registration.](media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT.png)+ 1. Select **Register**. ## API permissions
-Now that you have registered your application, you'll need to select which API permissions this application should be able to request on behalf of users:
+Now that you've registered your application, you must select which API permissions this application should request on behalf of users.
1. Select **API permissions**.
Now that you have registered your application, you'll need to select which API p
1. Select **Add a permission**.
- If you are using the Azure API for FHIR, you will add a permission to the Azure Healthcare APIs by searching for **Azure Healthcare APIs** under **APIs my organization uses**. You will only be able to find this if you have already [deployed the Azure API for FHIR](fhir-paas-powershell-quickstart.md).
+ If you're using the Azure API for FHIR, you'll add a permission to the Azure Healthcare APIs by searching for **Azure Healthcare API** under **APIs my organization uses**. The search result for Azure Healthcare API will only return if you've already [deployed the Azure API for FHIR](fhir-paas-powershell-quickstart.md).
- If you are referencing a different Resource Application, select your [FHIR API Resource Application Registration](register-resource-azure-ad-client-app.md) that you created previously under **My APIs**.
+ If you're referencing a different resource application, select your [FHIR API Resource Application Registration](register-resource-azure-ad-client-app.md) that you created previously under **My APIs**.
:::image type="content" source="media/conf-client-app/confidential-client-org-api.png" alt-text="Confidential client. My Org APIs" lightbox="media/conf-client-app/confidential-app-org-api-expanded.png":::
-3. Select scopes (permissions) that the confidential application should be able to ask for on behalf of a user:
+1. Select scopes (permissions) that the confidential client application will ask for on behalf of a user. Select **user_impersonation**, and then select **Add permissions**.
:::image type="content" source="media/conf-client-app/confidential-client-add-permission.png" alt-text="Confidential client. Delegated Permissions"::: + ## Application secret
-1. Select **Certificates & secrets**.
-1. Select **New client secret**.
+1. Select **Certificates & secrets**, and then select **New client secret**.
![Confidential client. Application Secret](media/how-to-aad/portal-aad-register-new-app-registration-CONF-CLIENT-SECRET.png)
-2. Provide a description and duration of the secret (either 1 year, 2 years or never).
+1. Enter a **Description** for the client secret. Select the **Expires** duration (In 1 year, In 2 years, or Never), and then click **Add**.
-3. Once generated, it will be displayed in the portal only once. Make a note of it and store it securely.
+ ![Add a client secret](media/how-to-aad/add-a-client-secret.png)
+1. After the client secret string is created, copy its **Value** and **ID**, and store them in a secure location of your choice.
+
+ :::image type="content" source="media/how-to-aad/client-secret-string-password.png" alt-text="Client secret string.":::
+
+> [!NOTE]
+>The client secret string is visible only once in the Azure portal. When you navigate away from the Certificates & secrets web page and then return to it, the Value string becomes masked. It's important to make a copy of your client secret string immediately after it is generated. If you don't have a backup copy of your client secret, you must repeat the above steps to regenerate it.
+
## Next steps
-In this article, you've learned how to register a confidential client application in Azure Active Directory. Next you can access your FHIR server using Postman
+In this article, you learned how to register a confidential client application in Azure AD, add API permissions to the Azure Healthcare API, and create an application secret. Next, you can learn how to access your FHIR server by using Postman.
>[!div class="nextstepaction"] >[Access Azure API for FHIR with Postman](access-fhir-postman-tutorial.md)
iot-edge How To Publish Subscribe https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-publish-subscribe.md
Below is an example of an IoT Edge MQTT bridge configuration that republishes al
}, { "direction": "out",
- "topic": "",
- "inPrefix": "/local/telemetry",
- "outPrefix": "/remote/messages"
+ "topic": "#",
+ "inPrefix": "/local/telemetry/",
+ "outPrefix": "/remote/messages/"
} ] }]
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Device Update for IoT Hub supports two forms of updates - image-based and pack
Package-based updates are targeted updates that alter only a specific component or application on the device. This leads to lower consumption of bandwidth and helps reduce the time to download and install the update. Package updates typically allow for less downtime of devices when applying an update and avoid the overhead of creating images.
-This tutorial walks you through the steps to complete an end-to-end package-based update through Device Update for IoT Hub. For this tutorial we use an Ubuntu Server 18.04 x64 running Azure IoT Edge and the Device Update package agent. The tutorial demonstrates updating a sample package, but using similar steps you could update other packages such as Azure IoT Edge or the container engine it uses.
+This end-to-end tutorial walks you through updating Azure IoT Edge on Ubuntu Server 18.04 x64 by using the Device Update package agent. Although the tutorial demonstrates updating IoT Edge, using similar steps you could update other packages such as the container engine it uses.
The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. Complete this introduction to an end-to-end update process, then choose your preferred form of updating and OS platform to dive into the details.
iot-hub-device-update Import Update https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-update.md
Learn how to import a new update into Device Update for IoT Hub. If you haven't
2. Create a text file named **AduUpdate.psm1** in the directory where your update image file or APT Manifest file is located. Then open the [AduUpdate.psm1](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) PowerShell cmdlet, copy the contents to your text file, and then save the text file.
-3. In PowerShell, navigate to the directory where you created your PowerShell cmdlet from step 2. Then run:
+3. In PowerShell, navigate to the directory where you created your PowerShell cmdlet from step 2. Use the Copy option below and then paste into PowerShell to run the commands:
```powershell Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process
Learn how to import a new update into Device Update for IoT Hub. If you haven't
| updateName | Identifier for a class of updates. The class can be anything you choose. It will often be a device or model name. | updateVersion | Version number distinguishing this update from others that have the same Provider and Name. Does not have match a version of an individual software component on the device (but can if you choose). | updateType | <ul><li>Specify `microsoft/swupdate:1` for image update</li><li>Specify `microsoft/apt:1` for package update</li></ul>
- | installedCriteria | <ul><li>Specify value of SWVersion for `microsoft/swupdate:1` update type</li><li>Specify recommended value for `microsoft/apt:1` update type.
+ | installedCriteria | <ul><li>Specify value of SWVersion for `microsoft/swupdate:1` update type</li><li>Specify **name-version**, where _name_ is the name of the APT Manifest and _version_ is the version of the APT Manifest. For example, contoso-iot-edge-1.0.0.0.
| updateFilePath(s) | Path to the update file(s) on your computer
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/how-to-export-certificate.md
$pfxFileByte = $x509Cert.Export($type, $password)
``` This command exports the entire chain of certificates with the private key (that is, the same as when it was imported). The certificate is password protected.
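If you then want to save the exported bytes to disk, a minimal follow-up sketch (the output path is a placeholder) is:

```powershell
# Write the exported, password-protected PFX bytes to a file.
[System.IO.File]::WriteAllBytes("C:\temp\ExportedCertificate.pfx", $pfxFileByte)
```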
-For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate?view=azps-4.4.0).
+For more information on the **Get-AzKeyVaultCertificate** command and parameters, see [Get-AzKeyVaultCertificate - Example 2](/powershell/module/az.keyvault/Get-AzKeyVaultCertificate).
# [Portal](#tab/azure-portal)
key-vault Import Cert Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/import-cert-faqs.md
This article answers frequently asked questions about importing Azure Key Vault
For a certificate import operation, Azure Key Vault accepts two certificate file formats: PEM and PFX. Although there are PEM files with only the public portion, Key Vault requires and accepts only a PEM or PFX file with a private key. For more information, see [Import a certificate to Key Vault](./tutorial-import-certificate.md#import-a-certificate-to-key-vault). ### After I import a password-protected certificate to Key Vault and then download it, why can't I see the password that's associated with it?
-
+
After a certificate is imported and protected in Key Vault, its associated password isn't saved. The password is required only once during the import operation. This is by design, but you can always get the certificate as a secret and convert it from Base64 to PFX by adding the password through [Azure PowerShell](https://social.technet.microsoft.com/wiki/contents/articles/37431.exporting-azure-app-service-certificates.aspx). ### How can I resolve a "Bad parameter" error? What are the supported certificate formats for importing to Key Vault?
For more information, see [certificate requirements](./certificate-scenarios.md#
No, it isn't possible to perform certificate operations by using an Azure Resource Manager (ARM) template. A recommended workaround would be to use the certificate import methods in the Azure API, the Azure CLI, or PowerShell. If you have an existing certificate, you can import it as a secret. ### When I import a certificate via the Azure portal, I get a "Something went wrong" error. How can I investigate further?
-
-To view a more descriptive error, import the certificate file by using [the Azure CLI](/cli/azure/keyvault/certificate#az-keyvault-certificate-import) or [PowerShell](/powershell/module/azurerm.keyvault/import-azurekeyvaultcertificate?view=azurermps-6.13.0).
+
+To view a more descriptive error, import the certificate file by using [the Azure CLI](/cli/azure/keyvault/certificate#az-keyvault-certificate-import) or [PowerShell](/powershell/module/azurerm.keyvault/import-azurekeyvaultcertificate).
### How can I resolve "Error type: Access denied or user is unauthorized to import certificate"?
-
+
The import operation requires that you grant the user permissions to import the certificate under the access policies. To do so, go to your key vault, select **Access policies** > **Add Access Policy** > **Select Certificate Permissions** > **Principal**, search for the user, and then add the user's email address. For more information about certificate-related access policies, see [About Azure Key Vault certificates](./about-certificates.md#certificate-access-control). ### How can I resolve "Error type: Conflict when creating a certificate"?
-
+
Each certificate name must be unique. A certificate with the same name might be in a soft-deleted state. Also, according to the [composition of a certificate](./about-certificates.md#composition-of-a-certificate), when a new certificate is created, it creates an addressable secret with the same name. So if there's another key or secret in the key vault with the same name as the one you're trying to specify for your certificate, the certificate creation will fail, and you'll need to either remove that key or secret or use a different name for your certificate. For more information, see [Get Deleted Certificate operation](/rest/api/keyvault/getdeletedcertificate/getdeletedcertificate). ### Why am I getting "Error type: char length is too long"?
-This error could be caused by either of two reasons:
+This error could be caused by either of two reasons:
* The certificate subject name is limited to 200 characters. * The certificate password is limited to 200 characters.
This error could be caused by either of two reasons:
Please verify that the content in the PEM file uses UNIX-style line separators `(\n)` ### Can I import an expired certificate to Azure Key Vault?
-
+
No, expired PFX certificates can't be imported to Key Vault. ### How can I convert my certificate to the proper format?
You can ask your CA to provide the certificate in the required format. There are
Yes, you can import certificates from any CA, but your key vault won't be able to renew them automatically. You can set reminders to be notified about the certificate expiration. ### If I import a certificate from a partner CA, will the autorenewal feature still work?
-Yes. After you've uploaded the certificate, be sure to specify the autorotation in the certificate’s issuance policy. Your settings will remain in effect until the next cycle or certificate version is released.
+Yes. After you've uploaded the certificate, be sure to specify the autorotation in the certificate's issuance policy. Your settings will remain in effect until the next cycle or certificate version is released.
### Why can't I see the App Service certificate that I imported to Key Vault? If you've imported the certificate successfully, you should be able to confirm it by going to the **Secrets** pane.
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/overview-renew-certificate.md
This article discusses how to renew your Azure Key Vault certificates.
To get notified about certificate life events, you would need to add certificate contact. Certificate contacts contain contact information to send notifications triggered by certificate lifetime events. The contacts information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault. ### Steps to set certificate notifications:
-First, add a certificate contact to your key vault. You can add using Azure portal or PowerShell cmdlet [`Add-AzureKeyVaultCertificateContact`](/powershell/module/azurerm.keyvault/add-azurekeyvaultcertificatecontact?view=azurermps-6.13.0).
+First, add a certificate contact to your key vault. You can add it by using the Azure portal or the PowerShell cmdlet [`Add-AzureKeyVaultCertificateContact`](/powershell/module/azurerm.keyvault/add-azurekeyvaultcertificatecontact).
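For example, a minimal sketch (the vault name and email address are placeholders):

```powershell
# Add a certificate contact that receives notifications for certificate lifetime events.
Add-AzureKeyVaultCertificateContact -VaultName "ContosoVault" -EmailAddress "admin@contoso.com"
```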
Second, configure when you want to be notified about the certificate expiration. To configure the lifecycle attributes of the certificate, see [Configure certificate autorotation in Key Vault](./tutorial-rotate-certificates.md#update-lifecycle-attributes-of-a-stored-certificate).
If a certificate's policy is set to auto renewal, then a notification is sent on
When a certificate policy that is set to be manually renewed (email only), a notification is sent when it's time to renew the certificate. In Key Vault, there are three categories of certificates:-- Certificates that are created with an integrated certificate authority (CA), such as DigiCert or GlobalSign-- Certificates that are created with a nonintegrated CA-- Self-signed certificates
+- Certificates that are created with an integrated certificate authority (CA), such as DigiCert or GlobalSign
+- Certificates that are created with a nonintegrated CA
+- Self-signed certificates
## Renew an integrated CA certificate Azure Key Vault handles the end-to-end maintenance of certificates that are issued by trusted Microsoft certificate authorities DigiCert and GlobalSign. Learn how to [integrate a trusted CA with Key Vault](./how-to-integrate-certificate-authority.md).
Create a certificate with a validity of **1 month**, and then set the lifetime a
Yes, the tags are replicated after autorenewal. ## Next steps
-* [Integrate Key Vault with DigiCert certificate authority](how-to-integrate-certificate-authority.md)
-* [Tutorial: Configure certificate autorotation in Key Vault](tutorial-rotate-certificates.md)
+* [Integrate Key Vault with DigiCert certificate authority](how-to-integrate-certificate-authority.md)
+* [Tutorial: Configure certificate autorotation in Key Vault](tutorial-rotate-certificates.md)
key-vault Tutorial Import Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/certificates/tutorial-import-certificate.md
Import-AzureKeyVaultCertificate
[<CommonParameters>] ```
-Learn more about the [parameters](/powershell/module/azurerm.keyvault/import-azurekeyvaultcertificate?view=azurermps-6.13.0).
+Learn more about the [parameters](/powershell/module/azurerm.keyvault/import-azurekeyvaultcertificate).
## Clean up resources
key-vault Alert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/alert.md
This document will cover the following topics:
+ How to configure metrics and create a dashboard + How to create alerts at specified thresholds
-Azure Monitor for Key Vault combines both logs and metrics to provide a global monitoring solution. [Learn more about Azure Monitor for Key Vualt here](https://docs.microsoft.com/azure/azure-monitor/insights/key-vault-insights-overview#introduction-to-azure-monitor-for-key-vault)
+Azure Monitor for Key Vault combines both logs and metrics to provide a global monitoring solution. [Learn more about Azure Monitor for Key Vault here](https://docs.microsoft.com/azure/azure-monitor/insights/key-vault-insights-overview#introduction-to-azure-monitor-for-key-vault)
## Basic Key Vault metrics to monitor
key-vault Move Region https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/move-region.md
Before you begin, keep in mind the following concepts:
## Option 1: Use the key vault backup and restore commands
-You can back up each individual secret, key, and certificate in your vault by using the backup command. Your secrets are downloaded as an encrypted blob. You can then restore the blob into your new key vault. For a list of commands, see [Azure Key Vault commands](/powershell/module/azurerm.keyvault/?view=azurermps-6.13.0#key_vault).
+You can back up each individual secret, key, and certificate in your vault by using the backup command. Your secrets are downloaded as an encrypted blob. You can then restore the blob into your new key vault. For a list of commands, see [Azure Key Vault commands](/powershell/module/azurerm.keyvault#key_vault).
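As a hedged illustration using the Az PowerShell module, and subject to the limitations noted below (the vault names, secret name, and file path are placeholders):

```powershell
# Back up a secret from the source vault, then restore the encrypted blob into the target vault.
Backup-AzKeyVaultSecret -VaultName "SourceVault" -Name "MySecret" -OutputFile "C:\backup\MySecret.blob"
Restore-AzKeyVaultSecret -VaultName "TargetVault" -InputFile "C:\backup\MySecret.blob"
```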
Using the backup and restore commands has two limitations:
lighthouse Cross Tenant Management Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/concepts/cross-tenant-management-experience.md
Title: Cross-tenant management experiences description: Azure delegated resource management enables a cross-tenant management experience. Previously updated : 03/23/2021 Last updated : 03/29/2021
Most tasks and services can be performed on delegated resources across managed t
[Azure Cost Management + Billing](../../cost-management-billing/index.yml): -- From the managing tenant, CSP partners can view, manage, and analyze pre-tax consumption costs (not inclusive of purchases) for customers who are under the Azure plan. The cost will be based on retail rates and the Azure role-based access control (Azure RBAC) access that the partner has for the customer's subscription.
+- From the managing tenant, CSP partners can view, manage, and analyze pre-tax consumption costs (not inclusive of purchases) for customers who are under the Azure plan. The cost will be based on retail rates and the Azure role-based access control (Azure RBAC) access that the partner has for the customer's subscription. Currently, you can view consumption costs at retail rates for each individual customer subscription based on Azure RBAC access.
[Azure Key Vault](../../key-vault/general/index.yml):
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/onboard-customer.md
Title: Onboard a customer to Azure Lighthouse description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed through your own tenant using Azure delegated resource management. Previously updated : 02/16/2021 Last updated : 03/29/2021
The template you choose will depend on whether you are onboarding an entire subs
|Subscription (when using an offer published to Azure Marketplace) |[marketplaceDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.json) |[marketplaceDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.parameters.json) | > [!TIP]
-> While you can't onboard an entire management group in one deployment, you can [deploy a policy at the management group level](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-delegate-management-groups). The policy will check if each subscription within the management group has been delegated to the specified managing tenant, and if not, will create the assignment based on the values you provide.
+> While you can't onboard an entire management group in one deployment, you can [deploy a policy at the management group level](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/policy-delegate-management-groups). The policy uses the [deployIfNotExists effect](../../governance/policy/concepts/effects.md#deployifnotexists) to check if each subscription within the management group has been delegated to the specified managing tenant, and if not, will create the assignment based on the values you provide. You will then have access to all of the subscriptions in the management group, although you'll have to work on them as individual subscriptions (rather than taking actions on the management group as a whole).
The following example shows a modified **delegatedResourceManagement.parameters.json** file that can be used to onboard a subscription. The resource group parameter files (located in the [rg-delegated-resource-management](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/rg-delegated-resource-management) folder) are similar, but also include an **rgName** parameter to identify the specific resource group(s) to be onboarded.
logic-apps Logic Apps Enterprise Integration B2b Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-enterprise-integration-b2b-business-continuity.md
To avoid sending duplicate generated control numbers
to partners during a disaster event, the recommendation is to increment the control numbers in the secondary region agreements by using
-[PowerShell cmdlets](/powershell/module/azurerm.logicapp/set-azurermintegrationaccountgeneratedicn?view=azurermps-6.13.0).
+[PowerShell cmdlets](/powershell/module/azurerm.logicapp/set-azurermintegrationaccountgeneratedicn).
## Fall back to a primary region post-disaster event
To fall back to a primary region when it is available, follow these steps:
2. Increment the generated control numbers for all the primary region agreements by using
-[PowerShell cmdlets](/powershell/module/azurerm.logicapp/set-azurermintegrationaccountgeneratedicn?view=azurermps-6.13.0).
+[PowerShell cmdlets](/powershell/module/azurerm.logicapp/set-azurermintegrationaccountgeneratedicn).
3. Direct traffic from the secondary region to the primary region.
logic-apps Logic Apps Using Sap Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-using-sap-connector.md
This article explains how you can access your SAP resources from Logic Apps usin
* An [SAP application server](https://wiki.scn.sap.com/wiki/display/ABAP/ABAP+Application+Server) or [SAP message server](https://help.sap.com/saphelp_nw70/helpdata/en/40/c235c15ab7468bb31599cc759179ef/frameset.htm) that you want to access from Logic Apps. For information about what SAP servers and SAP actions you can use with the connector, see [SAP compatibility](#sap-compatibility).
+ * You must configure your SAP server to allow the use of RFC. For more information, see the following SAP note: [460089 - Minimum authorization profiles for external RFC programs](https://launchpad.support.sap.com/#/notes/460089).
+ * Message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the SAP action you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](#send-flat-file-idocs). * If you want to use the **When a message is received from SAP** trigger, you must also do the following:
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-auto-train.md
Title: Create automated ML experiments
-description: Learn how to define data sources, computes and configuration settings for your automated machine learning experiments.
+description: Learn how to define data sources, computes, and configuration settings for your automated machine learning experiments.
Learn about the specific definitions of these metrics in [Understand automated m
### Primary metrics for classification scenarios
-Post thresholded metrics, like `accuracy`, `average_precision_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets which are very small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated machine learning completes, you can choose the winning model based on the metric best suited to your business needs.
+Post-thresholded metrics, like `accuracy`, `average_precision_score_weighted`, `norm_macro_recall`, and `precision_score_weighted` may not optimize as well for datasets that are small, have very large class skew (class imbalance), or when the expected metric value is very close to 0.0 or 1.0. In those cases, `AUC_weighted` can be a better choice for the primary metric. After automated machine learning completes, you can choose the winning model based on the metric best suited to your business needs.
| Metric | Example use case(s) | | | - |
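For instance, a minimal sketch of choosing `AUC_weighted` as the primary metric in an `AutoMLConfig` (the training data, label column, and compute target below are placeholders):

```python
from azureml.train.automl import AutoMLConfig

# Favor AUC_weighted for a small or heavily imbalanced classification dataset.
automl_config = AutoMLConfig(task='classification',
                             primary_metric='AUC_weighted',
                             training_data=training_data,
                             label_column_name='label',
                             compute_target=compute_target,
                             n_cross_validations=5)
```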
Configure `max_concurrent_iterations` in your `AutoMLConfig` object. If it is n
## Explore models and metrics
-You can view your training results in a widget or inline if you are in a notebook. See [Track and evaluate models](how-to-monitor-view-training-logs.md#monitor-automated-machine-learning-runs) for more details.
+Automated ML offers options for you to monitor and evaluate your training results.
-See [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md) for definitions and examples of the performance charts and metrics provided for each run.
+* You can view your training results in a widget or inline if you are in a notebook. See [how to monitor automated ML runs](how-to-monitor-view-training-logs.md#monitor-automated-machine-learning-runs) for more details.
-To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
+* For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md) .
+* To get a featurization summary and understand what features were added to a particular model, see [Featurization transparency](how-to-configure-auto-features.md#featurization-transparency).
+
+You can view the hyperparameters, the scaling and normalization techniques, and the algorithm applied to a specific automated ML run with the following custom code solution.
+
+The following defines the custom method, `print_model()`, which prints the hyperparameters of each step of the automated ML training pipeline.
+
+```python
+from pprint import pprint
+
+def print_model(model, prefix=""):
+ for step in model.steps:
+ print(prefix + step[0])
+ if hasattr(step[1], 'estimators') and hasattr(step[1], 'weights'):
+ pprint({'estimators': list(e[0] for e in step[1].estimators), 'weights': step[1].weights})
+ print()
+ for estimator in step[1].estimators:
+ print_model(estimator[1], estimator[0]+ ' - ')
+ elif hasattr(step[1], '_base_learners') and hasattr(step[1], '_meta_learner'):
+ print("\nMeta Learner")
+ pprint(step[1]._meta_learner)
+ print()
+ for estimator in step[1]._base_learners:
+ print_model(estimator[1], estimator[0]+ ' - ')
+ else:
+ pprint(step[1].get_params())
+ print()
+```
+
+For a local or remote run that was just submitted and trained from within the same experiment notebook, you can pass in the best model using the `get_output()` method.
+
+```python
+best_run, fitted_model = run.get_output()
+print(best_run)
+
+print_model(fitted_model)
+```
+
+The following output indicates that:
+
+* The StandardScalerWrapper technique was used to scale and normalize the data prior to training.
+
+* The XGBoostClassifier algorithm was identified as the best run, and also shows the hyperparameter values.
+
+```python
+StandardScalerWrapper
+{'class_name': 'StandardScaler',
+ 'copy': True,
+ 'module_name': 'sklearn.preprocessing.data',
+ 'with_mean': False,
+ 'with_std': False}
+
+XGBoostClassifier
+{'base_score': 0.5,
+ 'booster': 'gbtree',
+ 'colsample_bylevel': 1,
+ 'colsample_bynode': 1,
+ 'colsample_bytree': 0.6,
+ 'eta': 0.4,
+ 'gamma': 0,
+ 'learning_rate': 0.1,
+ 'max_delta_step': 0,
+ 'max_depth': 8,
+ 'max_leaves': 0,
+ 'min_child_weight': 1,
+ 'missing': nan,
+ 'n_estimators': 400,
+ 'n_jobs': 1,
+ 'nthread': None,
+ 'objective': 'multi:softprob',
+ 'random_state': 0,
+ 'reg_alpha': 0,
+ 'reg_lambda': 1.6666666666666667,
+ 'scale_pos_weight': 1,
+ 'seed': None,
+ 'silent': None,
+ 'subsample': 0.8,
+ 'tree_method': 'auto',
+ 'verbose': -10,
+ 'verbosity': 1}
+```
+
+For an existing run from a different experiment in your workspace, obtain the specific run ID you want to explore and pass that into the `print_model()` method.
+
+```python
+from azureml.train.automl.run import AutoMLRun
+
+ws = Workspace.from_config()
+experiment = ws.experiments['automl-classification']
+automl_run = AutoMLRun(experiment, run_id = 'AutoML_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx')
+
+automl_run
+best_run, model_from_aml = automl_run.get_output()
+
+print_model(model_from_aml)
+
+```
> [!NOTE] > The algorithms automated ML employs have inherent randomness that can cause slight variation in a recommended model's final metrics score, like accuracy. Automated ML also performs operations on data such as train-test split, train-validation split or cross-validation when necessary. So if you run an experiment with the same configuration settings and primary metric multiple times, you'll likely see variation in each experiment's final metrics score due to these factors. ## Register and deploy models+ You can register a model, so you can come back to it for later use. To register a model from an automated ML run, use the [`register_model()`](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun#register-model-model-name-none--description-none--tags-none--iteration-none--metric-none-) method.
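For example, a minimal sketch that registers the best model from the run used earlier (the model name and description are placeholders):

```python
# Register the best model from the automated ML run so it can be deployed later.
model = run.register_model(model_name='automl-best-model',
                           description='Best model from an automated ML run')
print(model.name, model.version)
```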
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-tensorboard.md
In this article, you learn how to view your experiment runs and metrics in TensorBoard using [the `tensorboard` package](/python/api/azureml-tensorboard/) in the main Azure Machine Learning SDK. Once you've inspected your experiment runs, you can better tune and retrain your machine learning models.
-[TensorBoard](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard?view=azure-ml-py) is a suite of web applications for inspecting and understanding your experiment structure and performance.
+[TensorBoard](/python/api/azureml-tensorboard/azureml.tensorboard.tensorboard) is a suite of web applications for inspecting and understanding your experiment structure and performance.
How you launch TensorBoard with Azure Machine Learning experiments depends on the type of experiment: + If your experiment natively outputs log files that are consumable by TensorBoard, such as PyTorch, Chainer and TensorFlow experiments, then you can [launch TensorBoard directly](#launch-tensorboard) from experiment's run history.
machine-learning How To Monitor View Training Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-monitor-view-training-logs.md
You can also edit the run list table to select multiple runs and display either
![Run details in the Azure Machine Learning studio](media/how-to-track-experiments/experimentation-tab.gif)
-### View log files for a run
+### View and download log files for a run
Log files are an essential resource for debugging the Azure ML workloads. Drill down to a specific run to view its logs and outputs: 1. Navigate to the **Experiments** tab. 1. Select the runID for a specific run. 1. Select **Outputs and logs** at the top of the page.
+2. Select **Download all** to download all your logs into a zip folder.
The tables below show the contents of the log files in the folders you'll see in this section.
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
This article will show you how to:
- Split `Dataset` data into subsets, such as training and validation subsets - Create `OutputFileDatasetConfig` objects to transfer data to the next pipeline step - Use `OutputFileDatasetConfig` objects as input to pipeline steps-- Create new `Dataset` objects from `OutputFileDatasetConfig` you wisƒh to persist
+- Create new `Dataset` objects from `OutputFileDatasetConfig` you wish to persist
## Prerequisites
train_step = PythonScriptStep(
name="train_data", script_name="train.py", compute_target=cluster,
- arguments=['--training-folder', train.as_named_input('train').as_download()]
+ arguments=['--training-folder', train.as_named_input('train').as_download()],
inputs=[test.as_named_input('test').as_download()] )
machine-learning Tutorial 1St Experiment Bring Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-bring-data.md
Last updated 02/11/2021-+ # Tutorial: Use your own data (part 4 of 4)
machine-learning Tutorial 1St Experiment Sdk Setup Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup-local.md
Last updated 02/11/2021-+ adobe-target: true
machine-learning Tutorial 1St Experiment Sdk Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-setup.md
Last updated 02/10/2020-+ adobe-target: true
machine-learning Tutorial 1St Experiment Sdk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-1st-experiment-sdk-train.md
Last updated 02/11/2021-+ # Tutorial: Train your first machine learning model (part 3 of 4)
media-services Live Events Outputs Concept https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/live-events-outputs-concept.md
Once the live event is created, you can get ingest URLs that you'll provide to t
|||| |REST|[properties.useStaticHostname](/rest/api/media/liveevents/create#liveevent)|[LiveEventInput.useStaticHostname](/rest/api/media/liveevents/create#liveeventinput)| |CLI|[--use-static-hostname](/cli/azure/ams/live-event#az-ams-live-event-create)|[--access-token](/cli/azure/ams/live-event#optional-parameters)|
- |.NET|[LiveEvent.useStaticHostname](/dotnet/api/microsoft.azure.management.media.models.liveevent.usestatichostname?view=azure-dotnet#Microsoft_Azure_Management_Media_Models_LiveEvent_UseStaticHostname)|[LiveEventInput.AccessToken](/dotnet/api/microsoft.azure.management.media.models.liveeventinput.accesstoken#Microsoft_Azure_Management_Media_Models_LiveEventInput_AccessToken)|
+ |.NET|[LiveEvent.useStaticHostname](/dotnet/api/microsoft.azure.management.media.models.liveevent.usestatichostname?view=azure-dotnet&preserve-view=true#Microsoft_Azure_Management_Media_Models_LiveEvent_UseStaticHostname)|[LiveEventInput.AccessToken](/dotnet/api/microsoft.azure.management.media.models.liveeventinput.accesstoken#Microsoft_Azure_Management_Media_Models_LiveEventInput_AccessToken)|
### Live ingest URL naming rules
media-services Migrate V 2 V 3 Migration Scenario Based Content Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-content-protection.md
See content protection concepts, tutorials and how to guides below for specific
During migration to the v3 API, you will find that you need to access some properties or content keys from your v2 Assets. One key difference is that the v2 API would use the **AssetId** as the primary identification key and the new v3 API uses the Azure Resource Management name of the entity as the primary identifier. The v2 **Asset.Name** property is not typically used as a unique identifier, so when migrating to v3 you will find that your v2 Asset names now appear in the **Asset.Description** field.
-For example, if you previously had a v2 Asset with the ID of **"nb:cid:UUID:8cb39104-122c-496e-9ac5-7f9e2c2547b8”**, then you will find when listing the old v2 assets through the v3 API, the name will now be the GUID part at the end (in this case, **"8cb39104-122c-496e-9ac5-7f9e2c2547b8"**.)
+For example, if you previously had a v2 Asset with the ID of **"nb:cid:UUID:8cb39104-122c-496e-9ac5-7f9e2c2547b8"**, then you will find when listing the old v2 assets through the v3 API, the name will now be the GUID part at the end (in this case, **"8cb39104-122c-496e-9ac5-7f9e2c2547b8"**.)
-You can query the **StreamingLocators** associated with the Assets created in the v2 API using the new v3 method [ListStreamingLocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators) on the Asset entity. Also reference the .NET client SDK version of [ListStreamingLocatorsAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocatorsasync?view=azure-dotnet)
+You can query the **StreamingLocators** associated with the Assets created in the v2 API using the new v3 method [ListStreamingLocators](https://docs.microsoft.com/rest/api/media/assets/liststreaminglocators) on the Asset entity. Also reference the .NET client SDK version of [ListStreamingLocatorsAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.assetsoperationsextensions.liststreaminglocatorsasync?view=azure-dotnet&preserve-view=true)
The results of the **ListStreamingLocators** method will provide you the **Name** and **StreamingLocatorId** of the locator along with the **StreamingPolicyName**.
-To find the **ContentKeys** used in your **StreamingLocators** for content protection, you can call the [StreamingLocator.ListContentKeysAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.streaminglocatorsoperationsextensions.listcontentkeysasync?view=azure-dotnet) method.
+To find the **ContentKeys** used in your **StreamingLocators** for content protection, you can call the [StreamingLocator.ListContentKeysAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.management.media.streaminglocatorsoperationsextensions.listcontentkeysasync?view=azure-dotnet&preserve-view=true) method.
Any **Assets** that were created and published using the v2 API will have both a [Content Key Policy](https://docs.microsoft.com/azure/media-services/latest/content-key-policy-concept) and a Content Key defined on them in the v3 API, instead of using a default content key policy on the [Streaming Policy](https://docs.microsoft.com/azure/media-services/latest/streaming-policy-concept).
mysql Howto Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-configure-sign-in-azure-ad-authentication.md
Example (for Public Cloud):
```azurecli-interactive az account get-access-token --resource https://ossrdbms-aad.database.windows.net ```- The above resource value must be specified exactly as shown. For other clouds, the resource value can be looked up using: ```azurecli-interactive
For Azure CLI version 2.0.71 and later, the command can be specified in the foll
```azurecli-interactive az account get-access-token --resource-type oss-rdbms ```
+Using PowerShell, you can use the following command to acquire access token:
+
+```azurepowershell-interactive
+$accessToken = Get-AzAccessToken -ResourceUrl https://ossrdbms-aad.database.windows.net
+$accessToken.Token | out-file C:\temp\MySQLAccessToken.txt
+```
+ After authentication is successful, Azure AD will return an access token:
After authentication is successful, Azure AD will return an access token:
The token is a Base 64 string that encodes all the information about the authenticated user, and which is targeted to the Azure Database for MySQL service.
-> [!NOTE]
-> The access token validity is anywhere between 5 minutes to 60 minutes. We recommend you get the access token just before initiating the login to Azure Database for MySQL.
+The access token validity is anywhere between ***5 minutes and 60 minutes***. We recommend you get the access token just before initiating the login to Azure Database for MySQL. You can use the following PowerShell command to see the token validity.
+
+```azurepowershell-interactive
+$accessToken.ExpiresOn.DateTime
+```
### Step 3: Use token as password for logging in with MySQL
-When connecting you need to use the access token as the MySQL user password. When using GUI clients such as MySQLWorkbench, you can use the method above to retrieve the token.
+When connecting you need to use the access token as the MySQL user password. When using GUI clients such as MySQLWorkbench, you can use the method described above to retrieve the token.
+#### Using MySQL CLI
When using the CLI, you can use this short-hand to connect: **Example (Linux/macOS):**
mysql -h mydb.mysql.database.azure.com \
--enable-cleartext-plugin \ --password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` ```-
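A comparable hedged sketch for Windows PowerShell, assuming the mysql command-line client is installed and the token was saved to a file as shown earlier (the server and user names are placeholders):

```powershell
# Read the saved access token and use it as the password for the MySQL login.
$token = Get-Content C:\temp\MySQLAccessToken.txt
mysql -h mydb.mysql.database.azure.com -u "user@tenant.onmicrosoft.com@mydb" --enable-cleartext-plugin --password=$token
```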
-Important considerations when connecting:
+#### Using MySQL Workbench
+* Launch MySQL Workbench and click the Database option, then click "Connect to database"
+* In the hostname field, enter the MySQL server FQDN, for example, mydb.mysql.database.azure.com
+* In the username field, enter the MySQL Azure Active Directory administrator name and append the MySQL server name (not the FQDN), for example, user@tenant.onmicrosoft.com@mydb
+* In the password field, click "Store in Vault" and paste in the access token from the file, for example, C:\temp\MySQLAccessToken.txt
+* Click the advanced tab and ensure that you check "Enable Cleartext Authentication Plugin"
+* Click OK to connect to the database
+
+#### Important considerations when connecting:
* `user@tenant.onmicrosoft.com` is the name of the Azure AD user or group you are trying to connect as * Always append the server name after the Azure AD user/group name (e.g. `@mydb`)
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Vandis](https://www.vandis.com/microsoft-azure-practice/)|[Managed NAC With Aruba ClearPass Policy Manager](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_aruba_clearpass?tab=Overview)|[Vandis Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_expressroute?tab=Overview)|[Vandis Managed VWAN Powered by Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_fortinet?tab=Overview); [Vandis Managed VWAN Powered by Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_palo_alto_networks?tab=Overview); [Managed VWAN Powered by Barracuda CloudGen WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_barracuda_vwan?tab=Overview)| Azure Marketplace offers for Managed ExpressRoute, Virtual WAN, Security Services and Private Edge Zone Services from the following Azure Networking MSP Partners are on our roadmap:
-[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [Zertia](https://zertia.es/)
+[Amdocs](https://www.amdocs.com/); [Cirrus Core Networks](https://cirruscorenetworks.com/); [Cognizant](https://www.cognizant.com/cognizant-digital-systems-technology/cloud-enablement-services); [Deutsche Telekom](https://www.telekom.com/en/media/media-information/archive/deutsche-telekom-offers-managed-network-services-for-microsoft-azure-598406); [InterCloud](https://intercloud.com/partners/microsoft-azure/); [KINX](https://www.kinx.net/service/cloud/?lang=en); [OmniClouds](https://omniclouds.com/); [Sejong Telecom](https://www.sejongtelecom.net/en/pages/service/cloud_ms); [SES](https://www.ses.com/networks/cloud/ses-and-azure-expressroute); [Zertia](https://zertia.es/)
## <a name="expressroute"></a>ExpressRoute partners
postgresql Concepts Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/concepts-aad-authentication.md
Title: Active Directory authentication - Azure Database for PostgreSQL - Single Server description: Learn about the concepts of Azure Active Directory for authentication with Azure Database for PostgreSQL - Single Server--++ Last updated 07/23/2020
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Create a new [Django application](https://docs.djangoproject.com/en/3.1/intro/)
└───models.py └───forms.py ├───templates
- . . . . . . .
+ . . . . . . .
├───static
- . . . . . . .
+ . . . . . . .
└───my-django-project └───settings.py └───urls.py
Quit the server with CONTROL-C.
## Clean up the resources
-To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group?view=azure-cli-latest#az_group_delete) command to remove the resource group, container service, and all related resources.
+To avoid Azure charges, you should clean up unneeded resources. When the cluster is no longer needed, use the [az group delete](/cli/azure/group#az_group_delete) command to remove the resource group, container service, and all related resources.
```azurecli-interactive az group delete --name django-project --yes --no-wait
postgresql Howto Configure Sign In Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/howto-configure-sign-in-aad-authentication.md
Title: Use Azure Active Directory - Azure Database for PostgreSQL - Single Server description: Learn about how to set up Azure Active Directory (AAD) for authentication with Azure Database for PostgreSQL - Single Server--++ Last updated 07/23/2020
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/rbac-and-directory-admin-roles.md
At a high level, Azure roles control permissions to manage Azure resources, whil
| | |
| Manage access to Azure resources | Manage access to Azure Active Directory resources |
| Supports custom roles | Supports custom roles |
-| Scope can be specified at multiple levels (management group, subscription, resource group, resource) | Scope is at the tenant level |
+| Scope can be specified at multiple levels (management group, subscription, resource group, resource) | [Scope](../active-directory/roles/custom-overview.md#scope) can be specified at the tenant level (organization-wide) or on an individual object (for example, a specific application) |
| Role information can be accessed in Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager templates, REST API | Role information can be accessed in Azure admin portal, Microsoft 365 admin center, Microsoft Graph, AzureAD PowerShell |

### Do Azure roles and Azure AD roles overlap?
route-server Route Server Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/route-server/route-server-faq.md
Previously updated : 03/08/2021 Last updated : 03/29/2021
Azure Route Server supports Border Gateway Protocol (BGP) only. Your NVA needs t
No. Azure Route Server only exchanges BGP routes with your NVA. The data traffic goes directly from the NVA to the chosen VM and directly from the VM to the NVA.
+### Does Azure Route Server store customer data?
+No. Azure Route Server only exchanges BGP routes with your NVA and then propagates them to your virtual network.
+ ### If Azure Route Server receives the same route from more than one NVA, will it program all copies of the route (but each with a different next hop) to the VMs in the virtual network? Yes, only if the route has the same AS path length. When the VMs send traffic to the destination of this route, the VM hosts will do Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs. Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network.
security-center Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes-archive.md
Learn more about [asset inventory](asset-inventory.md).
### Added support for Azure Active Directory security defaults (for multi-factor authentication)
-Security Center has added full support for [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md), MicrosoftΓÇÖs free identity security protections.
+Security Center has added full support for [security defaults](../active-directory/fundamentals/concept-fundamentals-security-defaults.md), Microsoft's free identity security protections.
Security defaults provide preconfigured identity security settings to defend your organization from common identity-related attacks. Security defaults already protect more than 5 million tenants overall; 50,000 tenants are also protected by Security Center.
To ensure a consistent experience for all users, regardless of the scanner type
|Unified recommendation|Change description| |-|:-|
-|**A vulnerability assessment solution should be enabled on your virtual machines**|Replaces the following two recommendations:<br> **ΓÇó** Enable the built-in vulnerability assessment solution on virtual machines (powered by Qualys (now deprecated) (Included with standard tier)<br> **ΓÇó** Vulnerability assessment solution should be installed on your virtual machines (now deprecated) (Standard and free tiers)|
-|**Vulnerabilities in your virtual machines should be remediated**|Replaces the following two recommendations:<br>**ΓÇó** Remediate vulnerabilities found on your virtual machines (powered by Qualys) (now deprecated)<br>**ΓÇó** Vulnerabilities should be remediated by a Vulnerability Assessment solution (now deprecated)|
+|**A vulnerability assessment solution should be enabled on your virtual machines**|Replaces the following two recommendations:<br> ***** Enable the built-in vulnerability assessment solution on virtual machines (powered by Qualys (now deprecated) (Included with standard tier)<br> ***** Vulnerability assessment solution should be installed on your virtual machines (now deprecated) (Standard and free tiers)|
+|**Vulnerabilities in your virtual machines should be remediated**|Replaces the following two recommendations:<br>***** Remediate vulnerabilities found on your virtual machines (powered by Qualys) (now deprecated)<br>***** Vulnerabilities should be remediated by a Vulnerability Assessment solution (now deprecated)|
||| Now you'll use the same recommendation to deploy Security Center's vulnerability assessment extension or a privately licensed solution ("BYOL") from a partner such as Qualys or Rapid7.
If you have scripts, queries, or automations referring to the previous recommend
##### Before August 2020
-|Recommendation|Scope|
+| Recommendation|Scope|
|-|:-|
|**Enable the built-in vulnerability assessment solution on virtual machines (powered by Qualys)**<br>Key: 550e890b-e652-4d22-8274-60b3bdb24c63|Built-in|
|**Remediate vulnerabilities found on your virtual machines (powered by Qualys)**<br>Key: 1195afff-c881-495e-9bc5-1486211ae03f|Built-in|
|**Vulnerability assessment solution should be installed on your virtual machines**<br>Key: 01b1ed4c-b733-4fee-b145-f23236e70cf3|BYOL|
|**Vulnerabilities should be remediated by a Vulnerability Assessment solution**<br>Key: 71992a2a-d168-42e0-b10e-6b45fa2ecddb|BYOL|
-||||
+|||
|Policy|Scope|
|-|:-|
|**Vulnerability assessment should be enabled on virtual machines**<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9|Built-in|
|**Vulnerabilities should be remediated by a vulnerability assessment solution**<br>Policy ID: 760a85ff-6162-42b3-8d70-698e268f648c|BYOL|
-||||
+|||
##### From August 2020
If you have scripts, queries, or automations referring to the previous recommend
|-|:-|
|**A vulnerability assessment solution should be enabled on your virtual machines**<br>Key: ffff0522-1e88-47fc-8382-2a80ba848f5d|Built-in + BYOL|
|**Vulnerabilities in your virtual machines should be remediated**<br>Key: 1195afff-c881-495e-9bc5-1486211ae03f|Built-in + BYOL|
-||||
+|||
|Policy|Scope|
|-|:-|
|[**Vulnerability assessment should be enabled on virtual machines**](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f501541f7-f7e7-4cd6-868c-4190fdad3ac9)<br>Policy ID: 501541f7-f7e7-4cd6-868c-4190fdad3ac9 |Built-in + BYOL|
-||||
+|||
### New AKS security policies added to ASC_default initiative ΓÇô for use by private preview customers only
The early phase of this project includes a private preview and the addition of n
You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview at https://aka.ms/SecurityPrP and select from the following options:
-1. **Single Preview** ΓÇô To join only this private preview. Explicitly mention ΓÇ£ASC Continuous ScanΓÇ¥ as the preview you would like to join.
+1. **Single Preview** ΓÇô To join only this private preview. Explicitly mention "ASC Continuous Scan" as the preview you would like to join.
1. **Ongoing Program** ΓÇô To be added to this and future private previews. You'll need to complete a profile and privacy agreement.
Learn more about [creating Logic Apps](../logic-apps/logic-apps-overview.md).
### Integration of Azure Security Center with Windows Admin Center
-ItΓÇÖs now possible to move your on-premises Windows servers from the Windows Admin Center directly to the Azure Security Center. Security Center then becomes your single pane of glass to view security information for all your Windows Admin Center resources, including on-premises servers, virtual machines, and additional PaaS workloads.
+It's now possible to move your on-premises Windows servers from the Windows Admin Center directly to the Azure Security Center. Security Center then becomes your single pane of glass to view security information for all your Windows Admin Center resources, including on-premises servers, virtual machines, and additional PaaS workloads.
-After moving a server from Windows Admin Center to Azure Security Center, youΓÇÖll be able to:
+After moving a server from Windows Admin Center to Azure Security Center, you'll be able to:
- View security alerts and recommendations in the Security Center extension of the Windows Admin Center. - View the security posture and retrieve additional detailed information of your Windows Admin Center managed servers in the Security Center within the Azure portal (or via an API).
Learn more about [how to integrate Azure Security Center with Windows Admin Cent
Azure Security Center is expanding its container security features to protect Azure Kubernetes Service (AKS).
-The popular, open-source platform Kubernetes has been adopted so widely that itΓÇÖs now an industry standard for container orchestration. Despite this widespread implementation, thereΓÇÖs still a lack of understanding regarding how to secure a Kubernetes environment. Defending the attack surfaces of a containerized application requires expertise to ensuring the infrastructure is configured securely and constantly monitored for potential threats.
+The popular, open-source platform Kubernetes has been adopted so widely that it's now an industry standard for container orchestration. Despite this widespread implementation, there's still a lack of understanding regarding how to secure a Kubernetes environment. Defending the attack surfaces of a containerized application requires expertise to ensuring the infrastructure is configured securely and constantly monitored for potential threats.
The Security Center defense includes: - **Discovery and visibility** - Continuous discovery of managed AKS instances within the subscriptions registered to Security Center.-- **Security recommendations** - Actionable recommendations to help you comply with security best-practices for AKS. These recommendations are included in your secure score to ensure theyΓÇÖre viewed as a part of your organizationΓÇÖs security posture. An example of an AKS-related recommendation you might see is "Role-based access control should be used to restrict access to a Kubernetes service cluster".
+- **Security recommendations** - Actionable recommendations to help you comply with security best-practices for AKS. These recommendations are included in your secure score to ensure they're viewed as a part of your organization's security posture. An example of an AKS-related recommendation you might see is "Role-based access control should be used to restrict access to a Kubernetes service cluster".
- **Threat protection** - Through continuous analysis of your AKS deployment, Security Center alerts you to threats and malicious activity detected at the host and AKS cluster level. Learn more about [Azure Kubernetes Services' integration with Security Center](defender-for-kubernetes-introduction.md).
Learn more about [the container security features in Security Center](container-
### Improved just-in-time experience
-The features, operation, and UI for Azure Security CenterΓÇÖs just-in-time tools that secure your management ports have been enhanced as follows:
+The features, operation, and UI for Azure Security Center's just-in-time tools that secure your management ports have been enhanced as follows:
- **Justification field** - When requesting access to a virtual machine (VM) through the just-in-time page of the Azure portal, a new optional field is available to enter a justification for the request. Information entered into this field can be tracked in the activity log. - **Automatic cleanup of redundant just-in-time (JIT) rules** - Whenever you update a JIT policy, a cleanup tool automatically runs to check the validity of your entire ruleset. The tool looks for mismatches between rules in your policy and rules in the NSG. If the cleanup tool finds a mismatch, it determines the cause and, when it's safe to do so, removes built-in rules that aren't needed anymore. The cleaner never deletes rules that you've created.
Updates in November include:
Azure Key Vault is an essential service for protecting data and improving performance of cloud applications by offering the ability to centrally manage keys, secrets, cryptographic keys and policies in the cloud. Since Azure Key Vault stores sensitive and business critical data, it requires maximum security for the key vaults and the data stored in them.
-Azure Security CenterΓÇÖs support for Threat Protection for Azure Key Vault provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit key vaults. This new layer of protection allows customers to address threats against their key vaults without being a security expert or manage security monitoring systems. The feature is in public preview in North America Regions.
+Azure Security Center's support for Threat Protection for Azure Key Vault provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit key vaults. This new layer of protection allows customers to address threats against their key vaults without being a security expert or manage security monitoring systems. The feature is in public preview in North America Regions.
### Threat Protection for Azure Storage includes Malware Reputation Screening
Threat protection for Azure Storage offers new detections powered by Microsoft T
Organizations with centrally managed security and IT/operations implement internal workflow processes to drive required action within the organization when discrepancies are discovered in their environments. In many cases, these workflows are repeatable processes and automation can greatly streamline processes within the organization.
-Today we are introducing a new capability in Security Center that allows customers to create automation configurations leveraging Azure Logic Apps and to create policies that will automatically trigger them based on specific ASC findings such as Recommendations or Alerts. Azure Logic App can be configured to do any custom action supported by the vast community of Logic App connectors, or use one of the templates provided by Security Center such as sending an email or opening a ServiceNowΓäó ticket.
+Today we are introducing a new capability in Security Center that allows customers to create automation configurations leveraging Azure Logic Apps and to create policies that will automatically trigger them based on specific ASC findings such as Recommendations or Alerts. Azure Logic App can be configured to do any custom action supported by the vast community of Logic App connectors, or use one of the templates provided by Security Center such as sending an email or opening a ServiceNow&trade; ticket.
For more information about the automatic and manual Security Center capabilities for running your workflows, see [workflow automation](workflow-automation.md).
Kubernetes is quickly becoming the new standard for deploying and managing softw
The new capabilities in this public preview release include: -- **Discovery & Visibility** - Continuous discovery of managed AKS instances within Security CenterΓÇÖs registered subscriptions.
+- **Discovery & Visibility** - Continuous discovery of managed AKS instances within Security Center's registered subscriptions.
- **Secure Score recommendations** - Actionable items to help customers comply with security best practices for AKS, and increase their secure score. Recommendations include items such as "Role-based access control should be used to restrict access to a Kubernetes Service Cluster".-- **Threat Detection** - Host and cluster-based analytics, such as ΓÇ£A privileged container detectedΓÇ¥.
+- **Threat Detection** - Host and cluster-based analytics, such as "A privileged container detected".
### Virtual machine vulnerability assessment (preview)
-Applications that are installed in virtual machines could often have vulnerabilities that could lead to a breach of the virtual machine. We are announcing that the Security Center standard tier includes built-in vulnerability assessment for virtual machines for no additional fee. The vulnerability assessment, powered by Qualys in the public preview, will allow you to continuously scan all the installed applications on a virtual machine to find vulnerable applications and present the findings in the Security Center portalΓÇÖs experience. Security Center takes care of all deployment operations so that no extra work is required from the user. Going forward we are planning to provide vulnerability assessment options to support our customersΓÇÖ unique business needs.
+Applications that are installed in virtual machines could often have vulnerabilities that could lead to a breach of the virtual machine. We are announcing that the Security Center standard tier includes built-in vulnerability assessment for virtual machines for no additional fee. The vulnerability assessment, powered by Qualys in the public preview, will allow you to continuously scan all the installed applications on a virtual machine to find vulnerable applications and present the findings in the Security Center portal's experience. Security Center takes care of all deployment operations so that no extra work is required from the user. Going forward we are planning to provide vulnerability assessment options to support our customers' unique business needs.
[Learn more about vulnerability assessments for your Azure Virtual Machines](deploy-vulnerability-assessment-vm.md).

### Advanced data security for SQL servers on Azure Virtual Machines (preview)
-Azure Security CenterΓÇÖs support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
+Azure Security Center's support for threat protection and vulnerability assessment for SQL DBs running on IaaS VMs is now in preview.
[Vulnerability assessment](../azure-sql/database/sql-vulnerability-assessment.md) is an easy to configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security posture as part of Azure secure score and includes the steps to resolve security issues and enhance your database fortifications.
Azure Security Center now supports custom policies (in preview).
Our customers have been wanting to extend their current security assessments coverage in Security Center with their own security assessments based on policies that they create in Azure Policy. With support for custom policies, this is now possible.
-These new policies will be part of the Security Center recommendations experience, Secure Score, and the regulatory compliance standards dashboard. With the support for custom policies, youΓÇÖre now able to create a custom initiative in Azure Policy, then add it as a policy in Security Center and visualize it as a recommendation.
+These new policies will be part of the Security Center recommendations experience, Secure Score, and the regulatory compliance standards dashboard. With the support for custom policies, you're now able to create a custom initiative in Azure Policy, then add it as a policy in Security Center and visualize it as a recommendation.
### Extending Azure Security Center coverage with platform for community and partners
-Use Security Center to receive recommendations not only from Microsoft but also from existing solutions from partners such as Check Point, Tenable, and CyberArk with many more integrations coming. Security CenterΓÇÖs simple onboarding flow can connect your existing solutions to Security Center, enabling you to view your security posture recommendations in a single place, run unified reports and leverage all of Security Center's capabilities against both built-in and partner recommendations. You can also export Security Center recommendations to partner products.
+Use Security Center to receive recommendations not only from Microsoft but also from existing solutions from partners such as Check Point, Tenable, and CyberArk with many more integrations coming. Security Center's simple onboarding flow can connect your existing solutions to Security Center, enabling you to view your security posture recommendations in a single place, run unified reports and leverage all of Security Center's capabilities against both built-in and partner recommendations. You can also export Security Center recommendations to partner products.
[Learn more about Microsoft Intelligent Security Association](https://www.microsoft.com/security/partnerships/intelligent-security-association).
Use Security Center to receive recommendations not only from Microsoft but also
### Advanced integrations with export of recommendations and alerts (preview)
-In order to enable enterprise level scenarios on top of Security Center, itΓÇÖs now possible to consume Security Center alerts and recommendations in additional places except the Azure portal or API. These can be directly exported to an Event Hub and to Log Analytics workspaces. Here are a few workflows you can create around these new capabilities:
+In order to enable enterprise level scenarios on top of Security Center, it's now possible to consume Security Center alerts and recommendations in additional places except the Azure portal or API. These can be directly exported to an Event Hub and to Log Analytics workspaces. Here are a few workflows you can create around these new capabilities:
- With export to Log Analytics workspace, you can create custom dashboards with Power BI.-- With export to Event Hub, youΓÇÖll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution in real time, or Azure Data Explorer.
+- With export to Event Hub, you'll be able to export Security Center alerts and recommendations to your third-party SIEMs, to a third-party solution in real time, or Azure Data Explorer.
### Onboard on-prem servers to Security Center from Windows Admin Center (preview)
The experience of managing rules for virtual machines using adaptive application
### Control container security recommendation using Azure Policy
-Azure Security CenterΓÇÖs recommendation to remediate vulnerabilities in container security can now be enabled or disabled via Azure Policy.
+Azure Security Center's recommendation to remediate vulnerabilities in container security can now be enabled or disabled via Azure Policy.
To view your enabled security policies, from Security Center open the Security Policy page.
Requests are logged in the Azure Activity Log, so you can easily monitor and aud
Secure score is a tool that helps you assess your workload security posture. It reviews your security recommendations and prioritizes them for you, so you know which recommendations to perform first. This helps you find the most serious security vulnerabilities to prioritize investigation.
-In order to simplify remediation of security misconfigurations and help you to quickly improve your secure score, weΓÇÖve added a new capability that allows you to remediate a recommendation on a bulk of resources in a single click.
+In order to simplify remediation of security misconfigurations and help you to quickly improve your secure score, we've added a new capability that allows you to remediate a recommendation on a bulk of resources in a single click.
This operation will allow you to select the resources you want to apply the remediation to and launch a remediation action that will configure the setting on your behalf.
security-center Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/secure-score-security-controls.md
The maximum score for this control, Apply system updates, is always 6. In this e
|**Security control's current score**|<br>![Equation for calculating a security control's score](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>| |**Secure score**<br>Single subscription|<br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there is a single subscription with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png)| |**Secure score**<br>Multiple subscriptions|<br>![Equation for calculating the secure score for multiple subscriptions](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>When calculating the combined score for multiple subscriptions, Security Center includes a *weight* for each subscription. The relative weights for your subscriptions are determined by Security Center based on factors such as the number of resources.<br>The current score for each subscription is calculated in the same way as for a single subscription, but then the weight is applied as shown in the equation.<br>When viewing multiple subscriptions, secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions.<br>Here too, if you go to the recommendations page and add up the potential points available, you will find that it's the difference between the current score (24) and the maximum score available (60).|
-||||
+ ### Which recommendations are included in the secure score calculations?
Another way to improve your score and ensure your users don't create resources t
The table below lists the security controls in Azure Security Center. For each control, you can see the maximum number of points you can add to your secure score if you remediate *all* of the recommendations listed in the control, for *all* of your resources.
-The set of security recommendations provided with Security Center is tailored to the available resources in each organizationΓÇÖs environment. The recommendations can be further customized by [disabling policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempting specific resources from a recommendation](exempt-resource.md).
+The set of security recommendations provided with Security Center is tailored to the available resources in each organization's environment. The recommendations can be further customized by [disabling policies](tutorial-security-policy.md#disable-security-policies-and-disable-recommendations) and [exempting specific resources from a recommendation](exempt-resource.md).
We recommend every organization carefully review their assigned Azure Policy initiatives.

> [!TIP]
> For details of reviewing and editing your initiatives, see [Working with security policies](tutorial-security-policy.md).
-Even though Security CenterΓÇÖs default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, itΓÇÖll sometimes be necessary to adjust the default initiative - without compromising security - to ensure itΓÇÖs aligned with your organizationΓÇÖs own policies. industry standards, regulatory standards, and benchmarks youΓÇÖre obligated to meet.<br><br>
+Even though Security Center's default security initiative is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. Consequently, it'll sometimes be necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks you're obligated to meet.<br><br>
<div class="foo"> <style type="text/css">
security-center Security Center Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-alerts-overview.md
The severity is based on how confident Security Center is in the finding or the
> [!NOTE] > Alert severity is displayed differently in the portal and versions of the REST API that predate 01-01-2019. If you're using an older version of the API, upgrade for the consistent experience described below.
-| Severity | Recommended response |
-|-|-|
+| Severity | Recommended response |
+|||
| **High** | There is a high probability that your resource is compromised. You should look into it right away. Security Center has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. |
| **Medium** | This is probably a suspicious activity that might indicate that a resource is compromised. Security Center's confidence in the analytic or finding is medium and the confidence of the malicious intent is medium to high. These would usually be machine learning or anomaly-based detections. For example, a sign-in attempt from an anomalous location. |
| **Low** | This might be a benign positive or a blocked attack. Security Center isn't confident enough that the intent is malicious and the activity might be innocent. For example, log clear is an action that might happen when an attacker tries to hide their tracks, but in many cases is a routine operation performed by admins. Security Center doesn't usually tell you when attacks were blocked, unless it's an interesting case that we suggest you look into. |
-| **Informational** | You will only see informational alerts when you drill down into a security incident, or if you use the REST API with a specific alert ID. An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. | |
-| | |
+| **Informational** | You will only see informational alerts when you drill down into a security incident, or if you use the REST API with a specific alert ID. An incident is typically made up of a number of alerts, some of which might appear on their own to be only informational, but in the context of the other alerts might be worthy of a closer look. |
## Export alerts
security Security Control Data Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/benchmarks/security-control-data-recovery.md
Enable Azure Backup and target VM(s), as well as the desired frequency and reten
- [How to enable Azure Backup](../../backup/index.yml) -- [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey?view=azurermps-6.13.0)
+- [How to backup key vault keys in Azure](/powershell/module/azurerm.keyvault/backup-azurekeyvaultkey)
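As a minimal illustration of the backup step, here is a sketch using the newer Az.KeyVault cmdlet rather than the AzureRM cmdlet linked above; the vault, key, and file names are placeholders:

```azurepowershell-interactive
# Back up a single key vault key to an encrypted backup file (placeholder names)
Backup-AzKeyVaultKey -VaultName 'contoso-vault' -Name 'contoso-key' -OutputFile 'C:\temp\contoso-key.blob'
```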
## 9.3: Validate all backups including customer managed keys
Ensure ability to periodically perform data restoration of content within Azure
- [How to recover files from Azure Virtual Machine backup](../../backup/backup-azure-restore-files-from-vm.md) -- [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey?view=azurermps-6.13.0)
+- [How to restore key vault keys in Azure](/powershell/module/azurerm.keyvault/restore-azurekeyvaultkey)
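And a matching restore sketch, again using the Az.KeyVault cmdlet with placeholder names; note that a key backup can only be restored into a vault in the same subscription and Azure geography:

```azurepowershell-interactive
# Restore a previously backed-up key into a key vault (placeholder names)
Restore-AzKeyVaultKey -VaultName 'contoso-vault' -InputFile 'C:\temp\contoso-key.blob'
```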
## 9.4: Ensure protection of backups and customer managed keys
security Secure Design https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/develop/secure-design.md
- ms.assetid: 521180dc-2cc9-43f1-ae87-2701de7ca6b8- # Design secure applications on Azure
Use the following resources during the training stage to familiarize
yourself with the Azure services that are available to developers and with security best practices on Azure:
- - [DeveloperΓÇÖs guide to
+ - [Developer's guide to
Azure](https://azure.microsoft.com/campaigns/developer-guide/) shows you how to get started with Azure. The guide shows you which services you can use to run your applications, store your data,
Ask security questions like:
information, to identify, contact, or locate a single person? - Does my application collect or contain data that can be used to
- access an individualΓÇÖs medical, educational, financial, or
+ access an individual's medical, educational, financial, or
employment information? Identifying the sensitivity of your data during the requirements phase helps you classify your data and identify the data protection method you will use for your
tools](../../index.yml?panel=sdkstools-all&pivot=sdkstools)
that you can use to develop applications on Azure. An example is [Azure for .NET and .NET Core developers](/dotnet/azure/). For each language
-and framework that we offer, youΓÇÖll find quickstarts, tutorials, and API
+and framework that we offer, you'll find quickstarts, tutorials, and API
references to help you get started fast. Azure offers a variety of services you can use to host websites and web
threats (Spoofing, Tampering, Repudiation, Information Disclosure, Denial
of Service, and Elevation of Privilege) across all trust boundaries has proven an effective way to catch design errors early on. The following table lists the STRIDE threats and gives some example mitigations that
-use features provided by Azure. These mitigations wonΓÇÖt work in every
+use features provided by Azure. These mitigations won't work in every
situation.

| Threat | Security property | Potential Azure platform mitigation |
| - | - | - |
-| Spoofing | Authentication | [Require HTTPS connections](/aspnet/core/security/enforcing-ssl?tabs=visual-studio&view=aspnetcore-2.1). |
+| Spoofing | Authentication | [Require HTTPS connections](/aspnet/core/security/enforcing-ssl?tabs=visual-studio). |
| Tampering | Integrity | Validate SSL/TLS certificates. Applications that use SSL/TLS must fully verify the X.509 certificates of the entities they connect to. Use Azure Key Vault certificates to [manage your x509 certificates](../../key-vault/general/about-keys-secrets-certificates.md). |
| Repudiation | Non-repudiation | Enable Azure [monitoring and diagnostics](/azure/architecture/best-practices/monitoring). |
| Information Disclosure | Confidentiality | Encrypt sensitive data [at rest](../fundamentals/encryption-atrest.md) and [in transit](../fundamentals/data-encryption-best-practices.md#protect-data-in-transit). |
situation.
### Reduce your attack surface

An attack surface is the total sum of where potential vulnerabilities
-might occur. In this paper, we focus on an applicationΓÇÖs attack surface.
+might occur. In this paper, we focus on an application's attack surface.
The focus is on protecting an application from attack. A simple and quick way to minimize your attack surface is to remove unused resources and code from your application. The smaller your application, the smaller your attack surface. For example, remove: -- Code for features you havenΓÇÖt released yet.
+- Code for features you haven't released yet.
- Debugging support code.-- Network interfaces and protocols that arenΓÇÖt used or which have been deprecated.-- Virtual machines and other resources that you arenΓÇÖt using.
+- Network interfaces and protocols that aren't used or which have been deprecated.
+- Virtual machines and other resources that you aren't using.
Doing regular cleanup of your resources and ensuring that you remove unused code are great ways to ensure that there are fewer opportunities
Threat modeling is the process of identifying potential security threats to your
### Adopt a policy of identity as the primary security perimeter
-When you design cloud applications, itΓÇÖs important to expand your
+When you design cloud applications, it's important to expand your
security perimeter focus from a network-centric approach to an identity-centric approach. Historically, the primary on-premises security perimeter was an organization's network. Most on-premises
personal computer? Evaluating access to software is no different. If you
use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) to give users different abilities and authority in your application, you
-wouldnΓÇÖt give everyone access to everything. By limiting access to what
+wouldn't give everyone access to everything. By limiting access to what
is required for each role, you limit the risk of a security issue occurring.
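For example, rather than granting broad rights across a whole subscription, an Azure RBAC assignment can be scoped narrowly. A minimal PowerShell sketch, assuming the Az module and a signed-in session; the user, role, and resource group names are placeholders:

```azurepowershell-interactive
# Grant only Reader access, scoped to one resource group rather than the entire subscription
New-AzRoleAssignment -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Reader' `
    -ResourceGroupName 'my-app-rg'
```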
to:
### Require re-authentication for important transactions

[Cross-site request
-forgery](/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1)
+forgery](/aspnet/core/security/anti-request-forgery)
(also known as *XSRF* or *CSRF*) is an attack against web-hosted apps in which a malicious web app influences the interaction between a client browser and a web app that trusts that browser. Cross-site request
organizations.
### Use logging and alerting
-[Log](/aspnet/core/fundamentals/logging/?view=aspnetcore-2.1)
+[Log](/aspnet/core/fundamentals/logging/)
your security issues for security investigations and trigger alerts about issues to ensure that people know about problems in a timely manner. Enable auditing and logging on all components. Audit logs should
service-bus-messaging Service Bus Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-geo-dr.md
Note the following considerations to keep in mind with this release:
4. Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and rules also count as entities.
-### Availability Zones
+## Availability Zones
The Service Bus Premium SKU supports [Availability Zones](../availability-zones/az-overview.md), providing fault-isolated locations within the same Azure region. Service Bus manages three copies of the messaging store (1 primary and 2 secondary). Service Bus keeps all three copies in sync for data and management operations. If the primary copy fails, one of the secondary copies is promoted to primary with no perceived downtime. If the applications see transient disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
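For reference, a zone-redundant Premium namespace can be created at deployment time. The following is only a sketch: it assumes the Az.ServiceBus module and that the `-ZoneRedundant` switch is available in your module version, and the resource group, namespace name, and region are placeholders (the region must support Availability Zones):

```azurepowershell-interactive
# Create a Premium Service Bus namespace with Availability Zones enabled (placeholder names)
New-AzServiceBusNamespace -ResourceGroupName 'my-rg' `
    -Name 'my-premium-namespace' `
    -Location 'East US 2' `
    -SkuName Premium `
    -ZoneRedundant
```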
service-bus-messaging Service Bus Integrate With Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md
description: Step-by-step guide on how to integrate RabbitMQ with Azure Service
- Last updated 07/02/2020
service-fabric How To Managed Cluster App Deployment Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-app-deployment-template.md
The sample application contains [Azure Resource Manager templates](https://githu
| - | - | | |
| clusterName | The name of the cluster you're deploying to | sf-cluster123 | |
| application | The name of the application | Voting |
-| version | The resource ID, application type, and version of the app. | /providers/Microsoft.ServiceFabric/managedClusters/sf-cluster-123/applicationTypes/VotingType/versions/1.0.0 | Must match ApplicationManifest.xml | |
+| version | The resource ID, application type, and version of the app. | /providers/Microsoft.ServiceFabric/managedClusters/sf-cluster-123/applicationTypes/VotingType/versions/1.0.0 | Must match ApplicationManifest.xml |
| serviceName | The name of the service | VotingWeb | Must be in the format ServiceType |
| serviceTypeName | The type name of the service | VotingWebType | Must match ServiceManifest.xml |
| appPackageUrl | The blob storage URL of the application | https:\//servicefabricapps.blob.core.windows.net/apps/Voting.sfpkg | The URL of the application package in blob storage (the procedure to set the URL is described later in the article) |
service-fabric Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/release-notes.md
We're excited to announce that 7.2 release of the Service Fabric runtime has sta
Due to the current COVID-19 crisis, and taking into consideration the challenges faced by our customers, we are making 7.1 available, but will not automatically upgrade clusters set to receive automatic upgrades. We are pausing automatic upgrades until further notice to ensure that customers can apply upgrades when most appropriate for them, to avoid unexpected disruptions.
-You will be able to update to 7.1 via through the [Azure portal](./service-fabric-cluster-upgrade-version-azure.md#upgrading-to-a-new-version-on-a-cluster-that-is-set-to-manual-mode-via-portal) or via an [Azure Resource Manager deployment](./service-fabric-cluster-upgrade-version-azure.md#set-the-upgrade-mode-using-a-resource-manager-template).
+You will be able to update to 7.1 through the [Azure portal](./service-fabric-cluster-upgrade-version-azure.md#manual-upgrades-with-azure-portal) or via an [Azure Resource Manager deployment](./service-fabric-cluster-upgrade-version-azure.md#resource-manager-template).
Service Fabric clusters with automatic upgrades enabled will begin to receive the 7.1 update automatically once we resume the standard rollout procedure. We will provide another announcement before the standard rollout begins on the [Service Fabric Tech Community Site](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric). We also have published updates to end of support date for major releases starting from 6.5 up to 7.1 [here](./service-fabric-versions.md#supported-versions).
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-capacity.md
Follow these recommendations for managing node types with Silver or Gold durabil
* Maintain a minimum count of five nodes for any virtual machine scale set that has durability level of Gold or Silver enabled. Your cluster will enter error state if you scale in below this threshold, and you'll need to manually clean up state (`Remove-ServiceFabricNodeState`) for the removed nodes.
* Each virtual machine scale set with durability level Silver or Gold must map to its own node type in the Service Fabric cluster. Mapping multiple virtual machine scale sets to a single node type will prevent coordination between the Service Fabric cluster and the Azure infrastructure from working properly.
* Do not delete random VM instances, always use the virtual machine scale set scale-in feature. The deletion of random VM instances has the potential of creating imbalances in the VM instance spread across [upgrade domains](service-fabric-cluster-resource-manager-cluster-description.md#upgrade-domains) and [fault domains](service-fabric-cluster-resource-manager-cluster-description.md#fault-domains). This imbalance could adversely affect the system's ability to properly load balance among the service instances/service replicas.
-* If using Autoscale, set the rules such that scale in (removing of VM instances) operations are done only one node at a time. Scaling down more than one instance at a time is not safe.
+* If using Autoscale, set the rules such that scale in (removal of VM instances) operations are done only one node at a time. Scaling in more than one instance at a time is not safe.
* If deleting or deallocating VMs on the primary node type, never reduce the count of allocated VMs below what the reliability tier requires. These operations will be blocked indefinitely in a scale set with a durability level of Silver or Gold. ### Changing durability levels
service-fabric Service Fabric Cluster Upgrade Version Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-upgrade-version-azure.md
Title: Upgrade a cluster's Azure Service Fabric version
-description: Upgrade the Service Fabric code and/or configuration that runs a Service Fabric cluster, including setting cluster update mode, upgrading certificates, adding application ports, doing OS patches, and so on. What can you expect when the upgrades are performed?
-- Previously updated : 11/12/2018
+ Title: Manage Service Fabric cluster upgrades
+description: Manage when and how your Service Fabric cluster runtime is updated
+ Last updated : 03/26/2021
-# Upgrade the Service Fabric version of a cluster
+# Manage Service Fabric cluster upgrades
-For any modern system, designing for upgradability is key to achieving long-term success of your product. An Azure Service Fabric cluster is a resource that you own, but is partly managed by Microsoft. This article describes how to upgrade the version of Service Fabric running in your Azure cluster.
+An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric cluster.
-You can set your cluster to receive automatic fabric upgrades as they are released by Microsoft or you can select a supported fabric version you want your cluster to be on.
+For further background on cluster upgrade concepts and processes, see [Upgrading and updating Azure Service Fabric clusters](service-fabric-cluster-upgrade.md).
-You do this by setting the "upgradeMode" cluster configuration on the portal or using Resource Manager at the time of creation or later on a live cluster
+## Set upgrade mode
-> [!NOTE]
-> Make sure to keep your cluster running a supported fabric version always. As and when we announce the release of a new version of service fabric, the previous version is marked for end of support after a minimum of 60 days from that date. The new releases are announced [on the service fabric team blog](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric). The new release is available to choose then.
->
->
+You can set your cluster to receive automatic Service Fabric upgrades as they are released by Microsoft, or you can manually choose from a list of currently supported versions by setting the upgrade mode for your cluster. This can be done either through the *Fabric upgrade mode* control in Azure portal or the `upgradeMode` setting in your cluster deployment template.
-14 days prior to the expiry of the release your cluster is running, a health event is generated that puts your cluster into a warning health state. The cluster remains in a warning state until you upgrade to a supported fabric version.
+### Azure portal
-## Set the upgrade mode in the Azure portal
-You can set the cluster to automatic or manual when you are creating the cluster.
+Using the Azure portal, you'll choose between automatic and manual upgrades when creating a new Service Fabric cluster.
-![Screenshot shows the Create Service Fabric cluster pane with option 2 Cluster configuration selected and the Cluster configuration pane open.][Create_Manualmode]
-You can set the cluster to automatic or manual when on a live cluster, using the manage experience.
+You can also toggle between automatic and manual upgrades from the **Fabric upgrades** section of an existing cluster resource.
-### Upgrading to a new version on a cluster that is set to Manual mode via portal.
-To upgrade to a new version, all you need to do is select the available version from the dropdown and save. The Fabric upgrade gets kicked off automatically. The cluster health policies (a combination of node health and the health all the applications running in the cluster) are adhered to during the upgrade.
-If the cluster health policies are not met, the upgrade is rolled back. Scroll down this document to read more on how to set those custom health policies.
+### Manual upgrades with Azure portal
-Once you have fixed the issues that resulted in the rollback, you need to initiate the upgrade again, by following the same steps as before.
+When you select the manual upgrade option, all that's needed to initiate an upgrade is to select from the available versions dropdown and then *Save*. From there, the cluster upgrade gets kicked off immediately.
-![Screenshot shows the Service Fabric clusters window with the Fabric upgrades pane open and the upgrade options highlighted, including Automatic and Manual.][Manage_Automaticmode]
+The [cluster health policies](#custom-policies-for-manual-upgrades) (a combination of node health and the health of all the applications running in the cluster) are adhered to during the upgrade. If cluster health policies are not met, the upgrade will be rolled back.
-## Set the upgrade mode using a Resource Manager template
-Add the "upgradeMode" configuration to the Microsoft.ServiceFabric/clusters resource definition and set the "clusterCodeVersion" to one of the supported fabric versions as shown below and then deploy the template. The valid values for "upgradeMode" are "Manual" or "Automatic"
+Once you have fixed the issues that resulted in the rollback, you'll need to initiate the upgrade again, by following the same steps as before.
-![Screenshot shows a template, which is plaintext indented to reflect structure and the clusterCodeVersion and upgradeMode are highlighted.][ARMUpgradeMode]
+### Resource Manager template
-### Upgrading to a new version on a cluster that is set to Manual mode via a Resource Manager template.
-When the cluster is in Manual mode, to upgrade to a new version, change the "clusterCodeVersion" to a supported version and deploy it.
-The deployment of the template, kicks of the Fabric upgrade gets kicked off automatically. The cluster health policies (a combination of node health and the health all the applications running in the cluster) are adhered to during the upgrade.
+To change your cluster upgrade mode using a Resource Manager template, specify either *Automatic* or *Manual* for the `upgradeMode` property of the *Microsoft.ServiceFabric/clusters* resource definition. If you choose manual upgrades, also set the `clusterCodeVersion` to a currently [supported fabric version](#query-for-supported-cluster-versions).
-If the cluster health policies are not met, the upgrade is rolled back.
-Once you have fixed the issues that resulted in the rollback, you need to initiate the upgrade again, by following the same steps as before.
+Upon successful deployment of the template, changes to the cluster upgrade mode will be applied. If your cluster is in manual mode, the cluster upgrade will kick off automatically.
-## Set custom health polices for upgrades
-You can specify custom health polices for fabric upgrade. If you have set your cluster to Automatic fabric upgrades, then these policies get applied to the [Phase-1 of the automatic fabric upgrades](service-fabric-cluster-upgrade.md#fabric-upgrade-behavior-during-automatic-upgrades).
-If you have set your cluster for Manual fabric upgrades, then these policies get applied each time you select a new version triggering the system to kick off the fabric upgrade in your cluster. If you do not override the policies, the defaults are used.
+The [cluster health policies](#custom-policies-for-manual-upgrades) (a combination of node health and the health of all the applications running in the cluster) are adhered to during the upgrade. If cluster health policies are not met, the upgrade is rolled back.
-You can specify the custom health policies or review the current settings under the "fabric upgrade" blade, by selecting the advanced upgrade settings. Review the following picture on how to.
+Once you have fixed the issues that resulted in the rollback, you'll need to initiate the upgrade again by following the same steps as before.
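For illustration, a minimal sketch of those two properties in the cluster resource definition follows. It uses the same abbreviated fragment style as the wave deployment example later in this article, and the runtime version shown is a placeholder; substitute one returned by the supported-versions query.

```json
{
  "apiVersion": "2020-12-01-preview",
  "type": "Microsoft.ServiceFabric/clusters",
  ...
  "upgradeMode": "Manual",
  "clusterCodeVersion": "7.2.457.9590",
  ...
}
```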
-![Manage custom health policies][HealthPolices]
+## Wave deployment for automatic upgrades
-## List all available versions for all environments for a given subscription
-Run the following command, and you should get an output similar to this.
+With automatic upgrade mode, you have the option to enable your cluster for wave deployment. With wave deployment, you can create a pipeline for upgrading your test, stage, and production clusters in sequence, separated by built-in 'bake time' to validate upcoming Service Fabric versions before your production clusters are updated.
-"supportExpiryUtc" tells your when a given release is expiring or has expired. The latest release does not have a valid date - it has a value of "9999-12-31T23:59:59.9999999", which just means that the expiry date is not yet set.
+### Enable wave deployment
-```REST
-GET https://<endpoint>/subscriptions/{{subscriptionId}}/providers/Microsoft.ServiceFabric/locations/{{location}}/clusterVersions?api-version=2016-09-01
+> [!NOTE]
+> Wave deployment requires the `2020-12-01-preview` (or later) API version for your *Microsoft.ServiceFabric/clusters* resource.
-Example: https://management.azure.com/subscriptions/1857f442-3bce-4b96-ad95-627f76437a67/providers/Microsoft.ServiceFabric/locations/eastus/clusterVersions?api-version=2016-09-01
+To enable wave deployment for automatic upgrade, first determine which wave to assign your cluster to:
-Output:
+* **Wave 0** (`Wave0`): Clusters are updated as soon as a new Service Fabric build is released. Intended for test/dev clusters.
+* **Wave 1** (`Wave1`): Clusters are updated one week (seven days) after a new build is released. Intended for pre-prod/staging clusters.
+* **Wave 2** (`Wave2`): Clusters are updated two weeks (14 days) after a new build is released. Intended for production clusters.
+
+Then, simply add an `upgradeWave` property to your cluster resource template with one of the wave values listed above. Ensure your cluster resource API version is `2020-12-01-preview` or later.
+
+```json
{
- "value": [
- {
- "id": "subscriptions/35349203-a0b3-405e-8a23-9f1450984307/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/5.0.1427.9490",
- "name": "5.0.1427.9490",
- "type": "Microsoft.ServiceFabric/environments/clusterVersions",
- "properties": {
- "codeVersion": "5.0.1427.9490",
- "supportExpiryUtc": "2016-11-26T23:59:59.9999999",
- "environment": "Windows"
- }
- },
- {
- "id": "subscriptions/35349203-a0b3-405e-8a23-9f1450984307/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.0.1427.9490",
- "name": "5.1.1427.9490",
- "type": " Microsoft.ServiceFabric/environments/clusterVersions",
- "properties": {
- "codeVersion": "5.1.1427.9490",
- "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
- "environment": "Windows"
- }
- },
- {
- "id": "subscriptions/35349203-a0b3-405e-8a23-9f1450984307/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.4.1427.9490",
- "name": "4.4.1427.9490",
- "type": " Microsoft.ServiceFabric/environments/clusterVersions",
- "properties": {
- "codeVersion": "4.4.1427.9490",
- "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
- "environment": "Linux"
- }
- }
- ]
- }
+ "apiVersion": "2020-12-01-preview",
+ "type": "Microsoft.ServiceFabric/clusters",
+ ...
+ "fabricSettings": [...],
+ "managementEndpoint": ...,
+ "nodeTypes": [...],
+ "provisioningState": ...,
+ "reliabilityLevel": ...,
+ "upgradeMode": "Automatic",
+ "upgradeWave": "Wave1",
+ ...
+```
+
+Once you deploy the updated template, your cluster will be enrolled in the specified wave for the next upgrade period and all subsequent periods.
+
+You can [register for email notifications](#register-for-notifications) with links to further help if a cluster upgrade fails.
+
+### Register for notifications
+
+You can register for notifications when a cluster upgrade fails. An email will be sent to your designated email address(es) with details on the upgrade failure and links to further help.
+
+> [!NOTE]
+> Enrollment in wave deployment is not required to receive notifications for upgrade failures.
+
+To enroll in notifications, add a `notifications` section to your cluster resource template, and designate one or more email addresses (*receivers*) to receive notifications:
+
+```json
+ "apiVersion": "2020-12-01-preview",
+ "type": "Microsoft.ServiceFabric/clusters",
+ ...
+ "upgradeMode": "Automatic",
+ "upgradeWave": "Wave1",
+ "notifications": [
+ {
+ "isEnabled": true,
+ "notificationCategory": "WaveProgress",
+ "notificationLevel": "Critical",
+ "notificationTargets": [
+ {
+ "notificationChannel": "EmailUser",
+ "receivers": [
+ "devops@contoso.com"
+ ]
+ }]
+ }]
+```
+
+Once you deploy your updated template, you'll be enrolled for upgrade failure notifications.
+
+## Custom policies for manual upgrades
+
+You can specify custom health policies for manual cluster upgrades. These policies get applied each time you select a new runtime version, which triggers the system to kick off the upgrade of your cluster. If you do not override the policies, the defaults are used.
+
+You can specify the custom health policies or review the current settings under the **Fabric upgrades** section of your cluster resource in the Azure portal by selecting the *Custom* option for **Upgrade policy**.
++
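If you manage the cluster through a Resource Manager template instead of the portal, the equivalent knobs live in the `upgradeDescription` block of the cluster resource. The fragment below is only a sketch: the property names come from the *Microsoft.ServiceFabric/clusters* schema, but the duration and percentage values are illustrative placeholders, so validate them against the API version you deploy with.

```json
"upgradeMode": "Manual",
"upgradeDescription": {
  "forceRestart": false,
  "healthCheckWaitDuration": "00:05:00",
  "healthCheckStableDuration": "00:05:00",
  "healthCheckRetryTimeout": "00:45:00",
  "upgradeTimeout": "12:00:00",
  "upgradeDomainTimeout": "02:00:00",
  "upgradeReplicaSetCheckTimeout": "00:10:00",
  "healthPolicy": {
    "maxPercentUnhealthyNodes": 0,
    "maxPercentUnhealthyApplications": 0
  }
}
```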
+## Query for supported cluster versions
+
+You can use the [Azure REST API](/rest/api/azure/) to list all Service Fabric runtime versions ([clusterVersions](/rest/api/servicefabric/sfrp-api-clusterversions_list)) available for the specified location and your subscription.
+
+You can also reference [Service Fabric versions](service-fabric-versions.md) for further details on supported versions and operating systems.
+
+```REST
+GET https://<endpoint>/subscriptions/{{subscriptionId}}/providers/Microsoft.ServiceFabric/locations/{{location}}/clusterVersions?api-version=2018-02-01
+
+"value": [
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/5.0.1427.9490",
+ "name": "5.0.1427.9490",
+ "type": "Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "5.0.1427.9490",
+ "supportExpiryUtc": "2016-11-26T23:59:59.9999999",
+ "environment": "Windows"
+ }
+ },
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.0.1427.9490",
+ "name": "5.1.1427.9490",
+ "type": " Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "5.1.1427.9490",
+ "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
+ "environment": "Windows"
+ }
+ },
+ {
+ "id": "subscriptions/########-####-####-####-############/providers/Microsoft.ServiceFabric/environments/Windows/clusterVersions/4.4.1427.9490",
+ "name": "4.4.1427.9490",
+ "type": " Microsoft.ServiceFabric/environments/clusterVersions",
+ "properties": {
+ "codeVersion": "4.4.1427.9490",
+ "supportExpiryUtc": "9999-12-31T23:59:59.9999999",
+ "environment": "Linux"
+ }
+ }
+]
+}
```
+The `supportExpiryUtc` value in the output reports when a given release is expiring or has expired. The latest release will not have a valid date, but rather a value of *9999-12-31T23:59:59.9999999*, which just means that the expiry date is not yet set.
+
## Next steps
-* Learn how to customize some of the [service fabric cluster fabric settings](service-fabric-cluster-fabric-settings.md)
-* Learn how to [scale your cluster in and out](service-fabric-cluster-scale-in-out.md)
+
+* [Manage Service Fabric upgrades](service-fabric-cluster-upgrade-version-azure.md)
+* Customize your [Service Fabric cluster settings](service-fabric-cluster-fabric-settings.md)
+* [Scale your cluster in and out](service-fabric-cluster-scale-in-out.md)
* Learn about [application upgrades](service-fabric-application-upgrade.md)
+
<!--Image references-->
[CertificateUpgrade]: ./media/service-fabric-cluster-upgrade/CertificateUpgrade2.png
[AddingProbes]: ./media/service-fabric-cluster-upgrade/addingProbes2.PNG
service-fabric Service Fabric Cluster Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-cluster-upgrade.md
Title: Upgrade an Azure Service Fabric cluster
-description: Learn about upgrading the version or configuration of an Azure Service Fabric clusterΓÇösetting cluster update mode, upgrading certificates, adding application ports, doing OS patches, and what you can expect when the upgrades are performed.
+ Title: Upgrading Azure Service Fabric clusters
+description: Learn about options for updating your Azure Service Fabric cluster
Previously updated : 11/12/2018 Last updated : 03/26/2021
-# Upgrading and updating an Azure Service Fabric cluster
+# Upgrading and updating Azure Service Fabric clusters
-For any modern system, designing for upgradability is key to achieving long-term success of your product. An Azure Service Fabric cluster is a resource that you own, but is partly managed by Microsoft. This article describes what is managed automatically and what you can configure yourself.
+An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. This article describes the options for when and how to update your Azure Service Fabric cluster.
-## Controlling the fabric version that runs on your cluster
+## Automatic versus manual upgrades
-Make sure your cluster is always running a [supported fabric version](service-fabric-versions.md). Each time Microsoft announces the release of a new version of Service Fabric, the previous version is marked for end of support after a minimum of 60 days from that date. New releases are announced on the [Service Fabric team blog](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric).
+It's critical to ensure your Service Fabric cluster is always running a [supported runtime version](service-fabric-versions.md). Each time Microsoft announces the release of a new version of Service Fabric, the previous version is marked for *end of support* after a minimum of 60 days from that date. New releases are announced on the [Service Fabric team blog](https://techcommunity.microsoft.com/t5/azure-service-fabric/bg-p/Service-Fabric).
-14 days prior to the expiry of the release your cluster is running, a health event is generated that puts your cluster into a warning health state. The cluster remains in a warning state until you upgrade to a supported fabric version.
+Fourteen days before the expiry of the release your cluster is running, a health event is generated that puts your cluster into a *Warning* health state. The cluster remains in a warning state until you upgrade to a supported runtime version.
-You can set your cluster to receive automatic fabric upgrades as they are released by Microsoft or you can select a supported fabric version you want your cluster to be on. To learn more, read [upgrade the Service Fabric version of your cluster](service-fabric-cluster-upgrade-version-azure.md).
+You can set your cluster to receive automatic Service Fabric upgrades as they are released by Microsoft, or you can manually choose from a list of currently supported versions. These options are available in the **Fabric upgrades** section of your Service Fabric cluster resource.
-## Fabric upgrade behavior during automatic upgrades
-Microsoft maintains the fabric code and configuration that runs in an Azure cluster. We perform automatic monitored upgrades to the software on an as-needed basis. These upgrades could be code, configuration, or both. To make sure that your application suffers no impact or minimal impact due to these upgrades, upgrades are performed in the following phases:
+You can also set your cluster upgrade mode and select a runtime version [using a Resource Manager template](service-fabric-cluster-upgrade-version-azure.md#resource-manager-template).
+
+Automatic upgrade is the recommended mode: it ensures your cluster stays in a supported state and benefits from the latest fixes and features, while letting you schedule updates in the way that is least disruptive to your workloads by using a [wave deployment](#wave-deployment-for-automatic-upgrades) strategy.
+
+## Wave deployment for automatic upgrades
+
+With wave deployment, you can minimize the disruption of an upgrade to your cluster by selecting the maturity level of an upgrade, depending on your workload. For example, you can set up a *Test* -> *Stage* -> *Production* wave deployment pipeline for your various Service Fabric clusters in order to test the compatibility of a runtime upgrade before you apply it to your production workloads.
+
+To opt in to wave deployment, specify one of the following wave values for your cluster (in its deployment template):
+
+* **Wave 0**: Clusters are updated as soon as a new Service Fabric build is released. Intended for test/dev clusters.
+* **Wave 1**: Clusters are updated one week (seven days) after a new build is released. Intended for pre-prod/staging clusters.
+* **Wave 2**: Clusters are updated two weeks (14 days) after a new build is released. Intended for production clusters.
+
+You can register for email notifications with links to further help if a cluster upgrade fails. See [Wave deployment for automatic upgrades](service-fabric-cluster-upgrade-version-azure.md#wave-deployment-for-automatic-upgrades) to get started.
+
+## Phases of automatic upgrade
+
+Microsoft maintains the Service Fabric runtime code and configuration that runs in an Azure cluster. We perform automatic, monitored upgrades to the software on an as-needed basis. These upgrades could be code, configuration, or both. To minimize the impact of these upgrades on your applications, they are performed in the following phases:
### Phase 1: An upgrade is performed by using all cluster health policies
If the cluster health policies are not met, the upgrade is rolled back and an em
* Suggested remedial actions, if any. * The number of days (*n*) until we execute Phase 2.
-We try to execute the same upgrade a few more times in case any upgrades failed for infrastructure reasons. After the *n* days from the date the email was sent, we proceed to Phase 2.
+We try to execute the same upgrade a few more times in case any upgrades failed for infrastructure reasons. After *n* days from the date the email was sent, we continue to Phase 2.
-If the cluster health policies are met, the upgrade is considered successful and marked complete. This can happen during the initial upgrade or any of the upgrade reruns in this phase. There is no email confirmation of a successful run. This is to avoid sending you too many emails; receiving an email should be seen as an exception to normal. We expect most of the cluster upgrades to succeed without impacting your application availability.
+If the cluster health policies are met, the upgrade is considered successful and marked complete. This situation can happen during the initial upgrade or any of the upgrade reruns in this phase. There is no email confirmation of a successful run, to avoid sending excessive emails. Receiving an email indicates an exception to normal operations. We expect most of the cluster upgrades to succeed without impacting your application availability.
### Phase 2: An upgrade is performed by using default health policies only
-The health policies in this phase are set in such a way that the number of applications that were healthy at the beginning of the upgrade remains the same for the duration of the upgrade process. As in Phase 1, the Phase 2 upgrades proceed one upgrade domain at a time, and the applications that were running in the cluster continue to run without any downtime. The cluster health policies (a combination of node health and the health all the applications running in the cluster) are adhered to for the duration of the upgrade.
+The health policies in this phase are set in such a way that the number of applications that were healthy at the beginning of the upgrade remains the same during the upgrade process. As in Phase 1, the Phase 2 upgrades proceed one upgrade domain at a time, and the applications that were running in the cluster continue to run without any downtime. The cluster health policies (a combination of node health and the health of all the applications running in the cluster) are adhered to during the upgrade.
If the cluster health policies in effect are not met, the upgrade is rolled back. Then an email is sent to the owner of the subscription. The email contains the following information:
If the cluster health policies are met, the upgrade is considered successful and
### Phase 3: An upgrade is performed by using aggressive health policies
-These health policies in this phase are geared towards completion of the upgrade rather than the health of the applications. Few cluster upgrades end up in this phase. If your cluster gets to this phase, there is a good chance that your application becomes unhealthy and/or lose availability.
+The health policies in this phase are geared towards completion of the upgrade rather than the health of the applications. Few cluster upgrades end up in this phase. If your cluster gets to this phase, there is a good chance that your application becomes unhealthy and/or loses availability.
Similar to the other two phases, Phase 3 upgrades proceed one upgrade domain at a time.
An email with this information is sent to the subscription owner, along with the
If the cluster health policies are met, the upgrade is considered successful and marked complete. This can happen during the initial upgrade or any of the upgrade reruns in this phase. There is no email confirmation of a successful run.
-## Manage certificates
+## Custom policies for manual upgrades
+
+You can specify custom policies for manual cluster upgrades. These policies get applied each time you select a new runtime version, which triggers the system to kick off the upgrade of your cluster. If you do not override the policies, the defaults are used. For more information, see [Custom policies for manual upgrades](service-fabric-cluster-upgrade-version-azure.md#custom-policies-for-manual-upgrades).
+
+## Other cluster updates
+
+Outside of upgrading the runtime, there are a number of other actions you may need to perform to keep your cluster up to date, including the following:
+
+### Managing certificates
Service Fabric uses [X.509 server certificates](service-fabric-cluster-security.md) that you specify when you create a cluster to secure communications between cluster nodes and authenticate clients. You can add, update, or delete certificates for the cluster and client in the [Azure portal](https://portal.azure.com) or using PowerShell/Azure CLI. To learn more, read [add or remove certificates](service-fabric-cluster-security-update-certs-azure.md)
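On the cluster resource itself, the certificate surfaces as the `certificate` property; the fragment below is a sketch with placeholder thumbprints. Keep in mind this is only one half of a rotation: the certificate also has to be installed on the underlying virtual machine scale sets, which the linked article walks through.

```json
"certificate": {
  "thumbprint": "<primary-certificate-thumbprint>",
  "thumbprintSecondary": "<secondary-certificate-thumbprint>",
  "x509StoreName": "My"
}
```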
-## Open application ports
+### Opening application ports
You can change application ports by changing the Load Balancer resource properties that are associated with the node type. You can use the Azure portal, or you can use PowerShell/Azure CLI. For more information, read [Open application ports for a cluster](create-load-balancer-rule.md).
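As a sketch of the template shape, an extra application port typically means one more entry under `loadBalancingRules` (plus a matching probe) on the *Microsoft.Network/loadBalancers* resource. The rule name and variable IDs below are invented for the example; the linked article covers the portal and CLI equivalents.

```json
"loadBalancingRules": [
  {
    "name": "AppPortRule8080",
    "properties": {
      "protocol": "Tcp",
      "frontendPort": 8080,
      "backendPort": 8080,
      "frontendIPConfiguration": { "id": "[variables('lbFrontendIpConfigId')]" },
      "backendAddressPool": { "id": "[variables('lbBackendPoolId')]" },
      "probe": { "id": "[variables('lbProbeId')]" }
    }
  }
]
```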
-## Define node properties
+### Defining node properties
Sometimes you may want to ensure that certain workloads run only on certain types of nodes in the cluster. For example, some workloads may require GPUs or SSDs while others may not. For each of the node types in a cluster, you can add custom node properties to cluster nodes. Placement constraints are the statements attached to individual services that select for one or more node properties. Placement constraints define where services should run. For details on the use of placement constraints, node properties, and how to define them, read [node properties and placement constraints](service-fabric-cluster-resource-manager-cluster-description.md#node-properties-and-placement-constraints).
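In a cluster template, custom node properties are declared per node type through the `placementProperties` map, and services then select on them with placement constraint expressions such as `(HasSSD == true)`. The property names and values in this sketch (`HasSSD`, `NodeColor`) are invented for illustration.

```json
"nodeTypes": [
  {
    "name": "BackendType",
    "placementProperties": {
      "HasSSD": "true",
      "NodeColor": "green"
    }
  }
]
```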
-## Add capacity metrics
+### Adding capacity metrics
For each of the node types, you can add custom capacity metrics that you want to use in your applications to report load. For details on the use of capacity metrics to report load, refer to the Service Fabric Cluster Resource Manager Documents on [Describing Your Cluster](service-fabric-cluster-resource-manager-cluster-description.md) and [Metrics and Load](service-fabric-cluster-resource-manager-metrics.md).
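Capacity metrics are declared on the node type as well, alongside the `placementProperties` shown in the previous sketch. The metric names and values below are examples you define for your own services, not built-in values.

```json
"capacities": {
  "MemoryInMb": "4096",
  "SomeAppSpecificMetric": "100"
}
```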
-## Set health policies for automatic upgrades
-
-You can specify custom health policies for fabric upgrade. If you have set your cluster to Automatic fabric upgrades, then these policies get applied to the Phase-1 of the automatic fabric upgrades.
-If you have set your cluster for Manual fabric upgrades, then these policies get applied each time you select a new version triggering the system to kick off the fabric upgrade in your cluster. If you do not override the policies, the defaults are used.
-
-You can specify the custom health policies or review the current settings under the "fabric upgrade" blade, by selecting the advanced upgrade settings. Review the following picture on how to.
-
-![Manage custom health policies][HealthPolices]
-
-## Customize Fabric settings for your cluster
+### Customizing settings for your cluster
Many different configuration settings can be customized on a cluster, such as the reliability level of the cluster and node properties. For more information, read [Service Fabric cluster fabric settings](service-fabric-cluster-fabric-settings.md).
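In a template, these settings are expressed as a `fabricSettings` array of named sections and parameters. The section and parameter below are just one example of the shape; consult the fabric settings reference for the settings that apply to your cluster.

```json
"fabricSettings": [
  {
    "name": "ClusterManager",
    "parameters": [
      {
        "name": "EnableDefaultServicesUpgrade",
        "value": "true"
      }
    ]
  }
]
```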
-## Patch the OS in the cluster nodes
+### Upgrading OS images for cluster nodes
-The patch orchestration application (POA) is a Service Fabric application that automates operating system patching on a Service Fabric cluster without downtime. The [Patch Orchestration Application for Windows](service-fabric-patch-orchestration-application.md) can be deployed on your cluster to install patches in an orchestrated manner while keeping the services available all the time.
+Enabling automatic OS image upgrades for your Service Fabric cluster nodes is a best practice. In order to do so, there are several cluster requirements and steps to take. Another option is using the Patch Orchestration Application (POA), a Service Fabric application that automates operating system patching on a Service Fabric cluster without downtime. To learn more about these options, see [Patch the Windows operating system in your Service Fabric cluster](service-fabric-patch-orchestration-application.md).
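The automatic OS image upgrade switch lives on the virtual machine scale set behind each node type rather than on the cluster resource. A rough sketch of the relevant *Microsoft.Compute/virtualMachineScaleSets* properties follows; the prerequisites (such as node type durability level) are covered in the linked article, and in-VM Windows Update is typically disabled so the two mechanisms don't conflict.

```json
"upgradePolicy": {
  "mode": "Automatic",
  "automaticOSUpgradePolicy": {
    "enableAutomaticOSUpgrade": true,
    "disableAutomaticRollback": false
  }
}
```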
## Next steps
-* Learn how to customize some of the [service fabric cluster fabric settings](service-fabric-cluster-fabric-settings.md)
-* Learn how to [scale your cluster in and out](service-fabric-cluster-scale-in-out.md)
+* [Manage Service Fabric upgrades](service-fabric-cluster-upgrade-version-azure.md)
+* Customize your [Service Fabric cluster settings](service-fabric-cluster-fabric-settings.md)
+* [Scale your cluster in and out](service-fabric-cluster-scale-in-out.md)
* Learn about [application upgrades](service-fabric-application-upgrade.md)
-
-<!--Image references-->
-[CertificateUpgrade]: ./media/service-fabric-cluster-upgrade/CertificateUpgrade2.png
-[AddingProbes]: ./media/service-fabric-cluster-upgrade/addingProbes2.PNG
-[AddingLBRules]: ./media/service-fabric-cluster-upgrade/addingLBRules.png
-[HealthPolices]: ./media/service-fabric-cluster-upgrade/Manage_AutomodeWadvSettings.PNG
-[ARMUpgradeMode]: ./media/service-fabric-cluster-upgrade/ARMUpgradeMode.PNG
-[Create_Manualmode]: ./media/service-fabric-cluster-upgrade/Create_Manualmode.PNG
-[Manage_Automaticmode]: ./media/service-fabric-cluster-upgrade/Manage_Automaticmode.PNG
service-fabric Service Fabric Content Roadmap https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-content-roadmap.md
A [guest executable](service-fabric-guest-executables-introduction.md) is an exi
## Application lifecycle As with other platforms, an application on Service Fabric usually goes through the following phases: design, development, testing, deployment, upgrade, maintenance, and removal. Service Fabric provides first-class support for the full application lifecycle of cloud applications, from development through deployment, daily management, and maintenance to eventual decommissioning. The service model enables several different roles to participate independently in the application lifecycle. [Service Fabric application lifecycle](service-fabric-application-lifecycle.md) provides an overview of the APIs and how they are used by the different roles throughout the phases of the Service Fabric application lifecycle.
-The entire app lifecycle can be managed using [PowerShell cmdlets](/powershell/module/servicefabric/?view=azureservicefabricps), [CLI commands](service-fabric-sfctl.md), [C# APIs](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), [Java APIs](/jav) or [Jenkins](/azure/developer/jenkins/deploy-to-service-fabric-cluster).
+The entire app lifecycle can be managed using [PowerShell cmdlets](/powershell/module/servicefabric/?preserve-view=true&view=azureservicefabricps), [CLI commands](service-fabric-sfctl.md), [C# APIs](/dotnet/api/system.fabric.fabricclient.applicationmanagementclient), [Java APIs](/jav) or [Jenkins](/azure/developer/jenkins/deploy-to-service-fabric-cluster).
## Test applications and services To create truly cloud-scale services, it is critical to verify that your applications and services can withstand real-world failures. The Fault Analysis Service is designed for testing services that are built on Service Fabric. With the [Fault Analysis Service](service-fabric-testability-overview.md), you can induce meaningful faults and run complete test scenarios against your applications. These faults and scenarios exercise and validate the numerous states and transitions that a service will experience throughout its lifetime, all in a controlled, safe, and consistent manner.
Out of the box, Service Fabric components report health on all entities in the c
Service Fabric provides multiple ways to [view health reports](service-fabric-view-entities-aggregated-health.md) aggregated in the health store: * [Service Fabric Explorer](service-fabric-visualizing-your-cluster.md) or other visualization tools.
-* Health queries (through [PowerShell](/powershell/module/servicefabric/?view=azureservicefabricps), [CLI](service-fabric-sfctl.md), the [C# FabricClient APIs](/dotnet/api/system.fabric.fabricclient.healthclient) and [Java FabricClient APIs](/java/api/system.fabric), or [REST APIs](/rest/api/servicefabric)).
+* Health queries (through [PowerShell](/powershell/module/servicefabric/?preserve-view=true&view=azureservicefabricps), [CLI](service-fabric-sfctl.md), the [C# FabricClient APIs](/dotnet/api/system.fabric.fabricclient.healthclient) and [Java FabricClient APIs](/java/api/system.fabric), or [REST APIs](/rest/api/servicefabric)).
* General queries that return a list of entities that have health as one of the properties (through PowerShell, CLI, the APIs, or REST). ## Monitoring and diagnostics
service-fabric Service Fabric Controlled Chaos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-controlled-chaos.md
Title: Induce Chaos in Service Fabric clusters description: Using Fault Injection and Cluster Analysis Service APIs to manage Chaos in the cluster. Previously updated : 02/05/2018 Last updated : 03/26/2021 # Induce controlled Chaos in Service Fabric clusters
Large-scale distributed systems like cloud infrastructures are inherently unreli
The [Fault Injection and Cluster Analysis Service](./service-fabric-testability-overview.md) (also known as the Fault Analysis Service) gives developers the ability to induce faults to test their services. These targeted simulated faults, like [restarting a partition](/powershell/module/servicefabric/start-servicefabricpartitionrestart), can help exercise the most common state transitions. However targeted simulated faults are biased by definition and thus may miss bugs that show up only in hard-to-predict, long and complicated sequence of state transitions. For an unbiased testing, you can use Chaos.
-Chaos simulates periodic, interleaved faults (both graceful and ungraceful) throughout the cluster over extended periods of time. A graceful fault consists of a set of Service Fabric API calls, for example, restart replica fault is a graceful fault because this is a close followed by an open on a replica. Remove replica, move primary replica, and move secondary replica are the other graceful faults exercised by Chaos. Ungraceful faults are process exits, like restart node and restart code package.
+Chaos simulates periodic, interleaved faults (both graceful and ungraceful) throughout the cluster over extended periods of time. A graceful fault consists of a set of Service Fabric API calls; for example, the restart replica fault is a graceful fault because it is a close followed by an open on a replica. Remove replica, move primary replica, move secondary replica, and move instance are the other graceful faults exercised by Chaos. Ungraceful faults are process exits, like restart node and restart code package.
Once you have configured Chaos with the rate and the kind of faults, you can start Chaos through C#, Powershell, or REST API to start generating faults in the cluster and in your services. You can configure Chaos to run for a specified time period (for example, for one hour), after which Chaos stops automatically, or you can call StopChaos API (C#, Powershell, or REST) to stop it at any time.
Chaos induces faults from the following categories:
* Restart a replica * Move a primary replica (configurable) * Move a secondary replica (configurable)
+* Move an instance
Chaos runs in multiple iterations. Each iteration consists of faults and cluster validation for the specified period. You can configure the time spent for the cluster to stabilize and for validation to succeed. If a failure is found in cluster validation, Chaos generates and persists a ValidationFailedEvent with the UTC timestamp and the failure details. For example, consider an instance of Chaos that is set to run for an hour with a maximum of three concurrent faults. Chaos induces three faults, and then validates the cluster health. It iterates through the previous step until it is explicitly stopped through the StopChaosAsync API or one-hour passes. If the cluster becomes unhealthy in any iteration (that is, it does not stabilize or it does not become healthy within the passed-in MaxClusterStabilizationTimeout), Chaos generates a ValidationFailedEvent. This event indicates that something has gone wrong and might need further investigation.
To get which faults Chaos induced, you can use GetChaosReport API (PowerShell, C
> Regardless how high a value *MaxConcurrentFaults* has, Chaos guarantees - in the absence of external faults - there is no quorum loss or data loss. >
-* **EnableMoveReplicaFaults**: Enables or disables the faults that cause the primary or secondary replicas to move. These faults are enabled by default.
+* **EnableMoveReplicaFaults**: Enables or disables the faults that cause primary replicas, secondary replicas, or instances to move. These faults are enabled by default.
* **WaitTimeBetweenIterations**: The amount of time to wait between iterations. That is, the amount of time Chaos will pause after having executed a round of faults and having finished the corresponding validation of the health of the cluster. The higher the value, the lower is the average fault injection rate. * **WaitTimeBetweenFaults**: The amount of time to wait between two consecutive faults in a single iteration. The higher the value, the lower the concurrency of (or the overlap between) faults. * **ClusterHealthPolicy**: Cluster health policy is used to validate the health of the cluster in between Chaos iterations. If the cluster health is in error or if an unexpected exception happens during fault execution, Chaos will wait for 30 minutes before the next health-check - to provide the cluster with some time to recuperate. * **Context**: A collection of (string, string) type key-value pairs. The map can be used to record information about the Chaos run. There cannot be more than 100 such pairs and each string (key or value) can be at most 4095 characters long. This map is set by the starter of the Chaos run to optionally store the context about the specific run. * **ChaosTargetFilter**: This filter can be used to target Chaos faults only to certain node types or only to certain application instances. If ChaosTargetFilter is not used, Chaos faults all cluster entities. If ChaosTargetFilter is used, Chaos faults only the entities that meet the ChaosTargetFilter specification. NodeTypeInclusionList and ApplicationInclusionList allow union semantics only. In other words, it is not possible to specify an intersection of NodeTypeInclusionList and ApplicationInclusionList. For example, it is not possible to specify "fault this application only when it is on that node type." Once an entity is included in either NodeTypeInclusionList or ApplicationInclusionList, that entity cannot be excluded using ChaosTargetFilter. Even if applicationX does not appear in ApplicationInclusionList, in some Chaos iteration applicationX can be faulted because it happens to be on a node of nodeTypeY that is included in NodeTypeInclusionList. If both NodeTypeInclusionList and ApplicationInclusionList are null or empty, an ArgumentException is thrown. * **NodeTypeInclusionList**:
- A list of node types to include in Chaos faults. All types of faults (restart node, restart codepackage, remove replica, restart replica, move primary, and move secondary) are enabled for the nodes of these node types. If a nodetype (say NodeTypeX) does not appear in the NodeTypeInclusionList, then node level faults (like NodeRestart) will never be enabled for the nodes of NodeTypeX, but code package and replica faults can still be enabled for NodeTypeX if an application in the ApplicationInclusionList happens to reside on a node of NodeTypeX. At most 100 node type names can be included in this list, to increase this number, a config upgrade is required for MaxNumberOfNodeTypesInChaosTargetFilter configuration.
+ A list of node types to include in Chaos faults. All types of faults (restart node, restart codepackage, remove replica, restart replica, move primary, move secondary, and move instance) are enabled for the nodes of these node types. If a nodetype (say NodeTypeX) does not appear in the NodeTypeInclusionList, then node level faults (like NodeRestart) will never be enabled for the nodes of NodeTypeX, but code package and replica faults can still be enabled for NodeTypeX if an application in the ApplicationInclusionList happens to reside on a node of NodeTypeX. At most 100 node type names can be included in this list, to increase this number, a config upgrade is required for MaxNumberOfNodeTypesInChaosTargetFilter configuration.
* **ApplicationInclusionList**:
- A list of application URIs to include in Chaos faults. All replicas belonging to services of these applications are amenable to replica faults (restart replica, remove replica, move primary, and move secondary) by Chaos. Chaos may restart a code package only if the code package hosts replicas of these applications only. If an application does not appear in this list, it can still be faulted in some Chaos iteration if the application ends up on a node of a node type that is included in NodeTypeInclusionList. However if applicationX is tied to nodeTypeY through placement constraints and applicationX is absent from ApplicationInclusionList and nodeTypeY is absent from NodeTypeInclusionList, then applicationX will never be faulted. At most 1000 application names can be included in this list, to increase this number, a config upgrade is required for MaxNumberOfApplicationsInChaosTargetFilter configuration.
+ A list of application URIs to include in Chaos faults. All replicas belonging to services of these applications are amenable to replica faults (restart replica, remove replica, move primary, move secondary, and move instance) by Chaos. Chaos may restart a code package only if the code package hosts replicas of these applications only. If an application does not appear in this list, it can still be faulted in some Chaos iteration if the application ends up on a node of a node type that is included in NodeTypeInclusionList. However if applicationX is tied to nodeTypeY through placement constraints and applicationX is absent from ApplicationInclusionList and nodeTypeY is absent from NodeTypeInclusionList, then applicationX will never be faulted. At most 1000 application names can be included in this list, to increase this number, a config upgrade is required for MaxNumberOfApplicationsInChaosTargetFilter configuration.
## How to run Chaos
class Program
MaxPercentUnhealthyNodes = 100 };
- // All types of faults, restart node, restart code package, restart replica, move primary replica,
- // and move secondary replica will happen for nodes of type 'FrontEndType'
+ // All types of faults, restart node, restart code package, restart replica, move primary
+ // replica, move secondary replica, and move instance will happen for nodes of type 'FrontEndType'
var nodetypeInclusionList = new List<string> { "FrontEndType"}; // In addition to the faults included by nodetypeInclusionList,
- // restart code package, restart replica, move primary replica, move secondary replica faults will
- // happen for 'fabric:/TestApp2' even if a replica or code package from 'fabric:/TestApp2' is residing
- // on a node which is not of type included in nodeypeInclusionList.
+ // restart code package, restart replica, move primary replica, move secondary replica,
+ // and move instance faults will happen for 'fabric:/TestApp2' even if a replica or code
+ // package from 'fabric:/TestApp2' is residing on a node which is not of type included
+ // in nodetypeInclusionList.
var applicationInclusionList = new List<string> { "fabric:/TestApp2" }; // List of cluster entities to target for Chaos faults.
service-fabric Service Fabric Testability Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-testability-actions.md
Title: Simulate failures in Azure microservices description: This article talks about the testability actions found in Microsoft Azure Service Fabric. Previously updated : 06/07/2017 Last updated : 03/26/2021 # Testability actions
For better quality validation, run the service and business workload while induc
| InvokeQuorumLoss |Puts a given stateful service partition into quorum loss. |InvokeQuorumLossAsync |Invoke-ServiceFabricQuorumLoss |Graceful | | MovePrimary |Moves the specified primary replica of a stateful service to the specified cluster node. |MovePrimaryAsync |Move-ServiceFabricPrimaryReplica |Graceful | | MoveSecondary |Moves the current secondary replica of a stateful service to a different cluster node. |MoveSecondaryAsync |Move-ServiceFabricSecondaryReplica |Graceful |
+| MoveInstance | Moves the current instance of a stateless service to a different cluster node. | MoveInstanceAsync | Move-ServiceFabricInstance | Graceful |
| RemoveReplica |Simulates a replica failure by removing a replica from a cluster. This will close the replica and will transition it to role 'None', removing all of its state from the cluster. |RemoveReplicaAsync |Remove-ServiceFabricReplica |Graceful | | RestartDeployedCodePackage |Simulates a code package process failure by restarting a code package deployed on a node in a cluster. This aborts the code package process, which will restart all the user service replicas hosted in that process. |RestartDeployedCodePackageAsync |Restart-ServiceFabricDeployedCodePackage |Ungraceful | | RestartNode |Simulates a Service Fabric cluster node failure by restarting a node. |RestartNodeAsync |Restart-ServiceFabricNode |Ungraceful |
service-fabric Service Fabric Testability Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-testability-scenarios.md
class Test
PowerShell
-The Service Fabric Powershell module includes two ways to begin a chaos scenario. `Invoke-ServiceFabricChaosTestScenario` is client-based, and if the client machine is shutdown midway through the test, no further faults will be introduced. Alternatively, there is a set of commands meant to keep the test running in the event of machine shutdown. `Start-ServiceFabricChaos` uses a stateful and reliable system service called FaultAnalysisService, ensuring faults will remain introduced until the TimeToRun is up. `Stop-ServiceFabricChaos` can be used to manually stop the scenario, and `Get-ServiceFabricChaosReport` will obtain a report. For more information see the [Azure Service Fabric Powershell reference](/powershell/module/ServiceFabric/New-ServiceFabricService?view=azureservicefabricps) and [Inducing controlled chaos in Service Fabric clusters](service-fabric-controlled-chaos.md).
+The Service Fabric PowerShell module includes two ways to begin a chaos scenario. `Invoke-ServiceFabricChaosTestScenario` is client-based, and if the client machine is shut down midway through the test, no further faults will be introduced. Alternatively, there is a set of commands meant to keep the test running in the event of machine shutdown. `Start-ServiceFabricChaos` uses a stateful and reliable system service called FaultAnalysisService, ensuring faults will remain introduced until the TimeToRun is up. `Stop-ServiceFabricChaos` can be used to manually stop the scenario, and `Get-ServiceFabricChaosReport` will obtain a report. For more information, see the [Azure Service Fabric PowerShell reference](/powershell/module/ServiceFabric/New-ServiceFabricService?preserve-view=true&view=azureservicefabricps) and [Inducing controlled chaos in Service Fabric clusters](service-fabric-controlled-chaos.md).
```powershell $connection = "localhost:19000"
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/azure-stack-site-recovery.md
Then run a failover as follows:
### Fail back to Azure Stack
-When your primary site is up and running again, you can fail back from Azure to Azure Stack. To do this, follow the steps listed out [here](/azure-stack/operator/site-recovery-failback?view=azs-2005).
+When your primary site is up and running again, you can fail back from Azure to Azure Stack. To do this, follow the steps listed out [here](/azure-stack/operator/site-recovery-failback).
## Conclusion
site-recovery Hyper V Deployment Planner Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-deployment-planner-cost-estimation.md
eastus, eastus2, westus, centralus, northcentralus, southcentralus, northeurope,
Site Recovery Deployment Planner can generate the cost report with any of the following currencies. |Currency|Name|Currency|Name|Currency|Name|
-|||||||||
+|||||||
|ARS|Argentine peso ($)|AUD|Australian dollar ($)|BRL|Brazilian real (R$)| |CAD|Canadian dollar ($)|CHF|Swiss franc (chf)|DKK|Danish krone (kr)|
-|EUR|Euro (€)|GBP|British pound (£)|HKD|Hong Kong dollar (HK$)|
+|EUR|Euro (&euro;)|GBP|British pound (£)|HKD|Hong Kong dollar (HK$)|
|IDR|Indonesia rupiah (Rp)|INR|Indian rupee (₹)|JPY|Japanese yen (¥)| |KRW|Korean won (₩)|MXN|Mexican peso (MX$)|MYR|Malaysian ringgit (RM$)|
-|NOK|Norwegian krone (kr)||NZD|New Zealand dollar ($)||RUB|Russian ruble (руб)|
+|NOK|Norwegian krone (kr)|NZD|New Zealand dollar ($)|RUB|Russian ruble (руб)|
|SAR|Saudi riyal (SR)|SEK|Swedish krona (kr)|TWD|Taiwanese dollar (NT$)| |TRY|Turkish lira (TL)|USD| US dollar ($)|ZAR|South African rand (R)|
site-recovery Hyper V Vmm Test Failover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-vmm-test-failover.md
You run a test failover from the primary to the secondary site. If you simply wa
When you run a test failover, you're asked to select network settings for test replica machines, as summarized in the table.
-| **Option** | **Details** | |
-| | | |
-| **None** | The test VM is created on the host on which the replica VM is located. It isnΓÇÖt added to the cloud, and isn't connected to any network.<br/><br/> You can connect the machine to a VM network after it has been created.| |
-| **Use existing** | The test VM is created on the host on which the replica VM is located. It isnΓÇÖt added to the cloud.<br/><br/>Create a VM network that's isolated from your production network.<br/><br/>If you're using a VLAN-based network, we recommend that you create a separate logical network (not used in production) in VMM for this purpose. This logical network is used to create VM networks for test failovers.<br/><br/>The logical network should be associated with at least one of the network adapters of all the Hyper-V servers that are hosting virtual machines.<br/><br/>For VLAN logical networks, the network sites that you add to the logical network should be isolated.<br/><br/>If youΓÇÖre using a Windows Network VirtualizationΓÇôbased logical network, Azure Site Recovery automatically creates isolated VM networks. | |
-| **Create a network** | A temporary test network is created automatically based on the setting that you specify in **Logical Network** and its related network sites.<br/><br/> Failover checks that VMs are created.<br/><br/> You should use this option if a recovery plan uses more than one VM network.<br/><br/> If you're using Windows Network Virtualization networks, this option can automatically create VM networks with the same settings (subnets and IP address pools) in the network of the replica virtual machine. These VM networks are cleaned up automatically after the test failover is complete.<br/><br/> The test VM is created on the host on which the replica virtual machine exists. It isnΓÇÖt added to the cloud.|
+| **Option** | **Details** |
+| | |
+| **None** | The test VM is created on the host on which the replica VM is located. It isn't added to the cloud, and isn't connected to any network.<br/><br/> You can connect the machine to a VM network after it has been created.|
+| **Use existing** | The test VM is created on the host on which the replica VM is located. It isn't added to the cloud.<br/><br/>Create a VM network that's isolated from your production network.<br/><br/>If you're using a VLAN-based network, we recommend that you create a separate logical network (not used in production) in VMM for this purpose. This logical network is used to create VM networks for test failovers.<br/><br/>The logical network should be associated with at least one of the network adapters of all the Hyper-V servers that are hosting virtual machines.<br/><br/>For VLAN logical networks, the network sites that you add to the logical network should be isolated.<br/><br/>If you're using a Windows Network Virtualization-based logical network, Azure Site Recovery automatically creates isolated VM networks. |
+| **Create a network** | A temporary test network is created automatically based on the setting that you specify in **Logical Network** and its related network sites.<br/><br/> Failover checks that VMs are created.<br/><br/> You should use this option if a recovery plan uses more than one VM network.<br/><br/> If you're using Windows Network Virtualization networks, this option can automatically create VM networks with the same settings (subnets and IP address pools) in the network of the replica virtual machine. These VM networks are cleaned up automatically after the test failover is complete.<br/><br/> The test VM is created on the host on which the replica virtual machine exists. It isn't added to the cloud.|
### Best practices
If a VM network is configured in VMM with Windows Network Virtualization, note t
If you simply want to check that a VM can fail over, you can run a test failover without an infrastructure. If you want to do a full DR drill to test app failover, you need to prepare the infrastructure at the secondary site: - If you run a test failover using an existing network, prepare Active Directory, DHCP, and DNS in that network.-- If you run a test failover with the option to create a VM network automatically, you need to add infrastructure resources to the automatically created network, before you run the test failover. In a recovery plan, you can facilitate this by adding a manual step before Group-1 in the recovery plan that youΓÇÖre going to use for the test failover. Then, add the infrastructure resources to the automatically created network before you run the test failover.
+- If you run a test failover with the option to create a VM network automatically, you need to add infrastructure resources to the automatically created network, before you run the test failover. In a recovery plan, you can facilitate this by adding a manual step before Group-1 in the recovery plan that you're going to use for the test failover. Then, add the infrastructure resources to the automatically created network before you run the test failover.
### Prepare DHCP
To run a test failover for application testing, you need a copy of the productio
### Prepare DNS Prepare a DNS server for the test failover as follows:
-* **DHCP**: If virtual machines use DHCP, the IP address of the test DNS should be updated on the test DHCP server. If youΓÇÖre using a network type of Windows Network Virtualization, the VMM server acts as the DHCP server. Therefore, the IP address of DNS should be updated in the test failover network. In this case, the virtual machines register themselves to the relevant DNS server.
+* **DHCP**: If virtual machines use DHCP, the IP address of the test DNS should be updated on the test DHCP server. If you're using a network type of Windows Network Virtualization, the VMM server acts as the DHCP server. Therefore, the IP address of DNS should be updated in the test failover network. In this case, the virtual machines register themselves to the relevant DNS server.
* **Static address**: If virtual machines use a static IP address, the IP address of the test DNS server should be updated in test failover network. You might need to update DNS with the IP address of the test virtual machines. You can use the following sample script for this purpose: ```powershell
site-recovery Vmware Azure Prepare Failback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-prepare-failback.md
Before you continue, get a quick overview with this video about how to fail back
You need a number of components and settings in place before you can reprotect and fail back from Azure.
-**Component**| **Details**
- |
-**On-premises configuration server** | The on-premises configuration server must be running and connected to Azure.<br/><br/> The VM you're failing back to must exist in the configuration server database. If disaster affects the configuration server, restore it with the same IP address to ensure that failback works.<br/><br/> If IP addresses of replicated machines were retained on failover, site-to-site connectivity (or ExpressRoute connectivity) should be established between Azure VMs machines and the failback NIC of the configuration server. For retained IP addresses the configuration server needs two NICs - one for source machine connectivity, and one for Azure failback connectivity. This avoids overlap of subnet address ranges for the source and failed over VMs.
-**Process server in Azure** | You need a process server in Azure before you can fail back to your on-premises site.<br/><br/> The process server receives data from the protected Azure VM, and sends it to the on-premises site.<br/><br/> You need a low-latency network between the process server and the protected VM, so we recommend that you deploy the process server in Azure for higher replication performance.<br/><br/> For proof-of-concept, you can use the on-premises process server, and ExpressRoute with private peering.<br/><br/> The process server should be in the Azure network in which the failed over VM is located. The process server must also be able to communicate with the on-premises configuration server and master target server.
-**Separate master target server** | The master target server receives failback data, and by default a Windows master target server runs on the on-premises configuration server.<br/><br/> A master target server can have up to 60 disks attached to it. VMs being failed back have more than a collective total of 60 disks, or if you're failing back large volumes of traffic, create a separate master target server for failback.<br/><br/> If machines are gathered into a replication group for multi-VM consistency, the VMs must all be Windows, or must all be Linux. Why? Because all VMs in a replication group must use the same master target server, and the master target server must have same operating system (With the same or a higher version) than those of the replicated machines.<br/><br/> The master target server shouldn't have any snapshots on its disks, otherwise reprotection and failback won't work.<br/><br/> The master target can't have a Paravirtual SCSI controller. The controller can only be an LSI Logic controller. Without an LSI Logic controller, reprotection fails.
-**Failback replication policy** | To replicate back to on-premises site, you need a failback policy. This policy is automatically created when you create a replication policy to Azure.<br/><br/> The policy is automatically associated with the configuration server. It's set to an RPO threshold of 15 minutes, recovery point retention of 24 hours, and app-consistent snapshot frequency is 60 minutes. The policy can't be edited.
-**Site-to-site VPN/ExpressRoute private peering** | Reprotection and failback needs a site-to-site VPN connection, or ExpressRoute private peering to replicate data.
+| **Component**| **Details** |
+| | |
+| **On-premises configuration server** | The on-premises configuration server must be running and connected to Azure.<br/><br/> The VM you're failing back to must exist in the configuration server database. If disaster affects the configuration server, restore it with the same IP address to ensure that failback works.<br/><br/> If IP addresses of replicated machines were retained on failover, site-to-site connectivity (or ExpressRoute connectivity) should be established between the Azure VMs and the failback NIC of the configuration server. For retained IP addresses, the configuration server needs two NICs - one for source machine connectivity, and one for Azure failback connectivity. This avoids overlap of subnet address ranges for the source and failed over VMs. |
+| **Process server in Azure** | You need a process server in Azure before you can fail back to your on-premises site.<br/><br/> The process server receives data from the protected Azure VM, and sends it to the on-premises site.<br/><br/> You need a low-latency network between the process server and the protected VM, so we recommend that you deploy the process server in Azure for higher replication performance.<br/><br/> For proof-of-concept, you can use the on-premises process server, and ExpressRoute with private peering.<br/><br/> The process server should be in the Azure network in which the failed over VM is located. The process server must also be able to communicate with the on-premises configuration server and master target server. |
+| **Separate master target server** | The master target server receives failback data, and by default a Windows master target server runs on the on-premises configuration server.<br/><br/> A master target server can have up to 60 disks attached to it. If VMs being failed back have more than a collective total of 60 disks, or if you're failing back large volumes of traffic, create a separate master target server for failback.<br/><br/> If machines are gathered into a replication group for multi-VM consistency, the VMs must all be Windows, or must all be Linux. Why? Because all VMs in a replication group must use the same master target server, and the master target server must have the same operating system (with the same or a higher version) as the replicated machines.<br/><br/> The master target server shouldn't have any snapshots on its disks; otherwise, reprotection and failback won't work.<br/><br/> The master target can't have a Paravirtual SCSI controller. The controller can only be an LSI Logic controller. Without an LSI Logic controller, reprotection fails. |
+| **Failback replication policy** | To replicate back to on-premises site, you need a failback policy. This policy is automatically created when you create a replication policy to Azure.<br/><br/> The policy is automatically associated with the configuration server. It's set to an RPO threshold of 15 minutes, recovery point retention of 24 hours, and app-consistent snapshot frequency is 60 minutes. The policy can't be edited. |
+| **Site-to-site VPN/ExpressRoute private peering** | Reprotection and failback needs a site-to-site VPN connection, or ExpressRoute private peering to replicate data. |
## Ports for reprotection/failback
spatial-anchors Tutorial Share Anchors Across Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spatial-anchors/tutorials/tutorial-share-anchors-across-devices.md
description: In this tutorial, you learn how to share Azure Spatial Anchor ident
- Last updated 11/20/2020
Select **OK**.
Open Visual Studio Code, and then open the project in the *Sharing\SharingServiceSample* folder.
-To deploy the sharing service through Visual Studio Code, follow the instructions in <a href="/aspnet/core/tutorials/publish-to-azure-webapp-using-vscode?view=aspnetcore-2.2#open-it-with-visual-studio-code" target="_blank">Publish an ASP.NET Core app to Azure with Visual Studio Code</a>. Start at the "Open it with Visual Studio Code" section. Do not create another ASP.NET project as explained in the preceding step, because you already have a project to be deployed and published: SharingServiceSample.
+To deploy the sharing service through Visual Studio Code, follow the instructions in <a href="/aspnet/core/tutorials/publish-to-azure-webapp-using-vscode?view=aspnetcore-2.2&preserve-view=true#open-it-with-visual-studio-code" target="_blank">Publish an ASP.NET Core app to Azure with Visual Studio Code</a>. Start at the "Open it with Visual Studio Code" section. Do not create another ASP.NET project as explained in the preceding step, because you already have a project to be deployed and published: SharingServiceSample.
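If you prefer a command-line workflow instead of Visual Studio Code, the following is a minimal sketch of one alternative, assuming the Azure CLI and a supported .NET Core SDK are installed and you're signed in with `az login`; the app name, resource group, location, and SKU are hypothetical placeholders, not values from this tutorial.

```powershell
# Run from the folder that contains the SharingServiceSample project.
cd .\Sharing\SharingServiceSample

# Create (or reuse) an App Service app and deploy the current project.
# The app name, resource group, location, and SKU below are hypothetical placeholders.
az webapp up --name "my-sharing-service" --resource-group "my-spatial-anchors-rg" --location "eastus" --sku F1
```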
spring-cloud Troubleshooting Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/troubleshooting-vnet.md
To set up the Azure Spring Cloud service instance by using the Resource Manager
| Error Message | How to fix | ||| | Resources created by Azure Spring Cloud were disallowed by policy. | Network resources will be created when you deploy Azure Spring Cloud in your own virtual network. Please check whether you have an [Azure Policy](../governance/policy/overview.md) assignment defined that blocks their creation. Resources that failed to be created can be found in the error message. |
-| Provided subnets have associated with route tables, please disassociate them. | Currently it is not supported to deploy Azure Spring Cloud in subnet associated with existing route tables, please dissociate them and try again. |
| Required traffic is not allowlisted. | Please refer to [Customer Responsibilities for Running Azure Spring Cloud in VNET](spring-cloud-vnet-customer-responsibilities.md) to ensure required traffic is allowlisted. | ## My application can't be registered
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/configuration.md
The following example configuration demonstrates how to override an error code.
}, { "route": "/.auth/login/twitter",
- "statusCode": 404,
+ "statusCode": 404
}, { "route": "/logout",
The following example configuration demonstrates how to override an error code.
}, "mimeTypes": { ".json": "text/json"
- },
+ }
} ```
static-web-apps Front End Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/front-end-frameworks.md
The intent of the table columns is explained by the following items:
| [Polymer](https://www.polymer-project.org/) | `build/default` | n/a | | [Preact](https://preactjs.com/) | `build` | n/a | | [React](https://reactjs.org/) | `build` | n/a |
+| [RedwoodJS](https://redwoodjs.com/) | `web/dist` | `yarn rw build` |
| [Stencil](https://stenciljs.com/) | `www` | n/a | | [Svelte](https://svelte.dev/) | `public` | n/a | | [Three.js](https://threejs.org/) | `/` | n/a |
static-web-apps Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/static-web-apps/get-started-cli.md
Now that the repository is created, you can create a static web app from the Azu
- `<RESOURCE_GROUP_NAME>`: Replace this value with an existing [Azure resource group name](../azure-resource-manager/management/manage-resources-cli.md).
- - See the [az group](/cli/azure/group?view=azure-cli-latest#az_group_list) documentation for details on listing resource groups.
+ - See the [az group](/cli/azure/group#az_group_list) documentation for details on listing resource groups.
- `<YOUR_GITHUB_ACCOUNT_NAME>`: Replace this value with your GitHub username.
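As a rough sketch of how those placeholders come together, assuming the `az staticwebapp` command group available in your CLI version; every name and the token below are placeholders rather than values from this quickstart:

```powershell
# List existing resource groups so you can choose a value for <RESOURCE_GROUP_NAME>.
az group list --output table

# Create the static web app. Every value below is a placeholder; the exact
# parameter set depends on your Azure CLI / staticwebapp extension version.
az staticwebapp create `
    --name "my-first-static-web-app" `
    --resource-group "<RESOURCE_GROUP_NAME>" `
    --source "https://github.com/<YOUR_GITHUB_ACCOUNT_NAME>/my-first-static-web-app" `
    --branch "main" `
    --location "eastus2" `
    --token "<YOUR_GITHUB_PERSONAL_ACCESS_TOKEN>"
```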
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-best-practices.md
# Best practices for using Azure Data Lake Storage Gen2
-In this article, you learn about best practices and considerations for working with Azure Data Lake Storage Gen2. This article provides information around security, performance, resiliency, and monitoring for Data Lake Storage Gen2. Before Data Lake Storage Gen2, working with truly big data in services like Azure HDInsight was complex. You had to shard data across multiple Blob storage accounts so that petabyte storage and optimal performance at that scale could be achieved. Data Lake Storage Gen2 supports individual file sizes as high as 5TB and most of the hard limits for performance have been removed. However, there are still some considerations that this article covers so that you can get the best performance with Data Lake Storage Gen2.
+In this article, you learn about best practices and considerations for working with Azure Data Lake Storage Gen2. This article provides information around security, performance, resiliency, and monitoring for Data Lake Storage Gen2. Before Data Lake Storage Gen2, working with truly big data in services like Azure HDInsight was complex. You had to shard data across multiple Blob storage accounts so that petabyte storage and optimal performance at that scale could be achieved. Data Lake Storage Gen2 supports individual file sizes as high as 190.7 TiB and most of the hard limits for performance have been removed. However, there are still some considerations that this article covers so that you can get the best performance with Data Lake Storage Gen2.
## Security considerations
For example, a marketing firm receives daily data extracts of customer updates f
*NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv*\ *NA/Extracts/ACMEPaperCo/Out/2017/08/14/processed_updates_08142017.csv*
-In the common case of batch data being processed directly into databases such as Hive or traditional SQL databases, there isnΓÇÖt a need for an **/in** or **/out** folder since the output already goes into a separate folder for the Hive table or external database. For example, daily extracts from customers would land into their respective folders, and orchestration by something like Azure Data Factory, Apache Oozie, or Apache Airflow would trigger a daily Hive or Spark job to process and write the data into a Hive table.
+In the common case of batch data being processed directly into databases such as Hive or traditional SQL databases, there isn't a need for an **/in** or **/out** folder since the output already goes into a separate folder for the Hive table or external database. For example, daily extracts from customers would land into their respective folders, and orchestration by something like Azure Data Factory, Apache Oozie, or Apache Airflow would trigger a daily Hive or Spark job to process and write the data into a Hive table.
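As a small illustration of that layout, here's a hedged AzCopy sketch that drops one daily extract into the date-partitioned **/In** path; the local path, storage account name, container name, and SAS token are placeholders:

```powershell
# Upload one daily extract into the date-partitioned "In" folder.
# The local path, storage account, container, and SAS token are placeholders; the dfs
# endpoint is used because the account has a hierarchical namespace (Data Lake Storage Gen2).
azcopy copy 'C:\extracts\updates_08142017.csv' 'https://<storage-account-name>.dfs.core.windows.net/<container-name>/NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv<SAS-token>'
```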
storage Data Lake Storage Migrate Gen1 To Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-migrate-gen1-to-gen2.md
This table compares the capabilities of Gen1 to that of Gen2.
Choose a migration pattern, and then modify that pattern as needed.
-|||
+|Migration pattern | Details |
||| |**Lift and Shift**|The simplest pattern. Ideal if your data pipelines can afford downtime.| |**Incremental copy**|Similar to *lift and shift*, but with less downtime. Ideal for large amounts of data that take longer to copy.|
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/scalability-targets.md
Previously updated : 07/14/2020 Last updated : 03/27/2021
storage Soft Delete Blob Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/soft-delete-blob-enable.md
To recover blobs that were accidentally deleted, call Undelete on those blobs. R
To recover to a specific blob version, first call Undelete on a blob, then copy the desired snapshot over the blob. The following example recovers a block blob to its most recently generated snapshot: # [.NET v11](#tab/dotnet11)
blockBlob.StartCopy(copySource);
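For readers who prefer the Azure CLI over the .NET sample above, here is a minimal sketch of the undelete step only (not the snapshot copy); all names are placeholders, and soft delete is assumed to already be enabled on the account:

```powershell
# Restore a soft-deleted blob; the account, container, and blob names are placeholders.
# --auth-mode login uses your Azure AD sign-in for data-plane access.
az storage blob undelete --account-name "<storage-account-name>" --container-name "<container-name>" --name "<blob-name>" --auth-mode login
```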
## Next steps - [Soft delete for Blob storage](./soft-delete-blob-overview.md)-- [Blob versioning](versioning-overview.md)
+- [Blob versioning](versioning-overview.md)
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blobs-introduction.md
Previously updated : 06/24/2020 Last updated : 03/27/2021
A container organizes a set of blobs, similar to a directory in a file system. A
Azure Storage supports three types of blobs: -- **Block blobs** store text and binary data. Block blobs are made up of blocks of data that can be managed individually. Block blobs store up to about 4.75 TiB of data. Larger block blobs are available in preview, up to about 190.7 TiB
+- **Block blobs** store text and binary data. Block blobs are made up of blocks of data that can be managed individually. Block blobs can store up to about 190.7 TiB.
- **Append blobs** are made up of blocks like block blobs, but are optimized for append operations. Append blobs are ideal for scenarios such as logging data from virtual machines. - **Page blobs** store random access files up to 8 TiB in size. Page blobs store virtual hard drive (VHD) files and serve as disks for Azure virtual machines. For more information about page blobs, see [Overview of Azure page blobs](storage-blob-pageblob-overview.md)
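As a quick, hedged illustration of working with block blobs, the following Azure CLI sketch creates a container and uploads a local file as a block blob; the account, container, and file names are placeholders, and it assumes you're signed in and have a data-plane role on the account:

```powershell
# Create a container and upload a local file as a block blob.
# All names and the file path are placeholders; --auth-mode login uses your Azure AD sign-in.
az storage container create --account-name "<storage-account-name>" --name "<container-name>" --auth-mode login
az storage blob upload --account-name "<storage-account-name>" --container-name "<container-name>" --name "myTextFile.txt" --file "C:\myDirectory\myTextFile.txt" --auth-mode login
```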
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-copy.md
Copy a blob to another storage account by using the [azcopy copy](storage-ref-az
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'` | | **Example** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt'` |
Copy a directory to another storage account by using the [azcopy copy](storage-r
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-path><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive` |
Copy a container to another storage account by using the [azcopy copy](storage-r
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name><SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive` |
Copy all containers, directories, and blobs to another storage account by using
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<SAS-token>' 'https://<destination-storage-account-name>.blob.core.windows.net/' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net' --recursive` |
The following examples show how to use the `--blob-tags` option.
> [!TIP] > These examples enclose path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Example | Code |
|--|--| | **Blob** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt' --blob-tags='my%20tag=my%20tag%20value&my%20second%20tag=my%20second%20tag%20value'` | | **Directory** | `azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive --blob-tags='my%20tag=my%20tag%20value&my%20second%20tag=my%20second%20tag%20value'` |
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-download.md
Download a blob by using the [azcopy copy](storage-ref-azcopy-copy.md) command.
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-path>' '<local-file-path>'` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt' 'C:\myDirectory\myTextFile.txt'` |
Download a directory by using the [azcopy copy](storage-ref-azcopy-copy.md) comm
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<directory-path>' '<local-directory-path>' --recursive` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory' 'C:\myDirectory' --recursive` |
You can download the contents of a directory without copying the containing dire
> [!NOTE] > Currently, this scenario is supported only for accounts that don't have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.blob.core.windows.net/<container-name>/*' '<local-directory-path>/'` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory/*' 'C:\myDirectory'` |
You can download specific blobs by using complete file names, partial names with
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual blob names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-or-directory-name>' '<local-directory-path>' --include-path <semicolon-separated-file-list>` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory' 'C:\myDirectory' --include-path 'photos;documents\myFile.txt' --recursive` |
You can also exclude blobs by using the `--exclude-path` option. To learn more,
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pattern` option. Specify partial names that include the wildcard characters. Separate names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-or-directory-name>' '<local-directory-path>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory' 'C:\myDirectory' --include-pattern 'myFile*.txt;*.pdf*'` |
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-be
The following examples download files that were modified on or after the specified date.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-or-directory-name>/*' '<local-directory-path>' --include-after <Date-Time-in-ISO-8601-format>` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory/*' 'C:\myDirectory' --include-after '2020-08-19T15:04:00Z'` |
Then, use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--list
You can download a [blob snapshot](../blobs/snapshots-overview.md) by referencing the **DateTime** value of a blob snapshot.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-path>?sharesnapshot=<DateTime-of-snapshot>' '<local-file-path>'` | | **Example** | `azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt'` |
storage Storage Use Azcopy Blobs Synchronize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-synchronize.md
In this case, the container is the destination, and the local file system is the
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync '<local-directory-path>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>' --recursive` | | **Example** | `azcopy sync 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive` |
In this case, the local file system is the destination, and the container is the
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<storage-account-name>.blob.core.windows.net/<container-name>' 'C:\myDirectory' --recursive` | | **Example** | `azcopy sync 'https://mystorageaccount.blob.core.windows.net/mycontainer' 'C:\myDirectory' --recursive` |
The first container that appears in this command is the source. The second one i
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive` | | **Example** | `azcopy sync 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive` |
The first directory that appears in this command is the source. The second one i
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive` | | **Example** | `azcopy sync 'https://mysourceaccount.blob.core.windows.net/<container-name>/myDirectory' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myDirectory' --recursive` |
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-blobs-upload.md
You can use the [azcopy make](storage-ref-azcopy-make.md) command to create a co
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy make 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>'` | | **Example** | `azcopy make 'https://mystorageaccount.blob.core.windows.net/mycontainer'` |
Upload a file by using the [azcopy copy](storage-ref-azcopy-copy.md) command.
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-file-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-name>'` | | **Example** | `azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'` |
This example copies a directory (and all of the files in that directory) to a bl
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' --recursive` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive` |
This example copies a directory (and all of the files in that directory) to a bl
To copy to a directory within the container, just specify the name of that directory in your command string.
-| | |
+| Syntax / example | Code |
|--|--| | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory' --recursive` | | **Example** (hierarchical namespace) | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myBlobDirectory' --recursive` |
Upload the contents of a directory by using the [azcopy copy](storage-ref-azcopy
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<directory-path>'` | | **Example** | `azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory'` |
You can upload specific files by using complete file names, partial names with w
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual file names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' --include-path <semicolon-separated-file-list>` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --include-path 'photos;documents\myFile.txt' --recursive` |
You can also exclude files by using the `--exclude-path` option. To learn more,
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pattern` option. Specify partial names that include the wildcard characters. Separate names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --include-pattern 'myFile*.txt;*.pdf*'` |
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-be
The following examples upload files that were modified on or after the specified date.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-or-directory-name>' --include-after <Date-Time-in-ISO-8601-format>` | | **Example** | `azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory' --include-after '2020-08-19T15:04:00Z'` |
The following examples show how to use the `--blob-tags` option.
> [!TIP] > This example encloses path arguments with single quotes (''). Use single quotes in all command shells except for the Windows Command Shell (cmd.exe). If you're using a Windows Command Shell (cmd.exe), enclose path arguments with double quotes ("") instead of single quotes ('').
-| | |
+| Syntax / example | Code |
|--|--| | **Upload a file** | `azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt' --blob-tags='my%20tag=my%20tag%20value&my%20second%20tag=my%20second%20tag%20value'` | | **Upload a directory** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive --blob-tags='my%20tag=my%20tag%20value&my%20second%20tag=my%20second%20tag%20value'`|
storage Storage Use Azcopy Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-configure.md
You can run a performance benchmark test on specific blob containers or file sha
Use the following command to run a performance benchmark test.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy benchmark 'https://<storage-account-name>.blob.core.windows.net/<container-name>'` | | **Example** | `azcopy benchmark 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'` |
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-files.md
Before you begin, see the [Get started with AzCopy](storage-use-azcopy-v10.md) a
You can use the [azcopy make](storage-ref-azcopy-make.md) command to create a file share. The example in this section creates a file share named `myfileshare`.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy make 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>'` | | **Example** | `azcopy make 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'` |
This section contains the following examples:
### Upload a file
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-file-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-name><SAS-token>'` | | **Example** | `azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'` |
You can also upload a file by using a wildcard symbol (*) anywhere in the file p
This example copies a directory (and all of the files in that directory) to a file share. The result is a directory in the file share by the same name.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive` | To copy to a directory within the file share, just specify the name of that directory in your command string.
-| | |
+| Syntax / example | Code |
|--|--| | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' --recursive` |
If you specify the name of a directory that does not exist in the file share, Az
You can upload the contents of a directory without copying the containing directory itself by using the wildcard symbol (*).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>/*' 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path><SAS-token>'` | | **Example** | `azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D'` |
You can upload specific files by using complete file names, partial names with w
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual file names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>' --include-path <semicolon-separated-file-list>` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-path 'photos;documents\myFile.txt'` |
You can also exclude files by using the `--exclude-path` option. To learn more,
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pattern` option. Specify partial names that include the wildcard characters. Separate names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>` | | **Example** | `azcopy copy 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-pattern 'myFile*.txt;*.pdf*'` |
The `--include-pattern` and `--exclude-pattern` options apply only to filenames
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-after` option. Specify a date and time in ISO 8601 format (For example: `2020-08-19T15:04:00Z`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy '<local-directory-path>\*' 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>' --include-after <Date-Time-in-ISO-8601-format>` | | **Example** | `azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --include-after '2020-08-19T15:04:00Z'` |
This section contains the following examples:
### Download a file
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-token>' '<local-file-path>'` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory\myTextFile.txt'` | ### Download a directory
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path><SAS-token>' '<local-directory-path>' --recursive` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory' --recursive` |
This example results in a directory named `C:\myDirectory\myFileShareDirectory`
You can download the contents of a directory without copying the containing directory itself by using the wildcard symbol (*).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/*<SAS-token>' '<local-directory-path>/'` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory/*?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D' 'C:\myDirectory'` |
You can download specific files by using complete file names, partial names with
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option. Separate individual file names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>' '<local-directory-path>' --include-path <semicolon-separated-file-list>` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-path 'photos;documents\myFile.txt' --recursive` |
You can also exclude files by using the `--exclude-path` option. To learn more,
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pattern` option. Specify partial names that include the wildcard characters. Separate names by using a semicolon (`;`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name><SAS-token>' '<local-directory-path>' --include-pattern <semicolon-separated-file-list-with-wildcard-characters>` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-pattern 'myFile*.txt;*.pdf*'` |
The `--include-pattern` and `--exclude-pattern` options apply only to filenames
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-after` option. Specify a date and time in ISO-8601 format (For example: `2020-08-19T15:04:00Z`).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-or-directory-name>/*<SAS-token>' '<local-directory-path>' --include-after <Date-Time-in-ISO-8601-format>` | | **Example** | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/*?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'C:\myDirectory' --include-after '2020-08-19T15:04:00Z'` |
For detailed reference, see the [azcopy copy](storage-ref-azcopy-copy.md) refere
You can download a specific version of a file or directory by referencing the **DateTime** value of a share snapshot. To learn more about share snapshots see [Overview of share snapshots for Azure Files](../files/storage-snapshots-files.md).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<storage-account-name>.file.core.windows.net/<file-share-name>/<file-path-or-directory-name><SAS-token>&sharesnapshot=<DateTime-of-snapshot>' '<local-file-or-directory-path>'` | | **Example** (Download a file) | `azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myTextFile.txt?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=%2FSOVEFfsKDqRry4bk3qz1vAQFwY5DDzp2%2B%2F3Eykf%2FJLs%3D&sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt'` |
This section contains the following examples:
### Copy a file to another storage account
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name>/<file-path><SAS-token>'` | | **Example** | `azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/mycontainer/myTextFile.txt?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D'` |
This section contains the following examples:
### Copy a directory to another storage account
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-path><SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.file.core.windows.net/myFileShare/myFileDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
This section contains the following examples:
### Copy a file share to another storage account
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/mycontainer?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
This section contains the following examples:
### Copy all file shares, directories, and files to another storage account
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://<source-storage-account-name>.file.core.windows.net/<SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<SAS-token>' --recursive` | | **Example** | `azcopy copy 'https://mysourceaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
If you set the `--delete-destination` flag to `true`, AzCopy deletes files witho
The first file share that appears in this command is the source. The second one is the destination.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive` | | **Example** | `azcopy sync 'https://mysourceaccount.file.core.windows.net/myfileShare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
The first file share that appears in this command is the source. The second one
The first directory that appears in this command is the source. The second one is the destination.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-name><SAS-token>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name>/<directory-name><SAS-token>' --recursive` | | **Example** | `azcopy sync 'https://mysourceaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' 'https://mydestinationaccount.file.core.windows.net/myFileShare/myDirectory?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
The first directory that appears in this command is the source. The second one i
The first file share that appears in this command is the source. At the end of the URI, append the string `&sharesnapshot=` followed by the **DateTime** value of the snapshot.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy sync 'https://<source-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>&sharesnapshot=<snapshot-ID>' 'https://<destination-storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive` | | **Example** | `azcopy sync 'https://mysourceaccount.file.core.windows.net/myfileShare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D&sharesnapshot=2020-03-03T20%3A24%3A13.0000000Z' 'https://mydestinationaccount.file.core.windows.net/myfileshare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive` |
storage Storage Use Azcopy Google Cloud https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-google-cloud.md
AzCopy uses the [Put Block From URL](/rest/api/storageservices/put-block-from-ur
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/<bucket-name>/<object-name>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>'` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/mybucket/myobject' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myblob'` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/<bucket-name>/<directory-name>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive=true` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/mybucket/mydirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydirectory' --recursive=true` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
You can copy the contents of a directory without copying the containing directory itself by using the wildcard symbol (*).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/<bucket-name>/<directory-name>/*' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive=true` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/mybucket/mydirectory/*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydirectory' --recursive=true` |
You can copy the contents of a directory without copying the containing director
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/<bucket-name>' 'https://<storage-account-name>.blob.core.windows.net' --recursive=true` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/mybucket' 'https://mystorageaccount.blob.core.windows.net' --recursive=true` |
First, set `GOOGLE_CLOUD_PROJECT` to the project ID of your Google Cloud project.
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/' 'https://<storage-account-name>.blob.core.windows.net' --recursive=true` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/' 'https://mystorageaccount.blob.core.windows.net' --recursive=true` |
First, set `GOOGLE_CLOUD_PROJECT` to the project ID of your Google Cloud project.
Copy a subset of buckets by using a wildcard symbol (*) in the bucket name. Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://storage.cloud.google.com/<bucket*name>' 'https://<storage-account-name>.blob.core.windows.net' --recursive=true` | | **Example** | `azcopy copy 'https://storage.cloud.google.com/my*bucket' 'https://mystorageaccount.blob.core.windows.net' --recursive=true` |
storage Storage Use Azcopy S3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-use-azcopy-s3.md
AzCopy uses the [Put Block From URL](/rest/api/storageservices/put-block-from-ur
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3.amazonaws.com/<bucket-name>/<object-name>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<blob-name>'` | | **Example** | `azcopy copy 'https://s3.amazonaws.com/mybucket/myobject' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myblob'` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3.amazonaws.com/<bucket-name>/<directory-name>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive=true` | | **Example** | `azcopy copy 'https://s3.amazonaws.com/mybucket/mydirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydirectory' --recursive=true` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
You can copy the contents of a directory without copying the containing directory itself by using the wildcard symbol (*).
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3.amazonaws.com/<bucket-name>/<directory-name>/*' 'https://<storage-account-name>.blob.core.windows.net/<container-name>/<directory-name>' --recursive=true` | | **Example** | `azcopy copy 'https://s3.amazonaws.com/mybucket/mydirectory/*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydirectory' --recursive=true` |
You can copy the contents of a directory without copying the containing director
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3.amazonaws.com/<bucket-name>' 'https://<storage-account-name>.blob.core.windows.net/<container-name>' --recursive=true` | | **Example** | `azcopy copy 'https://s3.amazonaws.com/mybucket' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive=true` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3.amazonaws.com/' 'https://<storage-account-name>.blob.core.windows.net' --recursive=true` | | **Example** | `azcopy copy 'https://s3.amazonaws.com' 'https://mystorageaccount.blob.core.windows.net' --recursive=true` |
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hiera
Use the same URL syntax (`blob.core.windows.net`) for accounts that have a hierarchical namespace.
-| | |
+| Syntax / example | Code |
|--|--| | **Syntax** | `azcopy copy 'https://s3-<region-name>.amazonaws.com/' 'https://<storage-account-name>.blob.core.windows.net' --recursive=true` | | **Example** | `azcopy copy 'https://s3-rds.eu-north-1.amazonaws.com' 'https://mystorageaccount.blob.core.windows.net' --recursive=true` |
storage Storage Files Enable Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-enable-soft-delete.md
The following sections show how to enable and use soft delete for Azure file sha
# [Azure CLI](#tab/azure-cli)
-Soft delete cmdlets are available in version 2.1.3 and newer of the [Azure CLI module](/cli/azure/install-azure-cli?view=azure-cli-latest).
+Soft delete commands are available in version 2.1.3 and newer of the [Azure CLI](/cli/azure/install-azure-cli).
## Getting started with CLI
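For example, a minimal sketch that turns on soft delete with a 7-day retention period; the resource group and storage account names are placeholders, and the commands assume Azure CLI 2.1.3 or newer as noted above:

```powershell
# Enable soft delete for file shares with a 7-day retention period.
# The resource group and storage account names are placeholders.
az storage account file-service-properties update --resource-group "<resource-group-name>" --account-name "<storage-account-name>" --enable-delete-retention true --delete-retention-days 7

# Check the current soft delete settings.
az storage account file-service-properties show --resource-group "<resource-group-name>" --account-name "<storage-account-name>"
```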
storage Storage Files Prevent File Share Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-prevent-file-share-deletion.md
description: Learn about soft delete for Azure file shares and how you can use i
Previously updated : 05/28/2020 Last updated : 03/29/2021
For soft-deleted premium file shares, the file share quota (the provisioned size
### Enabling or disabling soft delete
-Soft delete for file shares is enabled at the storage account level, because of this, the soft delete settings apply to all file shares within a storage account. You can enable or disable soft delete at any time. When you create a new storage account, soft delete for file shares is disabled by default, you can enable it during deployment or anytime afterwards. Soft delete will remain disabled by default for existing storage accounts. If you have configured [Azure file share backup](../../backup/azure-file-share-backup-overview.md) for a Azure file share, then soft delete for Azure file shares will be automatically enabled on that share's storage account.
+Soft delete for file shares is enabled at the storage account level; because of this, the soft delete settings apply to all file shares within a storage account. Soft delete is enabled by default for new storage accounts and can be disabled or enabled at any time. Soft delete is not automatically enabled for existing storage accounts unless [Azure file share backup](../../backup/azure-file-share-backup-overview.md) was configured for an Azure file share in that storage account. If Azure file share backup was configured, then soft delete for Azure file shares is automatically enabled on that share's storage account.
If you enable soft delete for file shares, delete some file shares, and then disable soft delete, you can still access and recover those file shares if they were soft-deleted while the setting was enabled. When you enable soft delete, you also need to configure the retention period.
storage Storage Files Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-release-notes.md
Previously updated : 3/3/2021 Last updated : 3/26/2021
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status |
|-|-|-|-|
+| V12 Release - [KB4568585](https://support.microsoft.com/topic/b9605f04-b4af-4ad8-86b0-2c490c535cfd)| 12.0.0.0 | March 26, 2021 | Supported - Flighting |
| V11.2 Release - [KB4539952](https://support.microsoft.com/topic/azure-file-sync-agent-v11-2-release-february-2021-c956eaf0-cd8e-4511-98c0-e5a1f2c84048)| 11.2.0.0 | February 2, 2021 | Supported |
| V11.1 Release - [KB4539951](https://support.microsoft.com/help/4539951)| 11.1.0.0 | November 4, 2020 | Supported |
| V10.1 Release - [KB4522411](https://support.microsoft.com/help/4522411)| 10.1.0.0 | June 5, 2020 | Supported - Agent version will expire on June 7, 2021 |
The following Azure File Sync agent versions have expired and are no longer supported:
### Azure File Sync agent update policy
[!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]
+## Agent version 12.0.0.0
+The following release notes are for version 12.0.0.0 of the Azure File Sync agent (released March 26, 2021).
+
+### Improvements and issues that are fixed
+- New portal experience to configure network access policy and private endpoint connections
+ - You can now use the portal to disable access to the Storage Sync Service public endpoint and to approve, reject and remove private endpoint connections. To configure the network access policy and private endpoint connections, open the Storage Sync Service portal, go to the Settings section and click Network.
+
+- Cloud Tiering support for volume cluster sizes larger than 64KiB
+ - Cloud Tiering now supports volume cluster sizes up to 2MiB on Server 2019. To learn more, see [What is the minimum file size for a file to tier?](https://docs.microsoft.com/azure/storage/files/storage-sync-choose-cloud-tiering-policies#minimum-file-size-for-a-file-to-tier).
+
+- Measure bandwidth and latency to Azure File Sync service and storage account
+ - The Test-StorageSyncNetworkConnectivity cmdlet can now be used to measure latency and bandwidth to the Azure File Sync service and storage account. Latency to the Azure File Sync service and storage account is measured by default when running the cmdlet. Upload and download bandwidth to the storage account is measured when using the "-MeasureBandwidth" parameter.
+
+ For example, to measure bandwidth and latency to the Azure File Sync service and storage account, run the following PowerShell commands:
+
+ ```powershell
+ Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
+ Test-StorageSyncNetworkConnectivity -MeasureBandwidth
+ ```
+
+- Improved error messages in the portal when server endpoint creation fails
+ - We heard your feedback and have improved the error messages and guidance when server endpoint creation fails.
+
+- Miscellaneous performance and reliability improvements
+ - Improved change detection performance to detect files that have changed in the Azure file share.
+ - Performance improvements for reconciliation sync sessions.
+ - Sync improvements to reduce ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED and ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED errors.
+ - Fixed an issue where files may fail to tier on Server 2019 if Data Deduplication is enabled on the volume.
+ - Fixed an issue where AFSDiag fails to compress files if a file is larger than 2GiB.
+
+### Evaluation Tool
+Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see the [Evaluation Tool](./storage-sync-files-planning.md#evaluation-cmdlet) section in the planning guide.
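+
+As a usage sketch (the path below is a placeholder, and the Az.StorageSync PowerShell module is assumed to be installed), running the evaluation cmdlet against a directory you plan to sync might look like this:
+
+```powershell
+# Sketch only: the path is a placeholder for the directory you plan to sync.
+Import-Module Az.StorageSync
+Invoke-AzStorageSyncCompatibilityCheck -Path "D:\Data"
+```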
+
+### Agent installation and server configuration
+For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](storage-sync-files-planning.md) and [How to deploy Azure File Sync](storage-sync-files-deployment-guide.md).
+
+- A restart is required for servers that have an existing Azure File Sync agent installation.
+- The agent installation package must be installed with elevated (admin) permissions.
+- The agent is not supported on the Nano Server deployment option.
+- The agent is supported only on Windows Server 2019, Windows Server 2016, and Windows Server 2012 R2.
+- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](./storage-sync-files-planning.md#recommended-system-resources) for more information.
+- The Storage Sync Agent (FileSyncSvc) service does not support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
+
+### Interoperability
+- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](storage-sync-files-troubleshoot.md).
+- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.
+- Running sysprep on a server that has the Azure File Sync agent installed is not supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
+
+### Sync limitations
+The following items don't sync, but the rest of the system continues to operate normally:
+- Files with unsupported characters. See [Troubleshooting guide](storage-sync-files-troubleshoot.md#handling-unsupported-characters) for a list of unsupported characters.
+- Files or directories that end with a period.
+- Paths that are longer than 2,048 characters.
+- The system access control list (SACL) portion of a security descriptor that's used for auditing.
+- Extended attributes.
+- Alternate data streams.
+- Reparse points.
+- Hard links.
+- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.
+- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
+
+ > [!Note]
+ > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+
+### Server endpoint
+- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.
+- Cloud tiering is not supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
+- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).
+- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.
+- Do not store an OS or application paging file within a server endpoint location.
+- The server name in the portal is not updated if the server is renamed.
+
+### Cloud endpoint
+- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share (a usage sketch follows this list). In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.
+- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](./storage-sync-files-troubleshoot.md?tabs=portal1%252cportal#troubleshoot-rbac)).
+
+ > [!Note]
+ > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
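+
+As a usage sketch (all resource names and paths are placeholders, and the Az.StorageSync PowerShell module is assumed), manually triggering change detection for specific directories might look like this:
+
+```powershell
+# Sketch only: names and paths are placeholders.
+Invoke-AzStorageSyncChangeDetection `
+    -ResourceGroupName "myResourceGroup" `
+    -StorageSyncServiceName "myStorageSyncService" `
+    -SyncGroupName "mySyncGroup" `
+    -CloudEndpointName "myCloudEndpoint" `
+    -Path "Data","Reporting\Templates"
+```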
+
+### Cloud tiering
+- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.
+- When copying files using robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files (see the example after this list).
+ > [!Warning]
+ > Robocopy /B switch is not supported with Azure File Sync. Using the Robocopy /B switch with an Azure File Sync server endpoint as the source may lead to file corruption.
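+
+As an illustrative sketch (the source and destination paths are placeholders, not values from this article), a Robocopy invocation that follows the guidance above might look like this:
+
+```powershell
+# Sketch only: paths are placeholders.
+# /MIR mirrors the source tree and preserves file timestamps; do not add /B (backup mode)
+# when the source is an Azure File Sync server endpoint.
+robocopy "D:\ServerEndpoint" "E:\Destination" /MIR
+```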
## Agent version 11.2.0.0
The following release notes are for version 11.2.0.0 of the Azure File Sync agent released February 2, 2021. These notes are in addition to the release notes listed for version 11.1.0.0.
### Improvements and issues that are fixed
- If a sync session is canceled due to a high number of per-item errors, sync may go through reconciliation when a new session starts if the Azure File Sync service determines a custom sync session is needed to correct the per-item errors.
- Registering a server using the Register-AzStorageSyncServer cmdlet may fail with "Unhandled Exception" error.
-- New PowerShell cmdlet (Add-StorageSyncAllowedServerEndpointPath) to configure allowed server endpoints paths on a server. This cmdlet is useful for scenarios in which the Azure File Sync deployment is managed by a Cloud Solution Provider (CSP) or Service Provider and the customer wants to configure allowed server endpoints paths on a server. When creating a server endpoint, if the path specified is not in the allow list, the server endpoint creation will fail. Note, this is an optional feature and all supported paths are allowed by default when creating a server endpoint.
+- New PowerShell cmdlet (Add-StorageSyncAllowedServerEndpointPath) to configure allowed server endpoints paths on a server. This cmdlet is useful for scenarios in which the Azure File Sync deployment is managed by a Cloud Solution Provider (CSP) or Service Provider and the customer wants to configure allowed server endpoints paths on a server. When creating a server endpoint, if the path specified is not in the allowlist, the server endpoint creation will fail. Note, this is an optional feature and all supported paths are allowed by default when creating a server endpoint.
- To add a server endpoint path that's allowed, run the following PowerShell commands on the server:
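
The commands themselves are not captured in this excerpt; as a hedged sketch (assuming the cmdlet accepts a -Path parameter and is loaded from the agent's server cmdlets module, with a placeholder path), they might look like this:

```powershell
# Sketch only: the path is a placeholder and the -Path parameter is an assumption.
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
Add-StorageSyncAllowedServerEndpointPath -Path "D:\Share01"
```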
storage Storage Sync Files Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-sync-files-troubleshoot.md
The table below contains all of the Unicode characters Azure File Sync does not support.
### Common sync errors
<a id="-2147023673"></a>**The sync session was canceled.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x800704c7 |
| **HRESULT (decimal)** | -2147023673 |
Sync sessions may fail for various reasons including the server being restarted
<a id="-2147012889"></a>**A connection with the service could not be established.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80072ee7 |
| **HRESULT (decimal)** | -2147012889 |
Sync sessions may fail for various reasons including the server being restarted
<a id="-2134376372"></a>**The user request was throttled by the service.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8004c |
| **HRESULT (decimal)** | -2134376372 |
No action is required; the server will try again. If this error persists for sev
<a id="-2134364043"></a>**Sync is blocked until change detection completes post restore**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83075 |
| **HRESULT (decimal)** | -2134364043 |
No action is required. When a file or file share (cloud endpoint) is restored us
<a id="-2147216747"></a>**Sync failed because the sync database was unloaded.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80041295 |
| **HRESULT (decimal)** | -2147216747 |
This error typically occurs when a backup application creates a VSS snapshot and
<a id="-2134364065"></a>**Sync can't access the Azure file share specified in the cloud endpoint.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8305f |
| **HRESULT (decimal)** | -2134364065 |
This error occurs because the Azure File Sync agent cannot access the Azure file
<a id="-2134351804"></a>**Sync failed because the request is not authorized to perform this operation.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c86044 |
| **HRESULT (decimal)** | -2134351804 |
This error occurs because the Azure File Sync agent is not authorized to access
<a id="-2134364064"></a><a id="cannot-resolve-storage"></a>**The storage account name used could not be resolved.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80C83060 |
| **HRESULT (decimal)** | -2134364064 |
This error occurs because the Azure File Sync agent is not authorized to access
<a id="-2134364022"></a><a id="storage-unknown-error"></a>**An unknown error occurred while accessing the storage account.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8308a |
| **HRESULT (decimal)** | -2134364022 |
This error occurs because the Azure File Sync agent is not authorized to access
<a id="-2134364014"></a>**Sync failed due to storage account locked.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83092 |
| **HRESULT (decimal)** | -2134364014 |
This error occurs because the storage account has a read-only [resource lock](..
<a id="-1906441138"></a>**Sync failed due to a problem with the sync database.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x8e5e044e |
| **HRESULT (decimal)** | -1906441138 |
This error occurs when there is a problem with the internal database used by Azu
<a id="-2134364053"></a>**The Azure File Sync agent version installed on the server is not supported.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80C8306B |
| **HRESULT (decimal)** | -2134364053 |
This error occurs if the Azure File Sync agent version installed on the server i
<a id="-2134351810"></a>**You reached the Azure file share storage limit.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8603e |
| **HRESULT (decimal)** | -2134351810 |
If the share is full and a quota is not set, one possible way of fixing this iss
<a id="-2134351824"></a>**The Azure file share cannot be found.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c86030 |
| **HRESULT (decimal)** | -2134351824 |
If the Azure file share was deleted, you need to create a new file share and the
<a id="-2134364042"></a>**Sync is paused while this Azure subscription is suspended.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80C83076 |
| **HRESULT (decimal)** | -2134364042 |
This error occurs when the Azure subscription is suspended. Sync will be reenabl
<a id="-2134375618"></a>**The storage account has a firewall or virtual networks configured.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8033e |
| **HRESULT (decimal)** | -2134375618 |
This error occurs when the Azure file share is inaccessible because of a storage
<a id="-2134375911"></a>**Sync failed due to a problem with the sync database.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c80219 |
| **HRESULT (decimal)** | -2134375911 |
If this error persists for longer than a few hours, create a support request and
<a id="-2146762487"></a>**The server failed to establish a secure connection. The cloud service received an unexpected certificate.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x800b0109 |
| **HRESULT (decimal)** | -2146762487 |
By setting this registry value, the Azure File Sync agent will accept any locall
<a id="-2147012894"></a>**A connection with the service could not be established.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80072ee2 |
| **HRESULT (decimal)** | -2147012894 |
By setting this registry value, the Azure File Sync agent will accept any locall
<a id="-2134375680"></a>**Sync failed due to a problem with authentication.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c80300 |
| **HRESULT (decimal)** | -2134375680 |
This error typically occurs because the server time is incorrect. If the server
<a id="-2134364040"></a>**Sync failed due to certificate expiration.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83078 |
| **HRESULT (decimal)** | -2134364040 |
If the client authentication certificate is expired, perform the following steps
<a id="-2134375896"></a>**Sync failed due to authentication certificate not found.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c80228 |
| **HRESULT (decimal)** | -2134375896 |
To resolve this issue, perform the following steps:
<a id="-2134364039"></a>**Sync failed due to authentication identity not found.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83079 |
| **HRESULT (decimal)** | -2134364039 |
This error occurs because the server endpoint deletion failed and the endpoint i
<a id="-1906441711"></a><a id="-2134375654"></a><a id="doesnt-have-enough-free-space"></a>**The volume where the server endpoint is located is low on disk space.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x8e5e0211 |
| **HRESULT (decimal)** | -1906441711 |
| **Error string** | JET_errLogDiskFull |
| **Remediation required** | Yes |
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8031a |
| **HRESULT (decimal)** | -2134375654 |
This error occurs because the volume has filled up. This error commonly occurs b
<a id="-2134364145"></a><a id="replica-not-ready"></a>**The service is not yet ready to sync with this server endpoint.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8300f |
| **HRESULT (decimal)** | -2134364145 |
This error occurs because the cloud endpoint was created with content already ex
<a id="-2134375877"></a><a id="-2134375908"></a><a id="-2134375853"></a>**Sync failed due to problems with many individual files.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8023b |
| **HRESULT (decimal)** | -2134375877 |
| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_SOFT_LIMIT_REACHED |
| **Remediation required** | Yes |
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8021c |
| **HRESULT (decimal)** | -2134375908 |
| **Error string** | ECS_E_SYNC_METADATA_KNOWLEDGE_LIMIT_REACHED |
| **Remediation required** | Yes |
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c80253 |
| **HRESULT (decimal)** | -2134375853 |
Sync sessions fail with one of these errors when there are many files that are f
<a id="-2134376423"></a>**Sync failed due to a problem with the server endpoint path.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c80019 |
| **HRESULT (decimal)** | -2134376423 |
Ensure the path exists, is on a local NTFS volume, and is not a reparse point or
<a id="-2134375817"></a>**Sync failed because the filter driver version is not compatible with the agent version**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80C80277 |
| **HRESULT (decimal)** | -2134375817 |
This error occurs because the Cloud Tiering filter driver (StorageSync.sys) vers
<a id="-2134376373"></a>**The service is currently unavailable.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8004b |
| **HRESULT (decimal)** | -2134376373 |
This error occurs because the Azure File Sync service is unavailable. This error
<a id="-2146233088"></a>**Sync failed due to an exception.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80131500 |
| **HRESULT (decimal)** | -2146233088 |
This error occurs because sync failed due to an exception. If the error persists
<a id="-2134364045"></a>**Sync failed because the storage account has failed over to another region.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83073 |
| **HRESULT (decimal)** | -2134364045 |
This error occurs because the storage account has failed over to another region.
<a id="-2134375922"></a>**Sync failed due to a transient problem with the sync database.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8020e |
| **HRESULT (decimal)** | -2134375922 |
This error occurs because of an internal problem with the sync database. This er
<a id="-2134364024"></a>**Sync failed due to change in Azure Active Directory tenant**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83088 |
| **HRESULT (decimal)** | -2134364024 |
Once you have the latest agent version, you must give the Microsoft.StorageSync
<a id="-2134364010"></a>**Sync failed due to firewall and virtual network exception not configured**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83096 |
| **HRESULT (decimal)** | -2134364010 |
This error occurs if the firewall and virtual network settings are enabled on th
<a id="-2147024891"></a>**Sync failed because permissions on the System Volume Information folder are incorrect.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80070005 |
| **HRESULT (decimal)** | -2147024891 |
To resolve this issue, perform the following steps:
<a id="-2134375810"></a>**Sync failed because the Azure file share was deleted and recreated.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8027e |
| **HRESULT (decimal)** | -2134375810 |
To resolve this issue, delete and recreate the sync group by performing the foll
<a id="-2145844941"></a>**Sync failed because the HTTP request was redirected**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80190133 |
| **HRESULT (decimal)** | -2145844941 |
This error occurs because Azure File Sync does not support HTTP redirection (3xx
<a id="-2134364027"></a>**A timeout occurred during offline data transfer, but it is still in progress.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c83085 |
| **HRESULT (decimal)** | -2134364027 |
This error occurs when a data ingestion operation exceeds the timeout. This erro
<a id="-2134375814"></a>**Sync failed because the server endpoint path cannot be found on the server.**
-| | |
+| Error | Code |
|-|-|
| **HRESULT** | 0x80c8027a |
| **HRESULT (decimal)** | -2134375814 |
synapse-analytics Sql Data Warehouse Partner Data Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-partner-data-integration.md
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Aecorsoft](./media/sql-data-warehouse-partner-data-integration/aecorsoft-logo.png) |**Aecorsoft**<br> AecorSoft offers a fast, scalable, and real-time ELT/ETL software solution to help SAP customers bring complex SAP data to Azure Synapse Analytics and the Azure data platform. With full compliance with SAP application layer security, the AecorSoft solution is officially SAP Premium Certified to integrate with SAP applications. AecorSoft's unique Super Delta and Change-Data-Capture features enable SAP users to stream delta data from SAP transparent, pool, and cluster tables to Azure in CSV, Parquet, Avro, ORC, or GZIP format. Besides SAP tabular data, many other business-rule-heavy SAP objects like BW queries and S/4HANA CDS Views are fully supported. |[Product page](https://www.aecorsoft.com/products/dataintegrator)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/aecorsoftinc1588038796343.aecorsoftintegrationservice_adf)<br>|
| ![Alooma](./media/sql-data-warehouse-partner-data-integration/alooma_logo.png) |**Alooma**<br> Alooma is an Extract, Transform, and Load (ETL) solution that enables data teams to integrate, enrich, and stream data from various data silos to an Azure Synapse data warehouse all in real time. |[Product page](https://www.alooma.com/) |
| ![Alteryx](./media/sql-data-warehouse-partner-data-integration/alteryx_logo.png) |**Alteryx**<br> Alteryx Designer provides a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches! Alteryx Designer helps data analysts by combining data preparation, data blending, and analytics – predictive, statistical, and spatial – using the same intuitive user interface. |[Product page](https://www.alteryx.com/partners/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/alteryx.alteryx-designer)<br>|
-| ![Attunity](./media/sql-data-warehouse-partner-data-integration/attunity_logo.png) |**Attunity (CloudBeam)**<br>Attunity CloudBeam provides an automated solution for loading data into an Azure Synapse data warehouse. It simplifies batch loading and incremental replication of data from many sources - SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Product page](http://www.attunity.com/attunity-cloudbeam-for-azure/)<br>[Azure Marketplace](https://aws.amazon.com/marketplace/pp/Attunity-Attunity-CloudBeam/B00B5PB8IM) <br> |
| ![BI Builders (Xpert BI)](./media/sql-data-warehouse-partner-data-integration/bibuilders-logo.png) |**BI Builders (Xpert BI)**<br> Xpert BI helps organizations build and maintain a robust and scalable data platform in Azure faster through metadata-based automation. It extends Azure Synapse with best practices and DataOps, for agile data development with built-in data governance functionalities. Use Xpert BI to quickly test out and switch between different Azure solutions such as Azure Synapse, Azure Data Lake Storage, and Azure SQL Database, as your business and analytics needs change and grow.|[Product page](https://www.bi-builders.com/adding-automation-and-governance-to-azure-analytics/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bi-builders-as.xpert-bi-vm)<br>|
| ![BryteFlow](./media/sql-data-warehouse-partner-data-integration/bryteflow-logo.png) |**BryteFlow**<br> With BryteFlow, you can continually replicate data from transactional sources like Oracle, SQL Server, SAP, MySQL, and more to Azure Synapse Analytics in real time, with best practices, and access reconciled data that is ready-to-use. BryteFlow extracts and replicates data in minutes using log-based Change Data Capture and merges deltas automatically to update data. It can be configured with time series as well. There's no coding for any process (just point and select!) and tables are created automatically on the destination. BryteFlow supports enterprise-scale automated data integration with extremely high throughput, ingesting terabytes of data, with smart partitioning, and multi-threaded, parallel loading.|[Product page](https://bryteflow.com/data-integration-on-azure-synapse/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/bryte.bryteflowingest-azure-standard?tab=Overview)<br>|
| ![CData](./media/sql-data-warehouse-partner-data-integration/cdata-logo.png) |**CData Sync - Cloud Data Pipeline**<br>Build high-performance data pipelines for Microsoft Azure Synapse in minutes. CData Sync is an easy-to-use, go-anywhere ETL/ELT pipeline that streamlines data flow from more than 200 enterprise data sources to Azure Synapse. With CData Sync, users can easily create automated continuous data replication between Accounting, CRM, ERP, Marketing Automation, On-Premises, and cloud data.|[Product page](https://www.cdata.com/sync/to/azuresynapse/?utm_source=azuresynapse&utm_medium=partner)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/cdatasoftware.cdatasync?tab=Overview)<br>|
To create your data warehouse solution using the dedicated SQL pool in Azure Syn
| ![Matillion](./media/sql-data-warehouse-partner-data-integration/matillion-logo.png) |**Matillion**<br>Matillion is data transformation software for cloud data warehouses. Only Matillion is purpose-built for Azure Synapse, enabling businesses to achieve new levels of simplicity, speed, scale, and savings. Matillion products are highly rated and trusted by companies of all sizes to meet their data integration and transformation needs. Learn more about how you can unlock the potential of your data with Matillion's cloud-based approach to data transformation.| [Product page](https://www.matillion.com/technology/cloud-data-warehouse/microsoft-azure-synapse/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/matillion.matillion-etl-azure-synapse?tab=Overview) |
| ![oh22 HEDDA.IO](./media/sql-data-warehouse-partner-data-integration/heddaiowhitebg-logo.png) |**oh22 HEDDA<span></span>.IO**<br>oh22's HEDDA<span></span>.IO is a knowledge-driven data quality product built for Microsoft Azure. It enables you to build a knowledge base and use it to perform various critical data quality tasks, including correction, enrichment, and standardization of your data. HEDDA<span></span>.IO also allows you to do data cleansing by using cloud-based reference data services provided by reference data providers or developed and provided by you.| [Product page](https://github.com/oh22is/HEDDA.IO)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/oh22.hedda-io) |
| ![Precisely](./media/sql-data-warehouse-partner-data-integration/precisely-logo.png) |**Precisely**<br>Precisely Connect ETL enables extract, transform, and load (ETL) of data from multiple sources to Azure targets. Connect ETL is an easy-to-configure tool that doesn't require coding or tuning. ETL transformation can be done on the fly. It eliminates the need for costly database staging areas or manual pushes, allowing you to create your own data blends with consistent sustainable performance. Import legacy data from multiple sources including mainframe DB2, VSAM, IMS, Oracle, SQL Server, Teradata, and write them to cloud targets including Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage. By using the high-performance Connect ETL engine, you can expect optimal performance and consistency.|[Product page](https://www.precisely.com/solution/microsoft-azure)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/syncsort.dmx) |
+| ![Qlik Data Integration](./media/sql-data-warehouse-partner-business-intelligence/qlik_logo.png) |**Qlik Data Integration**<br>Qlik Data Integration provides an automated solution for loading data into Azure Synapse. It simplifies batch loading and incremental replication of data from many sources: SQL Server, Oracle, DB2, Sybase, MySQL, and more. |[Product page](https://www.qlik.com/us/products/data-integration-products)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qlik.qlik_data_integration_platform) <br> |
| ![Qubole](./media/sql-data-warehouse-partner-data-integration/qubole_logo.png) |**Qubole**<br>Qubole provides a cloud-native platform that enables users to conduct ETL, analytics, and AI/ML workloads. It supports different kinds of open-source engines - Apache Spark, TensorFlow, Presto, Airflow, Hadoop, Hive, and more. It provides easy-to-use end-user tools for data processing from SQL query tools, to notebooks, and dashboards that use powerful open-source engines.|[Product page](https://www.qubole.com/company/partners/partners-microsoft-azure/)<br> [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qubole-inc.qubole-data-service?tab=Overview) |
| ![Segment](./media/sql-data-warehouse-partner-data-integration/segment_logo.png) |**Segment**<br>Segment is a data management and analytics solution that helps you make sense of customer data coming from various sources. It allows you to connect your data to over 200 tools to create better decisions, products, and experiences. Segment will transform and load multiple data sources into your warehouse for you using its built-in data connectors.|[Product page](https://segment.com/)<br> |
| ![Skyvia](./media/sql-data-warehouse-partner-data-integration/skyvia_logo.png) |**Skyvia (data integration)**<br>Skyvia data integration provides a wizard that automates data imports. This wizard allows you to migrate data between different kinds of sources - CRMs, application databases, CSV files, and more. |[Product page](https://skyvia.com/)<br> |
time-series-insights How To Create Environment Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/time-series-insights/how-to-create-environment-using-cli.md
az tsi environment gen2 create --name "my-tsi-env" --location eastus2 --resource
You can use the Azure CLI to delete an individual resource, such as a Time Series Insights Environment, or delete a Resource Group and all its resources, including any Time Series Insights Environments.
-To [delete a Time Series Insights Environments](/cli/azure/ext/timeseriesinsights/tsi/environment?view=azure-cli-latest#ext_timeseriesinsights_az_tsi_environment_delete), run the following command:
+To [delete a Time Series Insights environment](/cli/azure/ext/timeseriesinsights/tsi/environment#ext_timeseriesinsights_az_tsi_environment_delete), run the following command:
```azurecli-interactive
az tsi environment delete --name "my-tsi-env" --resource-group $rg
```
-To [delete the storage account](/cli/azure/storage/account?view=azure-cli-latest#az_storage_account_delete), run the following command:
+To [delete the storage account](/cli/azure/storage/account#az_storage_account_delete), run the following command:
```azurecli-interactive
az storage account delete --name $storage --resource-group $rg
```
virtual-desktop Partners https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/partners.md
- Title: Windows Virtual Desktop partner integrations - Azure
-description: Learn about Windows Virtual Desktop's partners and access documentation about how to integrate with them.
-- Previously updated : 09/11/2020---
-# Windows Virtual Desktop partner integrations
-
-This article lists approved partner providers and independent software vendors for Windows Virtual Desktop.
-
-## Citrix
-
-![Citrix logo](./media/partners/citrix.png)
-
-Citrix is an approved provider that offers enterprises centralized hybrid management of virtual apps and desktops workloads in Azure, side by side with on-premises deployments. Citrix Workspace with the Virtual Apps and Desktops service allows users to access apps and desktops from any device, leveraging the advanced Citrix HDX protocol to deliver a high definition experience from anywhere.
-
-Citrix extends the value of Windows Virtual Desktop with robust enterprise tools to improve user density and performance, provision workloads on demand, and simplify image and application management. IT can optimize costs with intelligent scaling tools, while delivering an incredible user experience that's field-tested against the toughest applications across industries. Additionally, Citrix Managed Desktops is a Windows Virtual Desktop-enabled desktops-as-a-service program that provides a simple, cloud-based management solution for delivering virtual apps and desktops to any device.
--- [Go to the partner website](https://more.citrix.com/wvd).-
-## VMware
-
-![VMware Logo](./media/partners/vmware.png)
-
-VMware Horizon Cloud on Microsoft Azure is a native cloud service that lets organizations quickly deploy remote desktops and applications from their existing Microsoft Azure subscriptions while leveraging all the features of VMware Horizon. Horizon Cloud uses the Horizon Control Plane to provide a single management interface for all Horizon environments, on-premises or in the cloud. This enables hybrid desktop virtualization and lets customers move their workloads to Azure at their own pace.
-
-As a Windows Virtual Desktop approved provider, VMware can help customers that want to use Windows Virtual Desktop while still enjoying the additional functionality that comes with VMware Horizon, such as integrated and easy-to-use power management, cloud-based monitoring, and the Blast Extreme protocol. These features adapt to changing network conditions on the fly to provide a consistently excellent user experience. VMware Horizon Cloud also comes with VMware App Volumes and Dynamic Environment Manager, which add advanced application and user environment management capabilities that work with MSIX app attach and FSLogix.
--- [Go to the partner website](https://www.vmware.com/products/horizon-cloud-virtual-desktops.html).-- [Read VMware Horizon Cloud technical documentation](https://techzone.vmware.com/mastering-horizon-cloud-microsoft-azure).-
-## 10ZiG
-
-![10ZiG logo](./media/partners/10zig.png)
-
-10ZiG Technology, with cutting-edge Thin and Zero Client hardware and software, is a longstanding partner with Microsoft and a dedicated Microsoft Azure and Windows Virtual Desktop partner. 10ZiG Windows 10 IoT-based Thin Clients are powerful, reliable, and affordable endpoints for all Windows Virtual Desktop multi-users. 10ZiG Manager Software provides exceptional management and deployment without license limitations at no additional cost. The 10ZiG Tech Team, Advance Warranty Program, and no-hassle demos are a one-stop Windows Virtual Desktop multi-session support solution in the cloud.
-
-10ZiG's world-market leadership in Thin and Zero Client endpoint devices and management software for virtual desktops is exemplified by how they work for their customers. Its Thin Client hardware comes with thoughtfully constructed benefit features and options designed to ensure customers receive the right Client devices based on their needs. 10ZiG customizes its devices to fit into customer environments with Windows-based and Linux-based Clients that provide the best possible performance in virtual desktops, both inside and outside the cloud.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4FaeR).-- [Go to the partner website](https://www.10zig.com/about/microsoft-windows-virtual-desktop).-
-## Automai
-
-![Automai logo](./media/partners/automai.png)
-
-You can use Automai's robotic automation platform to test key business processes in a Windows Virtual Desktop environment before your deployment goes live.
-
-With Automai's ScenarioBuilder tool and GUI-based workflow engine, IT teams can record real end-user workflows and automatically translate them into scripts. Automai then uses bots running processes from individual desktops to emulate end-user activity in a simulation and report the results. This greatly simplifies testing processes so that IT admins can stress-test even the most complex scenarios.
-
-Once you're ready for launch, you can use all the workflow scripts you created for load testing to continuously monitor performance in production. Automai's bots can do more than just availability monitoring. The bots can also test end-user workflows from key locations, taking screenshots and collecting error reports in real time. This leads to a more proactive than reactive approach to bug fixes for Windows Virtual Desktop applications.
-
-Automai lets you use the same scripts for performance testing, functional testing, performance monitoring, and even robotic process automation, all on one platform.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4B76N).-- [Go to the partner website](https://www.automai.com/windows-virtual-desktop-performance-testing/).-
-## Cloudhouse
-
-![Cloudhouse logo](./media/partners/cloudhouse.png)
-
-Cloudhouse is a Windows Virtual Desktop value-added services provider that offers customers a turnkey application migration service that can move all applications, including ones that are incompatible with modern Windows operating systems, to the Windows Virtual Desktop environment, allowing customers to truly leverage multi-session Windows 10.
-
-By leveraging proven Cloudhouse containerization technology, the Cloudhouse service takes all applications, including ones designed for Windows XP, Windows 7, or Windows 8, and deploys them to a modern Windows Virtual Desktop without needing to change code or impact user experience. Cloudhouse further adds to the value of Windows Virtual Desktop by isolating applications from the underlying operating system, allowing Windows Servicing updates to be rolled out without affecting the containerized application.
--- [Go to the partner website](https://cloudhouse.com/resources/migrate-everything-to-windows-10-on-microsoft-windows-virtua).-
-## CloudJumper
-
-![CloudJumper Logo](./media/partners/cloudjumper.png)
-
-CloudJumper is a Windows Virtual Desktop value-added services provider that equips solution providers and enterprise IT with software to provision and manage Windows Virtual Desktop environments holistically. With CloudJumper software, IT can manage every layer of a Windows Virtual Desktop deployment. Delivery of workloads and applications is automated, ensuring that users can quickly access their desktop anywhere on any device.
-
-CloudJumper's software, Cloud Workspace Management Suite extends the value of Windows Virtual Desktop by simplifying deployment and ongoing administration tasks in Azure. From a single pane of glass, IT can provision, manage, and optimize infrastructure for user workspaces. CloudJumper's Simple Script Triggering Engine integrates with IT service platforms to automate tasks involved in provisioning Windows Virtual Desktop. Additionally, CloudJumper APIs allow further extensibility and integration with other enterprise systems like ServiceNow and BMC Ready.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3p0Mg).-- [Go to the partner website](https://cloudjumper.com/wvd/).-
-## ControlUp
-
-![ControlUp Logo](./media/partners/controlup.png)
-
-ControlUp is a Windows Virtual Desktop value-added services provider that enables IT teams to monitor, troubleshoot, analyze, and directly remediate problems in their on-premises, hybrid cloud, and cloud infrastructure in real time from a single console. ControlUp's analytics and management platform also allows IT to proactively automate fixes for a rapidly growing set of use cases.
-
-When used with Windows Virtual Desktop, ControlUp provides additional capabilities to optimize Windows Virtual Desktop environments and the end-user experience. From the ControlUp console, IT gets end-user environment visibility to effectively monitor and troubleshoot performance issues. An intuitive dashboard provides insights and analytics for virtual desktop deployments, as well as options for automated reporting enriched with community benchmarks. ControlUp can manage multiple data sources and types, organizing them in high-performance data sets aggregated across compute, storage, and Windows Virtual Desktop infrastructure, allowing granular visibility from a single pane of glass.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3PUit).-- [Go to the partner website](https://www.controlup.com/solutions/ms_wvd/).-
-## Dell
-
-![Dell logo](./media/partners/dell.png)
-
-Dell Technologies' thin clients are optimized to access Microsoft Azure and Windows Virtual Desktop services. Capable of meeting the needs from collaborative knowledge workers up to graphics-intensive power users, Wyse thin clients deliver a high-quality computing experience to take full advantage of the growing number of apps and content. Ideal for space-constrained environments, Wyse thin clients adapt to the way people work with versatile form factors and a wide array of choices for mounting options.
-
-Wyse thin clients are designed with security in mind with limited attack surfaces, support for security compliance standards, and advanced multi-factor authentication solutions. Deploy highly secure thin clients with Windows 10 IoT Enterprise and Dell-added security features. Given secure, HTTPS-based communications and active directory authentication for role-based administration, Wyse Management Suite keeps Wyse endpoints always up to date, and the mobile app for WMS Pro allows IT to view critical alerts and send real-time commands with one tap at any time.
--- [Go to the partner website](https://www.delltechnologies.com/en-us/wyse/index.htm#scroll=off&overlay=//www.dellemc.com/en-us/collaterals/unauth/brochures/products/thin-clients/Wyse_Windows_Embedded_Standard_thin_clients_brochure.pdf).-
-## deviceTRUST
-
-![deviceTRUST Logo](./media/partners/devicetrust.png)
-
-deviceTRUST is a Windows Virtual Desktop value-added services provider that contextualizes the corporate enterprise. It allows users the freedom to access their Windows Virtual Desktop from any location, on any device, over any network, while giving IT departments the information and control they need to meet their governance requirements.
-
-deviceTRUST extends the value of Windows Virtual Desktop with their contextual security technology. deviceTRUST enables conditional access for a secure Windows Virtual Desktop access, conditional application access within Windows Virtual Desktop and to apply conditional Windows Virtual Desktop policies without any additional infrastructure. Using deviceTRUST enables a mobile, flexible workspace that meets all security, compliance, and regulatory requirements.
--- [Go to the partner website](https://devicetrust.com/).-
-## Ekran System
-
-![Ekran System Logo](./media/partners/ekran.png)
-
-Ekran System is a Windows Virtual Desktop value-add partner that lets IT teams monitor all remote user activity on Microsoft Azure virtual machines. With Ekran System, you can record on-screen activity for every user session in published applications or virtual desktops while collecting a wide range of context-rich metadata, such as application names, active window titles, visited URLs, and keystrokes. Advanced features offer in-depth visibility and quick incident response times, making Ekran System an efficient insider threat management and compliance solution.
-
-The unique floating endpoint licensing of Ekran System clients is automated to support dynamically changing virtual desktops. Ekran System lets you automatically unassign licenses from deleted non-persistent virtual desktops and remove them from your database. Ekran System seamlessly integrates with Azure Active Directory and Azure Sentinel.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4yqY8).-- [Go to the partner page](https://www.ekransystem.com/product/supported-platforms/windows-virtual-desktop-monitoring).-
-## FabulaTech
-
-![FabulaTech logo](./media/partners/fabulatech.png)
-
-FabulaTech seamlessly integrates with Windows Virtual Desktop clients. Once installed, FabulaTech software automatically starts working when you establish a connection with a remote desktop.
-
-When a user signs in to their virtual desktop, the FabulaTech software creates a virtual device. For example, you can create a virtual webcam, scanner, or fingerprint reader. Any apps running in a remote session can access the virtual device as if it was a physical device. You can configure the virtual device in Windows Virtual Desktop with the System Tray Icon menu, which means you can also use this solution on thin clients. On top of that, all communication happens over the existing remote desktop connection, which means the firewall is set up for you. Everything works right out of the box.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4B4zO).-- [Go to the partner website](https://www.fabulatech.com/partners/microsoft-windows-virtual-desktop/).-
-## Flexxible IT
-
-![Flexxible IT Logo](./media/partners/flexxible.png)
-
-Flexxible IT is a Windows Virtual Desktop value-add partner that offers organizations the ability to rapidly scale, monitor, and efficiently manage Windows Virtual Desktop and Citrix Workspace infrastructure. Flexxible|SUITE allows IT admins to intelligently provision and manage Windows Virtual Desktop workloads on-premises and hosted in Azure.
-
-Flexxible IT's technology extends the value of both native Windows Virtual Desktop and Citrix Workspace by automating common processes to simplify infrastructure configuration, desktop provisioning, and day-to-day management. With no need for complex PowerShell scripts or time-consuming manual processes, SUITE provides scalable desktop deployment, extensive monitoring and reporting, and secure delegated management. These features allow you to focus on delivering enhanced levels of service and a quality Windows Virtual Desktop experience for your users.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4yj7A).-- [Go to the partner website](https://www.flexxible.com/suite-for-windows-virtual-desktop).-
-## HP
-
-![HP logo](./media/partners/hp.png)
-
-HP Thin Client is an approved and verified partner of Microsoft's Azure and Windows Virtual Desktop services. HP Thin Clients with Windows 10 IoT Enterprise offer out-of-box support for Azure-based workloads and Windows Virtual Desktop hosted desktops. The hardware and OS are optimized to provide a best-in-class experience that effectively delivers remote workloads while reducing the OS footprint, hardware, and maintenance costs.
-
-As HP looked at industry trends, customer challenges, and the solutions virtualization offered during the development process, they were inspired to invent the ideal cloud endpoint using a four-pillar value proposition: design, manageability, security, and versatility. Every HP Thin Client is purpose-built with IT decision makers in mind. HP Thin Clients are long-lasting, secure, easy to deploy and manage, and powerful so you can effortlessly transition to VDI or cloud computing. HP's versatile portfolio gives you the freedom to choose the modern endpoint solution that's right for you.
--- [Go to the partner website](https://hp.com/go/thin).-
-## IGEL
-
-![IGEL logo](./media/partners/igel.png)
-
-IGEL is an approved and verified partner of Microsoft Azure and Windows Virtual Desktop services. IGEL offers IGEL OS, the next-gen edge OS for cloud workspaces designed to access virtual apps, desktops, and cloud workspaces from one or more user devices with a lightweight, simple, and secure Linux-based endpoint. A platform-independent software solution, IGEL OS and its server-based management and control software, IGEL Universal Management Suite (UMS), comprise an endpoint management and control solution that frees enterprises to take full advantage of Azure-based cloud instances and Windows Virtual Desktop desktops, including economical multi-session Windows Virtual Desktop, while reducing endpoint hardware and endpoint device management and operations costs.
-
-IGEL OS supports all popular virtual apps, desktops, and cloud workspace client protocols from Citrix, Microsoft, and VMware. It includes integrated technologies from 85 peripheral, interface, and protocol partners to help organizations quickly adopt Windows Virtual Desktop services into their own unique user environments. IGEL OS is a read-only, modular endpoint OS, which helps protect it from tampering. It now also includes a complete "chain of trust" that verifies the integrity of all key major processes running on the endpoint, from the endpoint hardware (some selected models) or UEFI process all the way to the Azure cloud and Windows Virtual Desktop services. With IGEL OS, enterprises can subscribe to Windows Virtual Desktop from the Azure cloud with full confidence in the integrity, security, and manageability of their users' endpoint devices.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4vviO).-- [Go to the partner website](https://www.igel.com/igel-os-universal-desktop-operating-system/).-
-## Ivanti
-
-![Ivanti Logo](./media/partners/ivanti.png)
-
-Ivanti User Workspace Manager is a Windows Virtual Desktop value-added service that eases desktop deployment and management by separating user data from the desktop for seamless portability. With Ivanti, users can deliver complex projects like migrating to Windows 10, adopting Microsoft 365, or moving services to the cloud faster.
-
-When used with Windows Virtual Desktop, Ivanti User Workspace Manager provides simple contextual management of the user desktop experience, eliminating long sign-in times and eradicating group policy nightmares. Ivanti User Workspace Manager out-of-the-box templates simplify installation for users through agents and the existing console. Ivanti User Workspace Manager delivers responsive, secure desktops that users love, saving money on servers, managing users more effectively, and reducing endpoint security risk.
--- [Go to the partner website](https://www.ivanti.com/products/user-workspace-manager).-
-## Lakeside Software
-
-![Lakeside Software Logo](./media/partners/lakeside.png)
-
-Lakeside Software is a Windows Virtual Desktop value-added services provider that equips IT teams with software for monitoring performance and assessing Azure migration readiness of user workloads. With this software, IT gains clearer visibility into application usage and resource consumption to streamline the migration process. Lakeside Software collects data at every workspace to create a comprehensive report on user environments, enabling quick troubleshooting and optimization of assets.
-
-Lakeside Software's digital experience monitoring solution, SysTrack, can help provide a great user experience by tracking performance and identifying ideal workloads for migration. SysTrack works to extend the value of Windows Virtual Desktop through right-sizing assessments and continuous monitoring of user environments.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3oL8Q).-- [Go to the partner website](https://www.lakesidesoftware.com/assessments/wvd).-
-## Lenovo
-
-![Lenovo Logo](./media/partners/lenovo.png)
-
-Lenovo Thin Clients give your network the flexibility of a client computer running from your server, but with native PC capability and power. Lenovo Thin Clients give Windows Virtual Desktop deployments blazing performance and intuitive manageability, elevating your company's network to the next level of reliability. Each Thin Client is equipped with a Lenovo Terminal Manager license at no additional cost giving organizations seamless and cost-effective hardware management and deployment options. They're also small and versatile, making them easy to add to existing Lenovo deployments.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4FnaB).-- [Go to the partner website](https://www.lenovo.com/us/en/desktops-and-all-in-ones/thinkcentre/m-series-thin-clients/c/M-Series-Thin-Clients).-
-## Liquidware
-
-![Liquidware Logo](./media/partners/liquidware.png)
-
-Liquidware is a Windows Virtual Desktop value-added services provider that delivers software that manages and optimizes Windows Virtual Desktop deployment. The Liquidware Essentials suite provides application delivery through layering, user environment management, and key user experience visibility and diagnostics. With solutions for assessing migration readiness and analyzing usage metrics, Liquidware provides a seamless virtual desktop experience for end users.
-
-Liquidware Essentials extends the value of Windows Virtual Desktop by efficiently harvesting user profiles and gathering key user data to streamline migration of user environments to Azure. Additionally, Liquidware Essentials simplifies image management by unifying user profiles and layering apps based on configurable rights management settings.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3oSY1).-- [Go to the partner website](https://www.liquidware.com/solutions/solutions-platform/microsoft).-
-## Liquit
-
-![Liquit logo](./media/partners/liquit.png)
-
-Liquit application aggregation and delivery software enables enterprises and service providers to connect to and combine with all workspace back-ends (Citrix, VMWare, Windows Virtual Desktop, RDP, and Legacy) and deliver a customized and consistent customer experience, regardless of where the customer's applications reside. When a customer publishes the smart icon, Liquit decides where to start the application based on the customer's location, device, and profile rights.
-
-As a certified integration partner, Liquit helps accelerate the transition to the cloud without a rip-and-replace delay. Windows Virtual Desktop can easily connect to an existing environment, create a workspace, and deliver the desktop. You can then take your time migrating off old platforms and make changes on the back-end without your users noticing. Gain a consistent end-user experience and flexible infrastructure, and maintain control of your applications no matter where they are.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4yol8).-- [Go to the partner website](https://www.liquit.com/wvd/).-
-## Login VSI
-
-![Login VSI Logo](./media/partners/loginvsi.png)
-
-Login VSI is a Windows Virtual Desktop value-added services provider and Microsoft partner delivering software for application performance testing in Windows Virtual Desktop environments. Customers moving their on-premises business services to Windows Virtual Desktop use Login VSI Enterprise Edition to evaluate and maintain optimal performance, scalability, and availability of Windows 10 Enterprise multi-session, Windows 10 Enterprise, and Windows 7 enabled with their business critical applications.
--- [Go to the partner website](https://www.loginvsi.com/use-cases-initiatives/windows-virtual-desktop).-
-## Nasuni
-
-![Nasuni Logo](./media/partners/nasuni.png)
-
-Nasuni Corp., the leading provider of cloud file services and a top Azure global ISV partner, offers the Nasuni software-as-a-service platform as the modern file storage solution for modern virtual desktops. Nasuni, when combined with Azure Blob Storage, consolidates primary file storage (NAS), file backup, disaster recovery, and cross-region file synchronization in one unified solution. With Nasuni, enterprises can deploy Windows Virtual Desktops for more use cases and in more Azure regions, simplify administration, and ensure business continuity.
-
-Being a modern cloud VDI solution, Windows Virtual Desktop requires modern cloud file storage. Traditionally, VDI file storage has been based on Network-Attached Storage (NAS) and file server hardware located on-premises, and the accompanying required technology to provide file backups, restoration, and disaster recovery. These traditional approaches are expensive, complex to maintain and administer, and don't scale easily. They also introduce latency if used with a modern cloud VDI solution like Windows Virtual Desktop due to the physical distance and the slower WAN connections between the desktops based in Azure and the file storage based on-premises. Nasuni, a file services platform built specifically for Azure, offers unlimited file storage capacity and high-performance file access. Nasuni can be co-located with Windows Virtual Desktop in the same Azure regions to deliver economical, high-performance file access to a single global namespace. Nasuni offers Windows Virtual Desktop file storage at a fraction of the cost of traditional NAS and Windows file servers and includes built-in backups and disaster recovery to further reduce costs and free up IT resources.
--- [Go to the partner website](https://www.nasuni.com/partner/microsoft/#wvd).-- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4FaeS).-
-## Nerdio
-
-![Nerdio Logo](./media/partners/nerdio.png)
-
-Nerdio is an Azure IT automation platform that makes it easy to deploy and manage Windows Virtual Desktop. Nerdio provides the knowledge and technology to deploy, price, package, manage, and optimize customers' Azure deployments, with Windows Virtual Desktop front and center.
-
-Nerdio extends the value of Windows Virtual Desktop by making it easy to provision Azure resources and streamline deployment. With Nerdio for Azure, IT can automatically deploy and manage a complete Azure environment, including Windows Virtual Desktop, in under two hours.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3p0Mh).-- [Go to the partner website](https://getnerdio.com/windows-virtual-desktop/).-
-## Nexthink
-
-![Nexthink Logo](./media/partners/nexthink.png)
-
-Nexthink is a Windows Virtual Desktop value-added provider that helps organizations confidently measure, manage, and improve their employees' digital experience and productivity. With a constant read on the pulse of digital employee experience, IT can continuously improve technology's ability to engage, empower, and delight people, no matter where they work.
-
-By providing solutions with visible workplace resources, Nexthink gives you context and insight into your user base. Nexthink's powerful experience management platform helps IT teams ensure that migrations to Windows Virtual Desktop are planned and put into action in a timely and successful manner.
--- [Go to the partner website](https://www.nexthink.com/initiative/desktop-virtualization/).-- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4FaeT).-
-## Numecent
-
-![Numecent Logo](./media/partners/numecent.png)
-
-Numecent is a Windows Virtual Desktop value-added services provider that significantly reduces the total operating costs through rapid onboarding and migration of complicated or incompatible Windows apps in Windows Virtual Desktop environments. Numecent also minimizes the amount of configuration that users need to do, reduces application updates, and simplifies complex processes. Because Numecent Cloudpaging supports more applications seamlessly than any other application delivery tool, it reduces time and IT workloads in environments with a diverse set of applications.
-
-When used with Windows Virtual Desktop, Cloudpaging further reduces costs by completing software asset lifecycle from deployment to upgrading, metering, and removing applications. Cloudpaging simplifies image management by dynamically provisioning apps as needed in real time to the Windows Virtual Desktop deployments. Cloudpaging helps applications run without administration or intervention through the periodic Windows 10 updates. Cloudpaging also reduces the licensing cost of expensive applications by enabling more efficient deployment and usage of these applications.
--- [Go to the partner website](https://www.numecent.com/partners/cloudpaging-for-windows-applications-windows-virtual-desktop/).-
-## PolicyPak
-
-![PolicyPak Logo](./media/partners/policypak.png)
-
-PolicyPak Software is a Windows Virtual Desktop partner that performs total settings management for applications, desktop, browsers, Java, and security settings. PolicyPak keeps your desktop, system, and security settings in compliance. PolicyPak enhances the value of Windows Virtual Desktop by adding a suite of components to enhance Windows' built-in administration. Use your existing Active Directory Group Policy and/or Windows Intune to deliver PolicyPak's settings and increase administrators' ability to manage their Windows 10 machines.
-
-The top use cases for PolicyPak are to remove local admin rights and overcome UAC prompts, block ransomware, manage multiple browsers, manage Internet Explorer's Enterprise and Compatibility modes, reduce the number of GPOs, manage Windows 10 File Associations, manage Windows 10 Start Menu and Taskbar, and manage Windows 10 Features and Optional features.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4vviN).-- [Go to partner website](https://www.policypak.com/integration/policypak-windows-virtual-desktop.html).-
-## PrinterLogic
-
-![PrinterLogic Logo](./media/partners/printerlogic.png)
-
-PrinterLogic is a Windows Virtual Desktop value-added service provider platform that empowers IT professionals to eliminate all print servers and deliver a highly available serverless printing infrastructure. PrinterLogic extends the value of Windows Virtual Desktop and Azure by making it easy to manage centrally and deploy printer objects to any printer or endpoint OS.
-
-Available as SaaS or as a web stack in your own private cloud, the PrinterLogic platform ensures users always have the right printers they need in their virtual sessions based on user ID, device name, or location. This functionality is complemented by a full suite of enterprise print management features such as print tracking and reporting, mobile printing, and secure badge release printing.
--- [Go to partner website](https://www.printerlogic.com).-
-## Printix
-
-![Printix Logo](./media/partners/printix.png)
-
-Printix is a Windows Virtual Desktop value-added service provider that automates user connection to office printing resources. As the missing piece in your customer Azure migration, Printix is the most cost-effective service available to remove infrastructure and IT tasks associated with supporting and optimizing print workflow for every user, regardless of location.
-
-Printing is a fundamental task in just about every office and small business environment. In order to take full advantage of Windows Virtual Desktop and provide a great user experience, it's essential to ensure your users can connect to printers with minimum effort and maximum reliability. With Printix, you can get the most out of Windows Virtual Desktop through single sign-on (SSO), silent configuration, regular updates, and continuous monitoring of your print environment.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4aiK2).-- [Go to the partner website](https://www.printix.net/printix-for-windows-virtual-desktop).-
-## RDPSoft
-
-![RDPSoft logo](./media/partners/rdpsoft.png)
-
-RDPSoft is a Windows Virtual Desktop partner that provides powerful and inexpensive monitoring, management, and reporting solutions. Their Remote Desktop Commander offerings allow IT professionals to gain insight into the health, performance, user activity, licensing, and security of their Windows Virtual Desktop deployments.
-
-RDPSoft's Remote Desktop Commander solutions enhance Windows Virtual Desktop administration. Premium Management features simplify delegation of Windows Virtual Desktop management tasks to support desk staff by providing remote assistance, user session, and process management. At the same time, the Remote Desktop Commander Suite collects rich metrics about per-user performance and load, user activity and auditing, Windows Virtual Desktop connection quality (latency and bandwidth), licensing, and security into a central Azure SQL Database instance for review. With RDPSoft, rich historic reporting and comprehensive dashboards are just a click away.
--- [Go to the partner website](https://www.rdpsoft.com/products/remote-desktop-commander/suite/).-
-## Rimo3
-
-![Rimo3 logo](./media/partners/rimo3.png)
-
-Rimo3 enhances the Windows Virtual Desktop experience with its easy-to-use, scalable, and cloud-based Application Modernization Platform.
-
-For IT teams, Rimo3 helps discover, modernize, and manage application workloads for the move to Windows Virtual Desktop. Users can automatically scan their application portfolio to discover candidates suitable for onboarding into Windows Virtual Desktop or modernizing to MSIX.
-
-Users can automate pre-testing their applications, converting apps to MSIX, and post-conversion testing while applying automated remediation if the apps don't convert properly. The result is a modernized, deployment-ready MSIX package. With each Windows Virtual Desktop feature release and update, users can fully test apps automatically prior to deployment, providing complete confidence and ongoing management.
-
-For managed service providers, Rimo3 helps extend their managed services capability to improve margins, bridge project-based revenue to subscription-based recurring revenue, and add value for customers who need to modernize, move to Windows Virtual Desktop, and manage regular updates in their desktop workspaces.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4yj7B).-- [Go to the partner website](https://rimo3.com/windows-virtual-desktop/).-
-## sepago
-
-![sepago logo](./media/partners/sepago.png)
-
-sepago was founded in 2002 by four friends in Cologne. Today, sepago is an IT management consultancy with a steadily increasing number of sepagists, with locations throughout Germany in Cologne, Munich, and Hamburg. sepago are experts on automated application provisioning, virtualization, cloud solutions, and IT security. sepago supports medium-sized and large companies on their way to digital transformation and ensures that users can work securely and efficiently.
-
-sepago's innovation and development lab builds smart solutions using big data and AI technologies. These solutions focus on improving the business, user experience, and administrations of partner products like Windows Virtual Desktop.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4qMsm).-- [Go to the partner website](https://www.sepago.de/en/).-
-## SSH2
-
-![SSH2 Logo](./media/partners/ssh2.png)
-
-SSH2 is a Windows Virtual Desktop value-added services provider that equips your IT teams with software to speed up application delivery from the current platform to Windows Virtual Desktop on Azure. SSH2 lets IT accelerate application capture to streamline the migration process. SSH2's appCURE captures running applications on the endpoint and enables updating and remediation, creating a step change in the speed at which end-user environments can be delivered.
-
-appCURE captures application details from running applications to ensure all points that may impact your end users' applications are understood. appCURE then updates and delivers them to your new Windows Virtual Desktop environment. By capturing all application integration points in your current environment, appCURE helps optimize IT resources and plan your migrations better and quicker than ever before, enabling organizations to get to production faster.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4Fs38).-- [Go to the partner page](#ssh2).-
-## ThinPrint
-
-![ThinPrint Logo](./media/partners/thinprint.png)
-
-ThinPrint is a Windows Virtual Desktop value-added services provider that delivers simple and secure cloud printing from Windows Virtual Desktop. With its services and software, existing print infrastructure can be utilized to print documents from the cloud. ThinPrint enables connection to both local and network printers, making it easy for users to print while at the office or working remotely.
-
-ThinPrint's ezeep solution extends the value of Windows Virtual Desktop by enabling the connection to existing enterprise print infrastructure. ezeep gives users control over printing in the enterprise no matter where they are. Using ezeep, users can bridge the gap between Windows Virtual Desktop and printing hardware.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3oYas).-- [Go to partner website](https://www.ezeep.com/wvd-printing).-
-## Tricerat
-
-![Tricerat Logo](./media/partners/tricerat.png)
-
-Tricerat offers a superior print management solution for Windows Virtual Desktop and other desktop platforms. Tricerat software has robust functionality, offering a better experience for both users and administrators. Administrators gain efficiencies through complete driver management, simplified deployment of print queues, and consistent management across hybrid platforms. User experience improves with shorter sign-in times, intelligent print queues based on user, device, and network location, and self-service options for quick printer selection.
-
-With Tricerat, printing is seamless in Windows Virtual Desktop and beyond. Tricerat software allows administrators to easily connect on-premises printers to the cloud, extending enterprise print management from traditional environments to new, modern workspaces.
--- [Go to the partner website](https://www.tricerat.com/microsoft-printing).-
-## vast limits
-
-![vast limits logo](./media/partners/vast-limits.png)
-
-vast limits, the uberAgent company, provides visibility into Windows Virtual Desktop deployments. It creates software for enterprise IT because it knows how IT professionals think and which tools they need. Its products help IT pros be more efficient by giving them exactly what they need to get their jobs done; no more, no less.
-
-uberAgent is a monitoring and analytics product designed for end-user computing that doesn't just collect data; it gives customers the information that matters. uberAgent has its own metrics, covering key aspects of user experience, application performance, and endpoint security, telling you everything you need to know about your Windows Virtual Desktop VMs without affecting your systems' user density. uberAgent simplifies troubleshooting, helps with sizing, and provides rich information vital for information security.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4Fs39).-- [Go to the partner website](https://uberagent.com/docs/uberagent/latest/about-uberagent/system-requirements/#windows-virtual-desktop).-
-## Workspace 365
-
-![Workspace 365 logo](./media/partners/workspace-365.png)
-
-Workspace 365 unites all your information (business data, documents, communication and micro apps) and provides access to any local, web, or hosted application in one workspace. It automatically adapts to your role, location, device, browser, and more to provide a personalized workspace. Users get a simplified and consistent experience, no matter what technology lies below the surface. You can integrate your current solutions, such as RDP, Citrix and legacy applications, and move them to Windows Virtual Desktop while maintaining the same user experience. Furthermore, you can integrate all your file locations, such as SharePoint, OneDrive, Teams, and file servers, in one document management app.
-
-With Workspace 365, IT admins can make Windows Virtual Desktop-enabled applications available to people based on permissions. The admin can then add those applications to a shared application group. When the Windows Virtual Desktop application is visible in Workspace 365, users can open it from their workspace without having to sign in again.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4vARh).-- [Go to the partner website](https://workspace365.net/product-tour/hybrid-workspace-365/).-
-## Workspot
-
-![Workspot Logo](./media/partners/workspot.png)
-
-Workspot is a Windows Virtual Desktop value-added services provider that equips enterprises with high-performance desktops and workstations in Azure. With Workspot, infrastructure provisioning is automated, which means users can access their Windows Virtual Desktop environment from anywhere around the world with high availability.
-
-Workspot extends the value of Windows Virtual Desktop by simplifying the provisioning process of cloud desktop infrastructure. With Workspot, resources can be easily scaled up and down to meet the needs of different users and use cases. Workspot can optimize deployments for high-performance GPU workstations necessary for CAD and engineering users, as well as Windows applications and Windows 10 desktops for all business users.
--- [See the joint solution brief](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3oL8P).-- [Go to partner website](https://www.workspot.com/wvd).-
-## Next steps
--- [Learn more about Windows Virtual Desktop](overview.md).-- [Create a tenant in Windows Virtual Desktop](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md).
virtual-desktop Rdp Bandwidth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/rdp-bandwidth.md
Remote Desktop Protocol (RDP) is a sophisticated technology that uses various techniques to perfect the server's remote graphics' delivery to the client device. Depending on the use case, availability of computing resources, and network bandwidth, RDP dynamically adjusts various parameters to deliver the best user experience.
-Remote Desktop Protocol multiplexes multiple Dynamic Virtual Channels (DVCs) into a single data channel sent over different network transports. There are separate DVCs for remote graphics, input, device redirection, printing, and others. WVD Partners may also implement their extensions that use DVC interfaces.
+Remote Desktop Protocol multiplexes multiple Dynamic Virtual Channels (DVCs) into a single data channel sent over different network transports. There are separate DVCs for remote graphics, input, device redirection, printing, and more. Windows Virtual Desktop partners can also implement their own extensions that use DVC interfaces.
+ The amount of data sent over RDP depends on user activity. For example, a user may work with basic textual content for most of the session and consume minimal bandwidth, but then generate a printout of a 200-page document to the local printer. This print job will use a significant amount of network bandwidth. When using a remote session, your network's available bandwidth dramatically impacts the quality of your experience. Different applications and display resolutions require different network configurations, so it's essential to make sure your network configuration meets your needs.
virtual-desktop Teams On Wvd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/teams-on-wvd.md
After installing the WebSocket Service and the Teams desktop app, follow these s
3. Select **Version**.
- If media optimizations loaded, the banner will show you **WVD Media optimized**. If the banner shows you **WVD Media not connected**, quit the Teams app and try again.
+ If media optimizations loaded, the banner will show you **Windows Virtual Desktop Media optimized**. If the banner shows you **Windows Virtual Desktop Media not connected**, quit the Teams app and try again.
4. Select your user profile image, then select **Settings**.
virtual-desktop Troubleshoot Set Up Issues 2019 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/virtual-desktop-fall-2019/troubleshoot-set-up-issues-2019.md
If you're running the GitHub Azure Resource Manager template, provide values for
### Error: vmSubnet not available when configuring virtual networks
-**Cause:** In the WVD Marketplace template, the UI only displays subnets that have at least as many IP addresses available as the total number of VMs specified in the template. The actual number of available IP addresses in the subnet only needs to be equal to the number of new VMs being deployed but this cannot be calculated by the current UI.
+**Cause:** In the Windows Virtual Desktop Marketplace template, the UI only displays subnets that have at least as many IP addresses available as the total number of VMs specified in the template. The actual number of available IP addresses in the subnet only needs to be equal to the number of new VMs being deployed but this cannot be calculated by the current UI.
**Fix:** You can specify a subnet with at least as many IP addresses available as the number of VMs being added by not using the Marketplace UI. Instead, specify the subnet name in the "**existingSubnetName**" parameter when you [redeploy an existing deployment](expand-existing-host-pool-2019.md#redeploy-from-azure) or [deploy using the underlying ARM template from GitHub](create-host-pools-arm-template.md#run-the-azure-resource-manager-template-for-provisioning-a-new-host-pool).
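For example, the following is a minimal sketch of passing that parameter when deploying the GitHub ARM template with the Azure CLI. The resource group name, template URI, and subnet name are placeholders, and a real deployment also needs the template's other required parameters.

```azurecli-interactive
az deployment group create \
  --resource-group <resource-group-name> \
  --template-uri <host-pool-ARM-template-URI> \
  --parameters existingSubnetName=<subnet-with-enough-free-IPs>
```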
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
documentationcenter: ''
-
virtual-machine-scale-sets Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/proximity-placement-groups.md
-+ Last updated 07/01/2019
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-cli.md
- Last updated 03/27/2018
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-portal.md
- Last updated 06/30/2020
virtual-machine-scale-sets Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-powershell.md
- Last updated 11/08/2018
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-template-linux.md
-+ Last updated 03/27/2020
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/quick-create-template-windows.md
-+ Last updated 03/27/2020
virtual-machine-scale-sets Cli Sample Create Simple Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-create-simple-scale-set.md
- Last updated 06/25/2020
virtual-machine-scale-sets Cli Sample Install Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/cli-sample-install-apps.md
- Last updated 03/27/2018
virtual-machine-scale-sets Powershell Sample Create Complete Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/powershell-sample-create-complete-scale-set.md
- Last updated 05/29/2018
virtual-machine-scale-sets Powershell Sample Create Simple Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/powershell-sample-create-simple-scale-set.md
- Last updated 03/27/2018
virtual-machine-scale-sets Powershell Sample Install Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/scripts/powershell-sample-install-apps.md
- Last updated 06/25/2020
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
- Last updated 03/27/2018
virtual-machine-scale-sets Tutorial Install Apps Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/tutorial-install-apps-powershell.md
- Last updated 11/08/2018
virtual-machine-scale-sets Tutorial Install Apps Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/tutorial-install-apps-template.md
- Last updated 03/27/2018
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
-+ Last updated 02/28/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
-+ Last updated 06/26/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-faq.md
- Last updated 06/30/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Instance Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-instance-protection.md
-+ Last updated 02/26/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Maintenance Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-maintenance-notifications.md
-+ Last updated 11/12/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Start https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-start.md
- Last updated 04/26/2019
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
-+ Last updated 02/12/2021
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
-+ Last updated 02/26/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Terminate Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md
-+ Last updated 02/26/2020
virtual-machine-scale-sets Virtual Machine Scale Sets Vs Create https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-vs-create.md
- Last updated 09/09/2019
virtual-machines Disks Enable Customer Managed Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-customer-managed-keys-portal.md
For now, customer-managed keys have the following restrictions:
- For Windows: [Copy a managed disk](./windows/disks-upload-vhd-to-managed-disk-powershell.md#copy-a-managed-disk) -- Only [software and HSM RSA keys](../key-vault/keys/about-keys.md) of sizes 2,048-bit, 3,072-bit, and 4,096-bit are supported; other key types and sizes aren't supported.
- - [HSM](../key-vault/keys/hsm-protected-keys.md) keys require the **premium** tier of Azure Key vaults.
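As a quick illustration (not part of the original article), the following sketch creates a supported 4,096-bit software-protected RSA key with the Azure CLI; the vault and key names are placeholders, and an RSA-HSM key would additionally require a premium-tier key vault.

```azurecli-interactive
# Sketch: create a 4096-bit software-protected RSA key in an existing key vault (placeholder names)
az keyvault key create \
  --vault-name <your-key-vault-name> \
  --name <your-key-name> \
  --kty RSA \
  --size 4096
```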
[!INCLUDE [virtual-machines-managed-disks-customer-managed-keys-restrictions](../../includes/virtual-machines-managed-disks-customer-managed-keys-restrictions.md)] The following sections cover how to enable and use customer-managed keys for managed disks:
virtual-network Tutorial Connect Virtual Networks Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-connect-virtual-networks-portal.md
You can connect virtual networks to each other with virtual network peering. The
If you prefer, you can complete this tutorial using the [Azure CLI](tutorial-connect-virtual-networks-cli.md) or [Azure PowerShell](tutorial-connect-virtual-networks-powershell.md).
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+Before you begin, you require an Azure account with an active subscription. If you do not have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Log in to Azure
When no longer needed, delete the resource group and all resources it contains:
## Next steps
-In this tutorial, you learned how to connect two networks in the same Azure region, with virtual network peering. You can also peer virtual networks in different [supported regions](virtual-network-manage-peering.md#cross-region) and in [different Azure subscriptions](create-peering-different-subscriptions.md#portal), as well as create [hub and spoke network designs](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke#virtual-network-peering) with peering. To learn more about virtual network peering, see [Virtual network peering overview](virtual-network-peering-overview.md) and [Manage virtual network peerings](virtual-network-manage-peering.md).
+> [!div class="nextstepaction"]
+> [Learn more about virtual network peering](virtual-network-peering-overview.md)
+
-To connect your own computer to a virtual network through a VPN, and interact with resources in a virtual network, or in peered virtual networks, see [Connect your computer to a virtual network](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
virtual-network Tutorial Create Route Table Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/tutorial-create-route-table-portal.md
Azure routes traffic between all subnets within a virtual network, by default. Y
This tutorial uses the [Azure portal](https://portal.azure.com). You can also use [Azure CLI](tutorial-create-route-table-cli.md) or [Azure PowerShell](tutorial-create-route-table-powershell.md).
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+## Prerequisites
+
+Before you begin, you require an Azure account with an active subscription. If you do not have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Prerequisites
virtual-network Virtual Networks Udr Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/virtual-networks-udr-overview.md
When there is an exact prefix match between a route with an explicit IP prefix a
To use this feature, specify a Service Tag name for the address prefix parameter in route table commands. For example, in PowerShell you can create a new route to direct traffic sent to an Azure Storage IP prefix to a virtual appliance by using: </br> ```azurepowershell-interactive
-New-AzRouteConfig -Name "StorageRoute" -AddressPrefix ΓÇ£StorageΓÇ¥ -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.100.4"
+New-AzRouteConfig -Name "StorageRoute" -AddressPrefix "Storage" -NextHopType "VirtualAppliance" -NextHopIpAddress "10.0.100.4"
``` The same command for CLI will be: </br>
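The article's CLI example isn't included in this change summary; as a rough sketch using the same values as the PowerShell example (the resource group and route table names are placeholders), it would look something like this:

```azurecli-interactive
# Sketch: create a route whose address prefix is the Storage service tag
az network route-table route create \
  --resource-group <resource-group-name> \
  --route-table-name <route-table-name> \
  --name StorageRoute \
  --address-prefix Storage \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.100.4
```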
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-vpn-faq.md
Previously updated : 03/22/2021 Last updated : 03/29/2021 # VPN Gateway FAQ
This section applies to the Resource Manager deployment model.
[!INCLUDE [vpn-gateway-vnet-vnet-faq-include](../../includes/vpn-gateway-faq-vnet-vnet-include.md)]
+### How do I enable routing between my site-to-site VPN connection and my ExpressRoute?
+
+If you want to enable routing between your branch connected to ExpressRoute and your branch connected to a site-to-site VPN connection, you'll need to set up [Azure Route Server](../route-server/expressroute-vpn-support.md).
+ ### Can I use Azure VPN gateway to transit traffic between my on-premises sites or to another virtual network? **Resource Manager deployment model**<br>
web-application-firewall Tutorial Restrict Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
description: Learn how to restrict web traffic with a Web Application Firewall o
Previously updated : 08/31/2020 Last updated : 03/29/2021
It may take several minutes for the application gateway to be created. After the
In this example, you create a virtual machine scale set that provides two servers for the backend pool in the application gateway. The virtual machines in the scale set are associated with the *myBackendSubnet* subnet. To create the scale set, you can use [az vmss create](/cli/azure/vmss#az-vmss-create).
+Replace \<username> and \<password> with your values before you run this.
+ ```azurecli-interactive az vmss create \ --name myvmss \ --resource-group myResourceGroupAG \ --image UbuntuLTS \
- --admin-username azureuser \
- --admin-password Azure123456! \
+ --admin-username <username> \
+ --admin-password <password> \
--instance-count 2 \ --vnet-name myVNet \ --subnet myBackendSubnet \